VOIP voice prompts

You have that spanking new PBX (Asterisk-based, in our environment), and you've discovered Voice Mail.

  • The voice prompt menu is key (*99 on my setup)
  • Customised voice message prompts are easy
  • Users can set their own passwords
  • Getting that information out (via the internal wiki) is a big win for your team

There's still a little manual work with setting up each voicemail box, and configuring voice mail to email.

The beauty of one neat feature is that it leads to asking where (and why not) that feature can be passed along to something else.

We've set up Automated Call Distribution (aka Call Queues), and I couldn't be bothered managing the voice recordings and prompts. Unfortunately, the company also didn't want to spend money to get the recordings done.

Pull out your Open Source toolkit and follow along as we record our own voice prompts.

Recording Voice Prompts with Audacity and SoX

I followed Benoit Frigon's post and got wonderful results. Requirements:
  • a voice prompt recording
  • Audacity
  • SoX
The first read-through and copy/paste produced poor results, so we went through it again and came up with a much better result and work-flow, with minor adjustments for our own context.
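To give a flavour of the conversion step (a sketch, not Benoit's exact commands; the file names and the 8kHz mono target are assumptions based on what Asterisk typically expects), the SoX end of the work-flow is a one-liner:

# resample the Audacity export to 8kHz mono for the PBX, normalising the level slightly
sox prompt-raw.wav -r 8000 -c 1 prompt.wav norm -3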

Got a job ticket this week to trawl through 7 days of email, and pull out incoming mail for 7 mailboxes.

Two hours after setting up and being ready to trawl, the work was complete (with only 30 minutes of that being actual work; the rest was just hanging around for the computers to do their stuff).

Unfortunately, it took three hours before I could get working, because I'd forgotten some fundamentals about procmail that the original documentation presumed understood. If you really want to understand these notes, please read up on and try a few procmail recipes. Otherwise, we've updated our guide to:

  • trawl the archive using procmail recipes
  • use mutt to bounce the messages to an Outlook client

More Information:


[Ref: Procmail FAQ]

Now we have six months' worth of email, and the archives have grown to around 30GB a month. Someone finally decides they want mail from the archives, and there is a huge amount of mail to wade through.

How do we do it?

formail -s procmail recipe.rc < mailfile

There's some fancy stuff out there, but procmail and formail are already installed as part of the archiving solution.

Recipe - Mail To/From mailclient

A user has these requirements for retrieving emails from the archives:

  • time period of 5 days (specified to us)
  • sent to a particular user (specified to us)

We collect messages by day, so this part is simplified. We run the procmail recipe below over each day's email messages.

formail -s procmail /path-to/tofrom.recipe.rc < /path-to/mailfile
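Because the archive is already split by day, running the recipe over the requested date range is just a loop; a sketch (the archive layout is an assumption):

# run the recipe over each daily mail file in the requested range (paths are placeholders)
for day in /path-to/archive/*; do
    formail -s procmail /path-to/tofrom.recipe.rc < "${day}"
done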

File: tofrom.recipe.rc

# Debugging
VERBOSE=off
UMASK=007
SHELL=/bin/sh

# http://www.iki.fi/era/procmail/mini-faq.html#from
CMDFROM="^(From[   ]|(Old-|X-)?(Resent-)?(From|Reply-To|Sender):)(.*\<)?"
CMDTO="^TO_"

## ---
## Customise Me!!
## ---
# - set MAILDIR to where you will do work (note: all relative paths go from here)
MAILDIR=/path-to/store/work
MAILCLIENT="<--emailaddress-here-->"
CMD=${CMDTO}

## ---

LOCKFILE=${MAILDIR}/procmail.lck
LOGFILE=${MAILDIR}/procmail.log
TMPDIR=/tmp/$USER

MAILBOX=${MAILDIR}/${MAILCLIENT}
DEFAULT=/dev/null

# --- Check a few paths first
:0
* ? test -d ${MAILDIR}
* ? test -d ${TMPDIR} || mkdir ${TMPDIR}
{ }

:0E
{
    # Bail out if any of the above fails
    EXITCODE=127
    HOST
}

# Deliver Mail to our file ${MAILBOX}

:0:
* $ ${CMD}${MAILCLIENT}
${MAILBOX}

Customisation areas are 'blocked off' above.

  • MAILDIR - set the path where files are to be written (i.e. results of the process, lock file, log file) e.g. /var/data/mail/recovery
  • MAILCLIENT - recipient (e.g. samt@example.com)
  • CMD - choose either CMDTO or CMDFROM (CMDTO matches messages sent to $MAILCLIENT, CMDFROM matches messages sent from $MAILCLIENT). We use ${CMDTO}.
File Permissions

[Ref: Check for Permission Problems]

One aspect of procmail is important to remember (or at least, forgetting it wasted too much of my time until I re-discovered it).

Recipe file permissions:

  • Use a path/file permission of 0640, and make sure the running user 'owns' the recipe file

Mutt

[Ref: tags, Mutt, port]

We now have a mailbox file at /var/data/mail/recovery/samt@example.com, but our users are Windows/Outlook users.

I can't readily give them the files (which they would accept as individual EML files) because I don't have free tools for that conversion, so we use another tool: mutt.

Using mutt:
  • T to tag messages matching a pattern
  • the pattern ~A to tag all messages
  • ";" then b (bounce) to send all tagged messages to the appropriate person
Stepping through

Launch "mutt" with the "-f" option to open the mail message file

mutt -f /var/data/mail/recovery/samt@example.com
q:Quit  d:Del  u:Undel  s:Save  m:Mail  r:Reply  g:Group  ?:Help
1  N   Month Day From-address ( size) Subject line
2  N   Month Day From-address ( size) Subject line
3  N   Month Day From-address ( size) Subject line
...
...
---Mutt: samt@example.com [Msgs:xyz Old:xyz xyzM]---(date/date)---

From the email index list, the command sequence looks like this:

  • T
...
---Mutt: samt@example.com [Msgs:xyz Old:xyz xyzM]---(date/date)---
Tag message matching:
  • ~A

The Mail Index should show an "*" asterisk beside all messages

q:Quit  d:Del  u:Undel  s:Save  m:Mail  r:Reply  g:Group  ?:Help
1  N * Month Day From-address ( size) Subject line
2  N * Month Day From-address ( size) Subject line
3  N * Month Day From-address ( size) Subject line
...
...
---Mutt: samt@example.com [Msgs:xyz Old:xyz Tag:xyz xyzM]---(date/date)---

and the status bar should include the number of messages tagged:

To bounce a message, we would use 'b', but we want to bounce all tagged messages, and therefore precede 'b' with the semi-colon ';'

  • ;
...
...
---Mutt: samt@example.com [Msgs:xyz Old:xyz Tag:xyz xyzM]---(date/date)---
tag-
  • b
...
...
---Mutt: samt@example.com [Msgs:xyz Old:xyz Tag:xyz xyzM]---(date/date)---
Bounce tagged messages to:

and we get a prompt asking where we wish to bounce the messages; enter the destination address

  • samt@example.net

and we get a confirmation prompt

Bounce messages to samt@example.net? ([yes]/no):
  • yes
...
...
---Mutt: samt@example.com [Msgs:xyz Old:xyz Tag:xyz xyzM]---(date/date)---
Messages bounced.

Log Management can be tough

For the umpteenth time one of our squid boxes went down due to the logs filling up all available disk space. As we have 3 sites, plus a special client access network, we have FOUR squid boxes that have been problematic for a long time.

The first major problem we had (the cache just seemed way too slow) was identified after trawling the logs: the warning had always been there, and we needed to fix up the number of available file descriptors.

grep WARNING /var/squid/logs/cache.log
YYYY/MM/DD HH:MM:SS| WARNING: Your cache is running out of filedescriptors
In this particular case, we had a lot of error messages with the same text: "Your cache is running out of filedescriptors". How many file descriptors does my caching daemon have? Using your shell, you can determine the number of file descriptors with something like the below:

  • change the default shell to something that will give you a shell (the default shell for the _squid account is /sbin/nologin)
  • use ulimit to display the process limits

This will show you what your login class settings give to your squid daemon.
# usermod -s /bin/ksh _squid
# su _squid
$ ulimit -a
time(cpu-seconds)     unlimited
file(blocks)          unlimited
coredump(blocks)      unlimited
data(kbytes)          2097152
stack(kbytes)         8192
lockedmem(kbytes)     296673
memory(kbytes)        887008
nofiles(descriptors)  128
processes             1310
$ exit
# usermod -s /sbin/nologin _squid
More simply:
$ ulimit -n
128
As in the earlier example, you can also use *squidclient* to view what the running *squid* process sees as available file descriptors. This view will also let you watch over time as your file descriptors deplete etc.
# squidclient mgr:info | grep -i file
File descriptor usage for squid:
       Maximum number of file descriptors:
       Largest file desc currently in use:
       Number of file desc currently in use:
       Files queued for open:
       Available number of file descriptors:
       Reserved number of file descriptors:
       Store Disk files open:
The solution? Increase the number of file descriptors available to your daemon; a clean way of doing this is via the login class, as described here.
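As an illustration of the login class approach (a sketch only; the class name and limit values are assumptions, and your squid package may already provide a class), /etc/login.conf could gain an entry like this, with the _squid account then assigned to it:

File extract: /etc/login.conf (sketch)

squid:\
	:openfiles-cur=1024:\
	:openfiles-max=4096:\
	:tc=daemon:

Rebuild the capability database if you use one (cap_mkdb /etc/login.conf), assign the class with usermod -L squid _squid, restart squid, then re-check with ulimit or squidclient as above.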

But where is the green sheep? What is there to fix our persistent problem of overgrown logs?

Insanity

I just love that saying oft attributed to Einstein defining Insanity.

Doing the same thing over and over, expecting a different result.

It's crazy that part of our solution was to just delete the old logs and restart the squid service.

Method replacing the madness

The problem we have with our Squid Caching Proxies, which is quite different from all our other services, is that we just don't have much disk space (strange how that should make one think more.)

We can't just replicate our log management strategy used on other boxes. The Squid access logs really do grow quickly on these boxes. It isn't good enough to collect a year's worth of logs, rotating them at the end of each month.

Squid's default log rotation process just 'moves' the file, without compression. So last month's 8GB log file is 8GB permanently lost on the HDD. We need to compress those old log files (do we really need them?)

Looking at our log analysis ... we don't need to keep that length of archiving on disk.

Moving on?

[Ref: newsyslog(8)]

There are various ways of using newsyslog(8) to administer your squid logs, one of which is below.

File extract: /etc/newsyslog.conf

#log file name              owner:group  mode count size when  flags options
/var/squid/logs/access.log _squid:_squid 640   12    *   $M1D0 ZB    /var/squid/logs/squid.pid SIGUSR1
/var/squid/logs/cache.log  _squid:_squid 640   12    *   $M1D0 ZB    /var/squid/logs/squid.pid SIGUSR1
/var/squid/logs/store.log  _squid:_squid 640   12    *   $M1D0 ZB    /var/squid/logs/squid.pid SIGUSR1

Confirm the location of the log files and squid.pid (the above works for some of my hosts.)

The above configuration keeps 12 previous logs (count) of unlimited size (*) and runs the archive/compress on the 1st day of the month ($M1D0). This strategy works for our servers with more disk space, but is going to be problematic for those of us with smaller disks.

Note: squid 3.1 and later(?) changed the default so that cache_store_log is off (not created).

flags

The Z flag compresses the rotated log, and the B flag treats the log as a binary file, so newsyslog doesn't insert its own "log turned over" message into it.

when

Your strategy should consider the tools you use for analysing the log files. If you can get your logs off the server on a daily basis, then you can probably go for a more frequent 'when', such as $D0 (daily at midnight, the same as @T00).

Examples from the manpage newsyslog(8) include:

  • $D23 every day at 23:00
  • $W0D01 weekly on Sunday at 01:00
Options

PID and SIGUSR1

Newsyslog will send SIGUSR1 to squid, which instructs squid to close and reopen its log files.

squid.conf - logfile_rotate

With the above configuration, we need to tell squid that newsyslog is handling the rotation, by setting logfile_rotate to 0.

File extract: /etc/squid/squid.conf

logfile_rotate 0

I've been struggling with getting nginx and php to be friendly to each other in OpenBSD.

I'd read all the wonderful accolades for nginx and thought it was time to test the waters when OpenBSD embraced the web server by incorporating it into the base build.

I successfully deployed nginx serving HTML and as a reverse proxy, and now I really need to get PHP hosting and SSL hosting to work. All the documentation out there says it is sooo simple, so why haven't I been able to do it in the first three tries?

  • pkg_add nginx
  • pkg_add php-fpm

Ingredient #1

As always, the key thing to (L)earning something new is not to bite off more than you can chew.

So, if you're going to try something new, take a list of the minimal ingredients and work only with those ingredients.

Like the fool I am, I tried the new install using a mix of the base install of nginx with the package install.

  • nginx is in the base install of OpenBSD 5.2
  • unless you really know what you're doing, just use that.

Ingredient #2

Get the basics working.

Keep that simplified environment, above, sane.

Don't be like me, where I'm mixing things randomly.

Example:

  • install the nginx web server package
  • install the php-fpm package for PHP FastCGI
  • /etc/rc.d/nginx start (base install) works (typo)
  • change configurations
  • /etc/rc.d/enginx restart (ports install): everything HTML works, PHP breaks
  • confirm /etc/php-fpm.conf has chroot enabled
  • /etc/rc.d/enginx restart: everything breaks
  • read the documentation; oops, the nginx package is not yet chrooted
  • edit /etc/php-fpm.conf to disable chroot
  • /etc/rc.d/enginx restart: everything breaks

When I finally realised:

  • the php-fpm sample configuration is chrooted
  • the nginx base install is chrooted
  • the nginx package install is not chrooted

My randomly behaving system isn't so random after all; it just took a long time to realise this.

The php-fpm sample configuration file is chrooted (just like the chrooted Apache 1.3 in the default base install).
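To make the pairing concrete, here is a minimal sketch (not our production configuration; the listen address, document root and the assumption that neither daemon is chrooted are all mine) of the nginx side talking to php-fpm:

File extract: /etc/nginx/nginx.conf (sketch)

server {
    listen       80;
    server_name  www.example.com;
    root         /var/www/htdocs;
    index        index.php index.html;

    location ~ \.php$ {
        # php-fpm listening on TCP; with neither daemon chrooted, both see the
        # same filesystem paths, so SCRIPT_FILENAME can use the full document root
        fastcgi_pass    127.0.0.1:9000;
        fastcgi_param   SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include         fastcgi_params;
    }
}

If either side is chrooted and the other isn't, the paths each daemon sees differ, which is exactly the 'random' breakage described above.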

Ingredient #3

Don't get ahead of myself. Why am I installing the nginx package?

  • There's mention that there are some features supported in ports that aren't available in the base install
  • There's a new version on the web, and I may need some of those features?
  • My install is going live and will need to add features along the way, do I need those non-base install features?

OpenBSD systems are relatively easy to rebuild, reinstall, just build the basic, feature complete system you require and nothing more.

Trawling the mail archives

A cheap archive is only as good as getting back information from that archive.

We built a mail archiving solution using a spare VM box, disk space, OpenBSD, Postfix, and Procmail. But it isn't that useful if all you're going to do is put it to tape and tell everyone you have the archive.

How do you actually make use of (trawl) the archives and retrieve information when users have a bad mail day and need mail that you have hidden away on that tape?

The basics of our configuration: we have a separate machine (the archiving box).

Postfix receives mail

Postfix as the Mail Transport Agent (MTA) is configured to:

  • Forward all messages to their destination
  • Accept all mail from mail server(s)
  • BCC-deliver a copy of each message to a local account (see the sketch after this list)
  • Forward all mail to the next destination: oblivion
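The BCC copy is the interesting part. A sketch of the simplest way to do it (the archive account name is an assumption; recipient_bcc_maps is the finer-grained alternative):

File extract: /etc/postfix/main.cf (sketch)

# every message passing through gets an extra copy delivered to the local archive account
always_bcc = mailarchive@localhost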

Procmail for archiving

For our local mail delivery, the local user account forwards processing to procmail.

Procmail stores the messages in a predefined folder/filename structure that meets our business archiving needs (e.g. year/month/day)
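As an illustration only (not our exact recipe; the archive root and mbox-per-day layout are assumptions), a date-based delivery rule can be as simple as:

# archiving .procmailrc (sketch; paths are assumptions)
MAILDIR=/var/data/mail/archive
DAYDIR=`date +%Y/%m/%d`

# make sure this month's directory exists before delivering
:0
* ? test -d ${MAILDIR}/`date +%Y/%m` || mkdir -p ${MAILDIR}/`date +%Y/%m`
{ }

# append the message to today's mbox file, with locking
:0:
${DAYDIR}

Whether you split by mbox-per-day or folder-per-day is a local choice; the point is that a predictable path makes the later trawling step trivial.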

Trawling the archives

And we finally get to the strategies for making use of those archives.

I've been vacillating on a search engine installation, from originally drooling over htdig (and various failed install attempts) to Apache Solr, which is gaining traction and was recently documented for FreeBSD by BSDMag.

I still need to use a search engine at some point, because the archives are growing to 30GB+ a month, but in the meantime I got a job to extract mail for userX over 5 days.

I've documented how I did that in trawling the archives and it boils down to using procmail and formail (part of the procmail package) to wade through the messages and suck out the messages that met my criteria (to userX).

We already had the messages separated by date, so it was just a matter of feeding those days of messages into my procmail recipe and getting the mail that our user wanted.

Once the recipes are built, the whole process is relatively fast and pain free. We even have a work-around for getting that archive mail to our Outlook friends.

Our recipe is rather simple, but it highlights the flexibility you have to trawl the archives with your own dig(solr)ing into the procmail recipe book.

Michael W. Lucas' book: SSH Mastery: OpenSSH, PuTTY, Tunnels and Keys.

Good enough that I avoided buying the book, even when it was released with funding supporting my favourite Open Source project (OpenBSD, home of OpenSSH). Good enough that after receiving a blogger review copy, the first thing I did was hit the corporate buy button to order a legitimate print/e-book copy for my cohort of fellow sysadmins and users. Why?

I was under some insane self-delusion that I didn't want to be bound by the book's research, so that I could ethically 'document' my own stumbles into SSH to share freely with others. Fortunately, a short look at the book's contents shows the better solution for users and System Administrators is to just buy this book.

What value is there in this book?

  • The Guru in the room
  • Saving Money
  • Augmented Reality (extending your infrastructure)
  • Saving Time

The Guru in the room

We don't know what we don't know.

The fastest path of learning I've enjoyed has been as the new kid amongst 'zen masters' who danced on their keyboards making magic happen across our network(s). Unfortunately the real masters moved on and we graduate a little higher up the ladder until we've reached the peak of our incompetence.

The book is a good reference source, with fine examples for many features, and, like the zen masters, some of the answers are in the 'debug' sections: how to determine whether what you think you should get is how SSH is actually seeing it.

Online articles are often short, make assumptions about how OpenSSH/PuTTY works, and 'script' a lot of commands that require version X.Y of this and M.N of that. Rarely are there supporting notes on how to diagnose the instructions or the related system's responses.

SSH Mastery explores, explains, and provides samples and debugging techniques so we can explore, understand, and type in the SSH commands to see all those features at work. Not the guru in the room, but the next best thing: someone knowledgeable to go to.

Saving me money?

  • Chapter 3: The OpenSSH Server
    • Testing and Debugging

A technical configuration chapter so early in the book? After the general introduction to the topic (data encryption), it seemed odd to dive straight into configuring the server.

I was hoping for magical command-line tricks, but it is easy to understate how critical it is to configure your server correctly, and how to validate that the server is working correctly: debug.

4 years ago I was locking down a machine in the USA (from Australia.) I'd spent a month configuring some complicated Mail Processing system on that box, and was almost ready for the 'live' output. The only thing left to do was formalise the lock down of the machine.

Two minutes later, I'd locked myself out with a typo in my ssh server configuration. After ripping my hair out, I found the answer (documented in Chapter 3) and published it online and at serverfault.com.

That lost server, lost time, lost configuration was throwing money out the door.

Augmented Reality (a flexible and secure infrastructure)

SSH tunnels have many uses, but I have always found the ssh manpage difficult to follow. SSH tunnels let us augment and extend our existing network/infrastructure in ways the physical configuration would not allow.

  • Chapter 9: Port Forwarding
    • Services on localhost
    • The web from somewhere else

We tunnel extensively at work: we run services on Unix hosts but lock those services down for access only from localhost, so a legitimate user account with SSH keys is required to tunnel onto the machine and use port forwarding to, for example, download e-mail (which contains a lot of diagnostic information and system reports) onto our monitoring host.

Automation scripts and .fetchmailrc configuration files get forgotten; we're always falling back to documentation when it's time for upgrades and changes on our network.
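A sketch of one of those tunnels (host names and ports are placeholders, not our configuration): forward a local port to an IMAP service that only listens on the remote host's loopback, then let fetchmail poll the local end.

# forward local port 1143 to 127.0.0.1:143 on the remote host
ssh -N -L 1143:127.0.0.1:143 monitor@mailhost.example.com

# fetchmail (or any mail client) then talks to localhost:1143 as if it were the remote IMAP server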

As mentioned, tunnels tend to be hard to understand (and the command-line ordering can still confuse those who've been using it for a while). SSH Mastery is a good introduction, with good examples, and a good companion.

I was in Tonga over the Christmas break when I needed to do some funds transfers on some accounts in Australia, but internet security awareness means no transfers are allowed from an IP address in Tonga.

Thanks to OpenSSH, PuTTY and socket routing, Christmas ended well.
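The trick, for the curious, is dynamic port forwarding, which turns the SSH client into a local SOCKS proxy (the host name below is a placeholder; PuTTY does the same with a 'Dynamic' forwarded port):

# open a SOCKS proxy on localhost:1080, tunnelled through a machine back home
ssh -D 1080 user@gateway.example.com.au

# point the browser's SOCKS5 proxy at localhost:1080 and traffic appears
# to originate from the far end of the tunnel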

Saving time.

Is SSH Mastery comprehensive? Not nearly, which is good: there's still a lot out there waiting for your articles. It does, however, cover a lot of things that I haven't been considering and need to, within my day job and home network.

The Guru in the book definitely covers a lot of things that I now use daily, because others better than myself were "doing it" and quickly led me in the right direction.

  • Chapter 4: Verifying Server Keys
  • Chapter 5: SSH Clients
  • Chapter 6: Copying Files over SSH
  • Chapter 7: SSH Keys
  • Chapter 8: X11 Forwarding
  • Chapter 9: Port Forwarding
  • Chapter 10: Keeping SSH connections Open
  • Chapter 11: Host Key Distribution
  • Chapter 12: Limiting SSH
  • Chapter 13: SSH Virtual Private Networks

Some of the information seems so basic now, after years of stumbling through it, but the detail and exploration help to clarify my own understanding. Some areas I don't use but should know, and now I have a reference that tells me some of what I need to attend to.

Summary

Even if you have someone with patience and wizard knowledge to help you with this fundamental tool, I'm finding this book useful. It is a great investment for end-users, system administrators, and developers alike.

Refer to other reviews on the web for the utility of this title.

One of our clients was having serious problems with installing and getting Microsoft Lync to work. The previous Support organisation spent a couple of months on the problem and gave up, but the user never gave up.

When we took on the contract to provide support, our support technicians could get the accounts to work outside the customer's environment, and only intermittently at the customer's site.

  • Web Proxying
  • Firewalls
  • Solution

We worked through every possible iteration of installing, uninstalling and configuring the software, using separate desktops and different versions of Windows.

Web Proxying

Suggestions were made that the problem was with the web proxy deployed at the customer site, but we had success at other sites with proxies, and the software was failing login even when bypassing the proxy.

Again, the problem only occurred at the client site.

Firewalls

Suggestions abound in a corporate environment that, whenever an Internet service fails, it's something to do with the firewalls.

To validate the user assumptions, special rules were inserted to allow the clients full, unfettered access to the Internet.

Still no success. Again, other sites have both proxies and firewalls restricting Internet access (except via the proxies), and they work fine.

Solution

Where to go after the basics clearly show something is broken, but not what?

A packet trace showed the connection query from the client going out, and then no further connection attempts.

It turns out the DNS A record for the service hosting Microsoft Lync (a hosted service for the client) didn't exist, so the client stopped processing because it couldn't find a place to connect to (our guesstimate at this time).
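A quick way to confirm that sort of gap (the host and resolver names below are placeholders, not the client's actual records) is to compare what different resolvers answer for the service's A record:

# ask our internal resolver, then ask the provider's DNS directly
dig +short lync.hosted-provider.example.com
dig +short lync.hosted-provider.example.com @ns1.provider.example.net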

Manually insert a DNS record for the external site on our own DNS server, and magically the Lync client connects.

Wow, it's amazing how many things depend on DNS, and how even large companies with large budgets can screw it up. The sad irony is that the client is a reseller for the big ISP that hosts the Microsoft Lync services, and it was that ISP's DNS server that was screwed. The ISP's DNS records for their resellers are apparently not responding with the same records as for external users.

Remember your network kung-fu.

It bothered me enough that I needed to record it, and hopefully the path to a solution that others can follow.

(delivery temporarily suspended: Server certificate not verified)

Lesson: Document things properly, especially if it's something interesting, more so if the technology/thing you're doing is normally not what you do, and it's already taken you a long while to get it working properly in the first place.

Mind you, the above may be a difficult task when you're rushed to get a system out and the only way to confirm the installation is to break it apart and start from scratch.

Scenario:

We exchange e-mail with an external organisation (duh!!) with regulatory standards that require us to ensure e-mail sent to them is encrypted. We achieve this through the following:

  1. Certify that the server we're connecting to is theirs, using:
    • SSL certificates
    • smtp_tls_policy_maps and
    • fingerprinting
  2. Encrypt the traffic between the two sites using TLS

So, we follow the online Postfix TLS Support and smtpd_tls_fingerprint documentation and have it up and running with the basic configuration:

File extract: /etc/postfix/main.cf

smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

File extract: /etc/postfix/tls_policy

example.com    fingerprint
    match=fingerprint-digest-is-here

Problem:

The external organisation used a one-year self-signed certificate; it expired (as most eventually do) and now no messages go through to them. We get the below "cryptic" message in our logs:

(delivery temporarily suspended: Server certificate not verified)

Answer:

Seems easy enough; we just need to re-do/fix our first step above, certifying the connection.

  1. Get updated certificate from remote site
  2. Update the fingerprint

Load up the online documentation and follow it through.

Oooops, it doesn't work.

The logs laugh: /var/log/maillog

(delivery temporarily suspended: Server certificate not verified)
  1. The message is not sent (deferred), with the error message "Server Certificate not verified".
  2. The message will never be sent, since the server certificate is never successfully verified.
  3. Bypass certification and send the e-mail: the short-term configuration is to not require the fingerprint to be 'certified' (a sketch of such a temporary policy follows).
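For illustration, the temporary bypass can be as small as dropping the policy level for that destination from fingerprint to encrypt, which still enforces TLS but skips certificate verification (the domain below is a placeholder):

File extract: /etc/postfix/tls_policy (temporary sketch)

# encrypt = mandatory TLS, no certificate verification; revert to fingerprint once the new digest is in place
example.com    encrypt

Remember to postmap the file and reload Postfix afterwards, as shown further below.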

I'm sure I followed the steps correctly ... (wrong)

Solution:

Walk away from the documentation for a while, walk through it again with the presumption that you've screwed everything up so you need to take all your knowledge and check the basics (verify assumptions) as you go along.

  • digest format
  • fingerprint
  • policy file
Digest Format

[smtp_tls_fingerprint_digest]

Verification of an SMTP server certificate fingerprint uses a message digest.

Don't get trapped putting together fingerprints that are invalid, or unnecessary. Find out which fingerprint digest is supported by your configuration, and use that.

postconf | grep fingerprint
lmtp_tls_fingerprint_digest = md5
smtp_tls_fingerprint_digest = md5
smtpd_tls_fingerprint_digest = md5

The above output shows we're using the MD5 digest format. It should be fine, but read the documentation about which digest may be the better choice for you.

Fingerprint

[Ref: openssl x509]

After acquiring the remote SSL certificate through some 'trusted' method, generate the fingerprint for that 'trusted' certificate as follows:

openssl x509 -noout -fingerprint -md5 -in /etc/ssl/certs/example.pem
MD5 Fingerprint=fingerprint-digest-is-here

After comparing the above fingerprint-digest-is-here with what I have in the tls_policy file, it is obvious they don't look anything alike.

Policy File

With the above fingerprint, and digest we can fix the TLS Policy table such as the below:

example.com    fingerprint
    match=fingerprint-digest-is-here

Remap the file to make sure the correct hashed version is active:

# postmap /etc/postfix/tls_policy

Reload the server and things are cool.

postfix reload

But isn't that what the Postfix documentation says you have to do?

I guess it does, but for some reason the steps I took those days weren't the correct steps. And now that I've rehashed the already hashed, I hopefully will not mis-read the documentation the next time through.

TLS and Postfix

I've been upgrading some of our mail servers to support TLS (Transport Layer Security) in Postfix and, apart from learning how to do it, also learned a key maxim of programmers (readily applicable to system administrators):

DO NOT PRE-OPTIMISE

I wasted two days of my life, with increased anxiety during the install and configuration process, because I was trying to be too smart too early.

After a Duhhh moment, I went back to the very beginning of the install process and did everything as per the known guides (without that little tweak I had preconceived), and the install worked in less than an hour.

My failure? I got too far ahead of myself, with bright, untested ideas of how I wanted things to work, and started modifying my plans (and solidifying assumptions about how things would work) before collecting evidence that the assumptions for each stage were valid.

My idea was that the TLS roll-out on five different servers (all requiring SSL certificates) could all use one Certificate Authority. I'd made self-signed certificates before, so I presumed/guessed at an approach for one centralised Certificate Authority. Unfortunately, instead of verifying my assumptions of how that could be done, I steam-rolled ahead, ass-uming some minor modifications to the process would just work.

  1. Create Certificate Authority (CA) key
  2. Create Certificate Signing Request (CSR) for the host
  3. Create a Certificate (CRT) from the CSR, signed by my new CA key

The install failed, but gave error messages hinting at problems with the key created in step #2, or the certificate created in step #3. After agonising through different diagnostic processes from the various error messages, it took two whole days to throw away the assumption that caused the error: my change in how I was generating (and using) a Certificate Authority. Arggghhhh!!!

I had been blindly looking at various avenues for why Step #2 or Step #3 were not working correctly, including trying stupid hints from random websites.

The error that Postfix was throwing up said that:

File extract: /var/log/maillog

warning: cannot get RSA private key from file /etc/ssl/private/server.key.pem:disabling TLS support
warning: TLS library problem: xxxxxx certificates routine xxxx key values mismatch xxxxx src/crypto/x509/x509_cmp.c:318:
  1. Can't read the Key
  2. There is no match between the key and certificate

OK, the key file is there, I can see it in the file system. I can open it up with openssl and verify that it is a valid key file by using:

sudo openssl rsa -noout -text -in /path-to/private/server.key.pem

I could even validate that the signed certificate is a valid certificate, likewise the Certificate Authority certificate (so far as our current understanding tells us.)

sudo openssl x509 -noout -text -in /path-to/server.crt.pem
sudo openssl x509 -noout -text -in /path-to/private/ca.crt.pem

I blissfully ignored the 2nd error message until I could resolve why my Postfix server was complaining about the server key. The assumption: it's probably an 'artifact', an error caused by the previous error (can't open the key). We found all sorts of "solutions" on the web which may work on other OSes but are irrelevant for our OpenBSD install (most relate to using 'openssl rsa -in server.key.pem -out server.key.rsa.pem' to make sure the key file is not password protected).

It was well into the third day before I found references to verifying that a certificate is created from a key.

$ sudo openssl rsa -noout -text -in /path-to/private/server.key.pem -modulus \
    | grep ^Modulus | openssl md5
$ sudo openssl x509 -noout -text -in /path-to/server.crt.pem -modulus \
    | grep ^Modulus | openssl md5

The use of "| openssl md5" just simplifies the comparison of the Modulus values, which are supposed to be the same if they are paired (i.e. the certificate was generated from the key). There's also the requirement that both "public exponent" values are equal, but the above Modulus comparison is a quick verification.

OK, I ran the above command lines on my self-signed certificate and server key. The Modulus values DO NOT MATCH.

What?? That doesn't make sense?

I wandered through comparisons of all the key and certificate pairs, to find that the Modulus for my designated CA key matches the self-signed certificate.

What?? That doesn't make sense?

Obviously (duhh) there must be something wrong with my signing process. We trace back our implementation steps and re-do, re-test.

  • Step #3. No that didn't work. No, don't repeat it again. Go back to
  • Step #2 then #3. No that didn't work. No, don't repeat it again. Go back to
  • Step #1 then #2, then #3. No that didn't work.
OK, something is seriously wrong!!!

The 2nd error (and a quick perusal of the source code) definitely indicates that the key file is not related to the certificate. Our Modulus investigations above show that the key/certificate pairs are not created correctly. Could my CA ideas be the cause of my install failures?

Throw that assumption away and create certificates how you've always done it.

  • Step #2 Sign the CSR using the Server Key.

Normal self-signed instructions always use the same key for the CA as well as the Server.
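For reference, the usual self-signed sequence looks something like this sketch (file names and lifetime are placeholders); the certificate is signed with the same key that made the CSR, which is why its modulus matches the server key:

# generate the server key, a CSR for it, then self-sign that CSR with the same key
openssl genrsa -out server.key.pem 2048
openssl req -new -key server.key.pem -out server.csr.pem
openssl x509 -req -days 365 -in server.csr.pem -signkey server.key.pem -out server.crt.pem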

5 minutes later, we have Postfix TLS working as expected, and our documentation is complete. Postfix TLS without dovecot, without cyrus-sasl, woohoo, too easy.

Now to verify that TLS actually encrypts?

As networks continue to grow, sometimes against our wishes, sometimes with our full support, it becomes more important to get some overview of how and what is moving across your network(s).

In the beginning, in a land far away, we only had a few machines wired up and life was simple.

Now, most of us have too many machines with an unknown quantity of malware pounding on them (and subsequently on your network.) That's before we even get to our beloved users.

If you get blamed when things go bad on your network, it's time you started taking charge of knowing what's going across your network. Michael W. Lucas published an insightful book to help us with that: Network Flow Analysis. More importantly for us, he chose to describe the solution using tools accessible to everyone (aka Open Source). We've finally cleaned up some internal notes for getting the software to work well on our favourite os (tm), OpenBSD.

These notes augment the installation instructions from that book. Where the human factor is important (customisation/localisation, interpretation), we don't do any of that here.

Buy the book.

Now that you're back, follow through to find out how we put it together: Netflow with flow-tools.

It's saved our bacon a number of times: we know whose packets are causing congestion, what times congestion occurs, and why. AND we can print out those meaningless charts that senior dweebs nod their heads at and just love.

Michael W. Lucas has some war stories where traffic flow monitoring has helped him out, and we can attest to its daily, weekly value.

Our notes on Netflow with flow-tools