WARNING: technical content ahead! There’s also a tonne of config files, which make this page look longer than it really is, but hopefully they’ll help other people who want to do similar work.
Back in 2017 I described how to build a home router based on CentOS 7.
C7 is now out of date, so I figured it was time to rebuild it, this time using Rocky Linux 9.
Previously I described a relatively modern set of TLS settings that would give an A+ score on SSL Labs’ test. This was based purely on an RSA certificate.
There exists another type of certificate, based on Elliptic Curve cryptography. You may see this referenced as ECC or, for websites, ECDSA. An ECDSA certificate is smaller than an RSA cert (e.g. a 256-bit ECDSA cert is roughly equivalent to a 3072-bit RSA one).
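As a minimal sketch (the domain is just a placeholder), this is roughly how you’d generate an ECDSA key and CSR with openssl; P-256 (prime256v1) is the curve with the widest client support:

```
# Generate an ECDSA private key on the P-256 curve
openssl ecparam -genkey -name prime256v1 -out www.example.com.ec.key

# Create a CSR from it to send to your CA
openssl req -new -key www.example.com.ec.key \
    -subj "/CN=www.example.com" -out www.example.com.ec.csr
```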
Back in 2016 I documented how to get an A+ TLS score.
With minor changes this still works.
But times have changed. In particular, older versions of TLS aren’t good; at the very least you should accept nothing less than TLS 1.2.
Consequences of limiting to TLS 1.2 or better
If you set your server to deny anything less than TLS 1.2 then sites like SSL Labs tell us that older clients can no longer connect.
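For reference, a minimal sketch of the Apache mod_ssl directives involved; the cipher list here is illustrative and will vary depending on which clients you still need to support:

```
# Refuse anything older than TLS 1.2
SSLProtocol         all -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder on
SSLCipherSuite      ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
```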
As I was rebuilding my network I came across a problem.
In my basement I had previously run a cable from my core switch around the room to the other side, where I had a small 100baseT switch to handle the equipment on that table. I’d also run another cable across the ceiling to the back of the house, where I had the Powerline network.
Everything seemed to be working fine, and it had been doing so for years.
Three years ago I replaced OpenWRT with a home-grown router.
It’s worked pretty well, but I wanted to take advantage of improvements in networking (5GHz!) and also improve coverage. This kinda became important due to COVID lockdown and Work From Home. My library, where I was working from, had a very weak network signal. I needed to do better.
So I decided to look at turning off the inbuilt WiFi and using an external WAP (Wireless Access Point, or just AP).
A few months back I replaced my OpenWRT router with a CentOS 7 based one. This machine has been working very well, and handles my Gigabit FIOS traffic without any issues.
For many years I’ve been using variations of the Linksys WRT54G. I first switched to this router when freeware ROMs became available; I’ve used DD-WRT, Tomato, OpenWRT and others.
Part of any good backup strategy is to ensure a copy of your backup is stored in a secondary location, so that if there is a major outage (datacenter failure, office burns down, whatever) there is a copy of your data stored elsewhere. After all, what use is a backup if it gets destroyed at the same time as the original?
A large enterprise may do cross-datacenter backups, or stream them to a “bunker”; smaller businesses may physically transfer media to a storage location (in my first job, mumble years ago, the finance director would take the weekly full-backup tapes to her house so we had at most 1 week of data loss).
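For a smaller setup, even pushing the backup to a remote machine over ssh gets you a second location; a minimal sketch, with a hypothetical host and paths:

```
# Mirror last night's backups to an offsite box
rsync -az --delete /srv/backups/ backup@offsite.example.com:/srv/backup-mirror/
```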
Have you tested your backups recently? I’m sure you’ve heard that phrase before. And then thought “Hmm, yeah, I should do that”. If you remember, you’ll stick a tape in the drive, fire up your software, and restore a dozen files to a temporary location. Success! You’ve proven your backups can be recovered.
Or have you?
What would you do if your server was destroyed? Do you require specialist software to recover that backup?
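This is one reason to favour boring formats. If your backups are plain tar archives (the paths below are hypothetical) then any machine can read them, no specialist software required:

```
# Restore to a scratch directory to prove the archive is readable
mkdir -p /tmp/restore-test
tar -xzf /mnt/backup/home-full.tar.gz -C /tmp/restore-test
```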
In previous posts I pointed out why TLS is important, how to configure Apache to score an A+ and how to tune HTTP headers. All this is dependent on getting an SSL cert.
Some jargon explained
Before we delve into a “how to”, some basic jargon should be explained:

SSL/TLS
TLS (“Transport Layer Security”) is the successor to SSL (“Secure Socket Layer”). SSL was created by Netscape in the mid 90s (I remember installing “Netscape Commerce Server” in 1996).
Modern web browsers have a number of settings to help protect your site from attack. You turn them on by use of various header lines in your HTTP responses. Now when I first read about them I thought they were not useful; a basic rule for client-server computing is “don’t trust the client”. An attacker can bypass any rules you try to enforce client side.
But then I read what they do and realised that they primarily help protect the client and, as a consequence, protect your site from being hijacked.
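To give a flavour of what these look like, here’s a minimal sketch using Apache’s mod_headers; the values are illustrative rather than recommendations for every site:

```
# Force HTTPS for a year, including subdomains (HSTS)
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

# Stop other sites framing your pages (clickjacking defence)
Header always set X-Frame-Options "SAMEORIGIN"

# Stop browsers second-guessing content types
Header always set X-Content-Type-Options "nosniff"
```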
(Side note: in this post I’m going to use TLS and SSL interchangeably. To all intents and purposes you can think of TLS as the successor to SSL; most libraries do both).
You can think of security as a stack. Each layer of the stack needs to be secure in order for the whole thing to be secure. Or, alternatively, you can think of it as a chain; the whole thing is only as strong as the weakest link.
In my previous post I wrote about some automation of static and dynamic scanning as part of the software delivery pipeline.
However nothing stays the same; new vulnerabilities are found, configurations break, and stuff previously considered secure turns out to be weak (64-bit ciphers can be broken, for example).
So as well as doing your scans during the development cycle, we also need to do repeated scans of deployed infrastructure; especially if it’s externally facing (but internal-facing services may still be at risk from the tens of thousands of desktops in your organisation).
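In practice that can be as simple as a scheduled job. A hypothetical cron entry using the freely available testssl.sh scanner (host and paths are placeholders):

```
# Re-scan the public web server every Sunday at 03:00 and keep the report
0 3 * * 0  /usr/local/bin/testssl.sh --quiet https://www.example.com > /var/log/tls-scan.log 2>&1
```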
In many organisations an automated scan of an application is done before it’s allowed to “go live”, especially if the app is external facing.
There are typically two types of scan:
Static Scan
Dynamic Scan

Static scan
A static scan is commonly a source code scan. It will analyse code for many common failure modes. If you’re writing C code then it’ll flag on common buffer overflow patterns.
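As a hypothetical illustration (the file and function are made up for this example), a classic unbounded strcpy() is the sort of thing a freely available scanner such as cppcheck may flag:

```
# Write a deliberately unsafe C function, then scan it
cat > overflow.c <<'EOF'
#include <string.h>

void copy(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* no bounds check: classic buffer overflow */
}
EOF

cppcheck --enable=all overflow.c
```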
Building a secure web application has multiple layers to it. In previous posts I’ve spoken about some design concepts relating to building a secure container for your app, and hinted that some of the same concepts could be used for building VMs as well.
You also need to build secure apps. OWASP is a great way to help get started on that. I’m not going to spend much time on this blog talking about application builds beyond some generics because I’m not really a webdev.
A decade or so back, VistaPrint did a “free card” offer as long as you used one of their templates. So I got a bunch of cards printed.
Over the years I’ve probably given out…5 of them? Heh.
VistaPrint no longer seem to do freebies, but I decided to refresh my image.
The cost was $8 for 150 cards or $9 for 250, so I went for 250. And then after checkout they said for $1.
My old site was nicely hand-crafted HTML. Each bit lovingly created. It worked… but it did smell a little ’90s. Which doesn’t surprise me; the last time I did any web development was the 90s!
So I thought I’d try something a little more modern.
Unfortunately most CMS systems (e.g. WordPress, Joomla, Drupal) appear to want to use a database of some form. The content is displayed dynamically based on the user request and the database content.