RISC OS on the Raspberry Pi

Last weekend, I was finally able to dedicate some free time to playing a little bit with the Raspberry Pi again, so I decided to plug it into my TV and try RISC OS Open using the prebuilt RISC OS Pi (RC14) SD card image.

In fact, I had already had a brief encounter with RISC OS running on Acorn hardware (most likely a Risc PC) a while ago, at a French demoparty in the late nineties. I’m not sure how popular those machines were in the UK, but in France, it was as exotic as it could get.

RISC OS in full glory on a 42-inch Panasonic TV in 1080p

Here is a screenshot showing the desktop running a few applications : BBC Basic, the StrongEd text editor, and the NetSurf Web browser pointed at my ASCii and ANSi Gallery :

RISC OS running BBC Basic, StrongEd and NetSurf

This capture was taken using Snapper and converted from sprite to PNG using ConvImgs.

The case for Nginx in front of application servers

As a rule of thumb, an application server should never face the Internet directly, unless of course Nginx (or OpenResty) is itself being used as the application server. This is not only for performance reasons, although performance is not much of a concern anymore with modern runtimes such as Go or Node, but mostly for flexibility and security reasons.

Here are some key points to consider :

  • At this point, Nginx is a proven and battle-tested HTTP server
  • This allows keeping the application as simple as possible : Nginx will handle logging, compression, SSL, and so on
  • In case the application server goes down, Nginx will still serve a 50x error page, so visitors know that something is wrong
  • Nginx has built-in load-balancing features, and it also allows running several application servers behind the same IP address
  • Nginx has built-in caching features (with on-disk persistence)
  • Nginx has rich rate-limiting features, which are especially useful for APIs
  • Nginx helps protect against some DoS attacks (such as low-bandwidth application-layer attacks)
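To illustrate, here is a minimal, hypothetical Nginx configuration fragment putting some of these points into practice (the upstream addresses, server name, paths and rate-limit values are all assumptions, not recommendations) :

```nginx
# Hypothetical application servers on the loopback interface,
# load-balanced behind a single public IP address
upstream app {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

# Rate limiting : 10 requests per second per client IP (assumed values)
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    listen 80;
    server_name example.com;

    # Nginx handles logging and compression for the application
    access_log /var/log/nginx/app.access.log;
    gzip on;

    location / {
        limit_req zone=api burst=20;
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Custom page served when the application servers are down
    error_page 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```

The limit_req_zone directive tracks clients by IP address, while the upstream block shows how several application server instances can be served behind the same address and port.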

Lastly, one aspect which tends to be forgotten these days is the importance of server logs. While in some cases it might be an acceptable solution to use Google Analytics or Piwik, for measuring API traffic there is no better option than server logs. For a modern real-time log analyzer, I heartily recommend GoAccess.
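For instance, GoAccess can analyze an Nginx access log in real time, directly from a terminal (the log path and format below are assumptions depending on the setup) :

```shell
goaccess /var/log/nginx/access.log --log-format=COMBINED
```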

Benchmarking HTTP servers


ApacheBench (ab) is a tool bundled with the Apache HTTP server which can be used to benchmark any kind of HTTP server.

To benchmark localhost (100000 requests with 100 concurrent connections) :

ab -c100 -n100000 http://localhost/

Sample output :

Server Software:        nginx/1.6.2
Server Hostname:
Server Port:            80

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      100
Time taken for tests:   6.965 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      84400000 bytes
HTML transferred:       61200000 bytes
Requests per second:    14357.89 [#/sec] (mean)
Time per request:       6.965 [ms] (mean)
Time per request:       0.070 [ms] (mean, across all concurrent requests)
Transfer rate:          11834.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       3
Processing:     2    7   1.0      7      13
Waiting:        0    4   1.9      4      12
Total:          2    7   1.0      7      13

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      7
  75%      8
  80%      8
  90%      8
  95%      8
  98%      9
  99%     11
 100%     13 (longest request)

Beware though : ab is single-threaded, and this can be a bottleneck when benchmarking high-performance HTTP servers, especially when testing on localhost.


A better alternative is siege, which is multi-threaded and has some interesting features such as the ability to use multiple different URLs during tests.

To benchmark localhost (10000 requests with 100 concurrent connections) :

siege -b -c100 -r100 http://localhost/

The -b option allows running throughput benchmarking tests without delay between simulated users.

Sample output :

** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege...

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		        1.02 secs
Data transferred:	        5.84 MB
Response time:		        0.01 secs
Transaction rate:	     9770.99 trans/sec
Throughput:		        5.70 MB/sec
Concurrency:		       52.74
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.02
Shortest transaction:	        0.00


Wrk is a promising HTTP benchmarking tool with a modern architecture, which is also scriptable with Lua.

To benchmark localhost (for 10 seconds with 100 concurrent connections) :

wrk -c100 -d10s http://localhost/

Sample output :

Running 10s test @
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.02ms  794.02us  12.57ms   90.71%
    Req/Sec    13.27k     2.30k   30.50k    77.92%
  254557 requests in 10.00s, 206.10MB read
Requests/sec:  25455.48
Transfer/sec:     20.61MB
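As an illustration of the scripting support, here is a small hypothetical Lua script making wrk send POST requests with a JSON body (the payload is an assumption; wrk.method, wrk.body and wrk.headers are part of the wrk Lua API) :

```lua
-- script.lua : switch the benchmark to POST requests with a JSON payload
wrk.method = "POST"
wrk.body   = '{"hello": "world"}'
wrk.headers["Content-Type"] = "application/json"
```

The script can then be passed to wrk using the -s option :

wrk -c100 -d10s -s script.lua http://localhost/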

Using pkgsrc on Mac OS X

Given NetBSD’s focus on portability, it’s only logical that pkgsrc is also available on systems other than NetBSD, including Darwin (Mac OS X). Here are some notes showing how to bootstrap pkgsrc in unprivileged mode, which means that everything can easily be installed in the user’s home directory.

Before starting, we need to install the Xcode Command Line Tools to get a working compiler.

Fetching and extracting the latest pkgsrc stable release

This will create a ~/pkgsrc directory :

wget http://ftp.netbsd.org/pub/pkgsrc/stable/pkgsrc.txz
tar xJf pkgsrc.txz

Bootstrapping pkgsrc

Launching the bootstrap script and setting the ABI to 64-bit :

cd pkgsrc/bootstrap
./bootstrap --abi=64 --compiler=clang --unprivileged

This will create and start populating the ~/pkg directory, where all built packages will be installed.

For a complete list of available options :

./bootstrap -h
===> bootstrap command: ./bootstrap -h
===> bootstrap started: Sat Sep 27 21:59:08 CEST 2014
Usage: ./bootstrap
    [ --abi [32|64] ]
    [ --binary-kit <tarball> ]
    [ --binary-macpkg <pkg> ]
    [ --compiler <compiler> ]
    [ --full ]
    [ --gzip-binary-kit <tarball> ]
    [ --help ]
    [ --mk-fragment <mk.conf> ]
    [ --pkgdbdir <pkgdbdir> ]
    [ --pkginfodir <pkginfodir> ]
    [ --pkgmandir <pkgmandir> ]
    [ --prefer-pkgsrc <list|yes|no> ]
    [ --prefix <prefix> ]
    [ --preserve-path ]
    [ --quiet ]
    [ --sysconfdir <sysconfdir> ]
    [ --unprivileged | --ignore-user-check ]
    [ --varbase <varbase> ]
    [ --workdir <workdir> ]

Adding the ~/pkg binaries to the path :

export PATH=$PATH:~/pkg/bin:~/pkg/sbin

Fetching security vulnerabilities information :

pkg_admin fetch-pkg-vulnerabilities

Adding some acceptable licenses to our pkgsrc configuration :

echo "ACCEPTABLE_LICENSES+= vim-license" >> ~/pkg/etc/mk.conf

Building packages

Here is how to build a package, then clean its working directory and those of all its dependencies :

cd ~/pkgsrc/category/package
bmake install clean clean-depends

Keeping pkgsrc up-to-date

First, we need to build CVS :

cd ~/pkgsrc/devel/scmcvs
bmake install clean clean-depends

We can then update pkgsrc using the following command :

cd ~/pkgsrc && cvs update -dP

Checking for security vulnerabilities in packages :

pkg_admin audit

Installing CA certificates

cd ~/pkgsrc/security/mozilla-rootcerts
bmake install clean clean-depends
mozilla-rootcerts install

For more details, please read the following post : Installing CA certificates on NetBSD.

Using binary packages

For those who prefer using binary packages, please check the Joyent packages repository and Save OS X.

Final words

After running Fink in 2009 on my Mac mini, and then Homebrew since late 2011 on my MacBook Pro, it’s nice to explore alternatives, especially since they are not mutually exclusive. It’s in fact a nice idea to combine pkgsrc and Homebrew to get the best of both worlds and access to even more packages.

Lastly, for a comprehensive searchable database of packages, please check the excellent pkgsrc.se.

Fingerprinting DNS servers authoritative for the top 1 million domains

As an experiment, I’ve been using fpdns (version 0.10.0 on FreeBSD/amd64) to fingerprint DNS servers authoritative for the top 1 million domains (according to Alexa).

At first, I had planned to use adnshost to resolve the name servers first and then feed the resolved list to fpdns, in order to speed things up and avoid fingerprinting the same host several times. Unfortunately, adnshost doesn’t seem to work that well on large batches, and I experienced numerous timeouts and crashes.

Extracting a list of domains from the CSV file

wget http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
unzip top-1m.csv.zip
cut -d "," -f 2 top-1m.csv > domains.txt

As the fingerprinting process will require resolving name servers for each domain in the list, I will be using a local Unbound instance in order to avoid hitting my ISP name servers too aggressively.

Configuring the system to use Unbound as local resolver

After adding our local resolver to resolv.conf :

echo "nameserver 127.0.0.1" > /etc/resolv.conf

We can verify that we are indeed using our Unbound instance :

dig version.bind CH txt +short
"unbound 1.4.22"

Fingerprinting using fpdns

Here is a list of fpdns options we will be using :

-D         (check all authoritative servers for Domain)
-F nchild  (maximum forked processes) [10]

Starting fpdns with 128 child processes :

fpdns -D -F 128 - < domains.txt > fingerprints.txt

Processing output and aggregating results

First, we aggregate results by IP address in order to avoid counting the same server several times (a name server can be authoritative for several different domains) :

cut -d ',' -f 2 < fingerprints.txt | sort | uniq > results.txt

We then aggregate by software and count occurrences :

awk -F'[)][:] ' '{print $2}' < results.txt | sort | uniq -c

I used awk here instead of cut, as the latter doesn’t allow using more than one character as a delimiter.
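To make the two pipelines above concrete, here is a self-contained example using fabricated fingerprint lines (the exact fpdns output format is an assumption, inferred from the pipelines) :

```shell
# Hypothetical fpdns output lines : two domains share the same name server
printf '%s\n' \
  'fingerprint (example.com, 192.0.2.1): DJ Bernstein TinyDNS 1.05 [Old Rules]' \
  'fingerprint (example.org, 192.0.2.1): DJ Bernstein TinyDNS 1.05 [Old Rules]' \
  'fingerprint (example.net, 192.0.2.2): NLnetLabs NSD 3.1.0 -- 3.2.8 [New Rules]' \
  > fingerprints.txt

# Deduplicate by IP address : one line per name server remains
cut -d ',' -f 2 < fingerprints.txt | sort | uniq > results.txt

# Split on "): " to isolate the software name, then count occurrences
awk -F'[)][:] ' '{print $2}' < results.txt | sort | uniq -c
```

Here, the two lines sharing 192.0.2.1 collapse into one, so each software ends up counted once per name server rather than once per domain.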

Here are the results :

     6 sheerdns  [Old Rules]
     2 3Com Office Connect Remote  [Old Rules]
    57 DJ Bernstein TinyDNS 1.04 [Old Rules]
  5199 DJ Bernstein TinyDNS 1.05 [Old Rules]
    13 Dan Kaminsky nomde DNS tunnel  [Old Rules]
     3 Fasthosts Envisage DNS server  [Old Rules]
     2 Meilof Veeningen Posadis  [Old Rules]
     2 Men & Mice QuickDNS for MacOS Classic  [Old Rules]
     4 Michael Tokarev rbldnsd  [Old Rules]
    29 Microsoft ?  [Old Rules]
   387 Microsoft Windows DNS 2000 [New Rules]
    50 Microsoft Windows DNS 2000 [Old Rules]
    88 Microsoft Windows DNS 2003 R2 [New Rules]
  6373 Microsoft Windows DNS 2003 [New Rules]
    87 Microsoft Windows DNS 2003 [Old Rules]
  1278 Microsoft Windows DNS 2008 R2 [New Rules]
    25 Microsoft Windows DNS 2008 [New Rules]
     2 Microsoft Windows DNS NT4 [Old Rules]
    12 NLnetLabs NSD 1.0 alpha [Old Rules]
 12046 NLnetLabs NSD 3.1.0 -- 3.2.8 [New Rules]
     6 NLnetLabs Unbound 1.4.10 -- 1.4.12 [New Rules]
220751 No match found
    25 Simon Kelley dnsmasq  [Old Rules]
    18 Sourceforge JDNSS  [Old Rules]
     1 TZO Tzolkin DNS  [Old Rules]
  4863 Unlogic Eagle DNS 1.0 -- 1.0.1 [New Rules]
    88 Unlogic Eagle DNS 1.1.1 [New Rules]
    18 ValidStream ValidDNS  [Old Rules]
     1 WinGate Wingate DNS  [Old Rules]
     1 XBILL jnamed (dnsjava)  [Old Rules]
    40 Yutaka Sato DeleGate DNS  [Old Rules]
    13 javaprofessionals javadns/jdns  [Old Rules]

As often with this kind of experiment, the results can’t really be used to produce reliable statistics : apparently, BIND has totally disappeared from the Internet ;)

However, I believe the process is still useful and demonstrates how easy it can be to quickly produce DNS surveys using simple UNIX tools.