RDing TEMPer Gold USB thermometer on OpenBSD

A few weeks ago, I ordered a RDing TEMPer Gold USB thermometer from PCsensor, a cute little device which makes it easy to measure room temperature.

As mentioned on the package, a USB cable should be used unless the goal is to measure chassis temperature.

TEMPer USB thermometer

On OpenBSD, the device is fully supported by the ugold(4) driver :

uhidev0 at uhub0 port 4 configuration 1 interface 0 "RDing TEMPerV1.4" rev 2.00/0.01 addr 3
uhidev0: iclass 3/1, 1 report id
ukbd0 at uhidev0 reportid 1: 8 variable keys, 5 key codes
wskbd1 at ukbd0 mux 1
wskbd1: connecting to wsdisplay0
uhidev1 at uhub0 port 4 configuration 1 interface 1 "RDing TEMPerV1.4" rev 2.00/0.01 addr 3
uhidev1: iclass 3/1
ugold0 at uhidev1
ugold0: 1 sensor type ds75/12bit (temperature)

Sensor values can be retrieved via the sysctl interface :

sysctl hw.sensors.ugold0
hw.sensors.ugold0.temp0=26.75 degC (inner)    

Alternatively, the -n switch can be used to display only the field value :

sysctl -n hw.sensors.ugold0
26.75 degC (inner)
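
For continuous monitoring, the reading can be appended to a log file from a small shell loop; here is a minimal sketch (the log path and interval are arbitrary, not part of the original setup) :

#!/bin/sh
# Append a timestamped reading every 5 minutes (path and interval are arbitrary)
while true; do
    echo "$(date +%Y-%m-%dT%H:%M:%S) $(sysctl -n hw.sensors.ugold0.temp0)" >> /var/log/temper.log
    sleep 300
done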

GPU-accelerated video playback with NetBSD on the Raspberry Pi

NetBSD 7 gained support for hardware acceleration on the Raspberry Pi last January, and OMXPlayer was subsequently imported into Pkgsrc. This combination allows seamless video playback directly in the console.

For testing this setup, I used Jun Ebihara’s prebuilt NetBSD RPi image and packages.

Installing OMXPlayer using binary packages :

pkg_add omxplayer

Playing a video after blanking the screen :

omxplayer -b captain-comic.avi

This works unsurprisingly well and the player is quite pleasant to use.
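
To play several videos back to back, omxplayer can be wrapped in a small shell loop; a minimal sketch, assuming the files sit in the current directory :

# Play all .avi files in the current directory one after another,
# blanking the screen for each (directory and pattern are arbitrary)
for f in *.avi; do
    omxplayer -b "$f"
done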

RISC OS on the Raspberry Pi

Last weekend, I was finally able to dedicate some free time to play a little bit with the Raspberry Pi again, so I decided to plug it into my TV and try RISC OS Open using the prebuilt RISC OS Pi (RC14) SD card image.

In fact, I already had a brief encounter with RISC OS running on Acorn hardware (most likely a Risc PC) a while ago at a French demoparty in the late nineties. I’m not sure how popular those machines were in the UK, but in France, they were about as exotic as it gets.

RISC OS in full glory on a 42-inch Panasonic TV in 1080p

Here is a screenshot showing the desktop running a few applications : BBC Basic, the StrongEd text editor, and the NetSurf Web browser pointed at my ASCii and ANSi Gallery :

RISC OS running BBC Basic, StrongEd and NetSurf

This capture was taken using Snapper and converted from sprite to PNG using ConvImgs.

The case for Nginx in front of application servers

As a rule of thumb, an application server should never face the Internet directly, unless of course Nginx (or OpenResty) is itself the application server. This is not only for performance reasons, which are less of a concern with modern runtimes such as Go or Node, but mostly for flexibility and security reasons.

Here are some key points to consider, followed by a small configuration sketch :

  • At this point, Nginx is a proven and battle-tested HTTP server
  • This allows keeping the application as simple as possible : Nginx will handle logging, compression, SSL, and so on
  • In case the application server goes down, Nginx will still serve a 50x page so visitors know that something is wrong
  • Nginx has built-in load-balancing features, and it also allows running several application servers behind the same IP address
  • Nginx has built-in caching features (with on-disk persistence)
  • Nginx has rich rate-limiting features, which are especially useful for APIs
  • Nginx helps protect against some DoS attacks (such as low-bandwidth Application Layer attacks)
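
As an illustration, here is a minimal reverse proxy sketch touching on some of these points; the upstream port, hostname, and paths are placeholders, not values from a real setup :

# Hypothetical application server listening on a local port
upstream app {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name example.com;

    # Nginx handles logging and compression for the application
    access_log /var/log/nginx/app.access.log;
    gzip on;

    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Served when the application server is down
    error_page 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/errors;
    }
}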

Lastly, one aspect which tends to be forgotten these days is the importance of server logs. While using Google Analytics or Piwik might be an acceptable solution in some cases, for measuring API traffic there is no better option than server logs. For a modern real-time log analyzer, I heartily recommend GoAccess.
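
As an illustration, assuming a default Nginx access log in combined format, GoAccess can be pointed at it directly (the log path is an assumption) :

goaccess /var/log/nginx/access.log --log-format=COMBINED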

Benchmarking HTTP servers

ApacheBench

ApacheBench (ab) is a tool bundled with the Apache HTTP server which can be used to benchmark any kind of HTTP server.

To benchmark localhost (100000 requests with 100 concurrent connections) :

ab -c100 -n100000 http://127.0.0.1/

Sample output :

Server Software:        nginx/1.6.2
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      100
Time taken for tests:   6.965 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      84400000 bytes
HTML transferred:       61200000 bytes
Requests per second:    14357.89 [#/sec] (mean)
Time per request:       6.965 [ms] (mean)
Time per request:       0.070 [ms] (mean, across all concurrent requests)
Transfer rate:          11834.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       3
Processing:     2    7   1.0      7      13
Waiting:        0    4   1.9      4      12
Total:          2    7   1.0      7      13

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      7
  75%      8
  80%      8
  90%      8
  95%      8
  98%      9
  99%     11
 100%     13 (longest request)
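
To compare results across several concurrency levels, ab can be wrapped in a small shell loop; a sketch which only keeps the throughput line (URL and values are arbitrary) :

# Run ab at increasing concurrency levels, keeping only the throughput line
for c in 10 50 100 200; do
    echo "Concurrency: $c"
    ab -c$c -n100000 http://127.0.0.1/ 2>/dev/null | grep "Requests per second"
done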

Beware though: ab is single-threaded, and this can be a bottleneck when benchmarking high performance HTTP servers, especially when testing on localhost.

Siege

A better alternative is siege, which is multi-threaded and has some interesting features, such as the ability to use multiple URLs during a test (an example is given after the sample output below).

To benchmark localhost (10000 requests with 100 concurrent connections) :

siege -b -c100 -r100 http://127.0.0.1/

The -b option allows running throughput benchmarking tests without delay between simulated users.

Sample output :

** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege...

Transactions:		       10000 hits
Availability:		      100.00 %
Elapsed time:		        1.02 secs
Data transferred:	        5.84 MB
Response time:		        0.01 secs
Transaction rate:	     9770.99 trans/sec
Throughput:		        5.70 MB/sec
Concurrency:		       52.74
Successful transactions:       10000
Failed transactions:	           0
Longest transaction:	        0.02
Shortest transaction:	        0.00
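
To exercise the multiple URLs feature mentioned above, the targets can be listed in a plain text file, one URL per line, and passed with the -f switch; a sketch using arbitrary paths :

cat urls.txt
http://127.0.0.1/
http://127.0.0.1/about.html
http://127.0.0.1/contact.html

siege -b -c100 -r100 -f urls.txt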

wrk

Wrk is a promising HTTP benchmarking tool with a modern architecture, and it is also scriptable with Lua.

To benchmark localhost (for 10 seconds with 100 concurrent connections) :

wrk -c100 http://127.0.0.1/

Sample output :

Running 10s test @ http://127.0.0.1/
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.02ms  794.02us  12.57ms   90.71%
    Req/Sec    13.27k     2.30k   30.50k    77.92%
  254557 requests in 10.00s, 206.10MB read
Requests/sec:  25455.48
Transfer/sec:     20.61MB
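
The number of threads, the duration, and a latency distribution report can also be requested explicitly, and Lua scripts can be loaded with the -s switch; a sketch with arbitrary values :

wrk -t2 -c100 -d10s --latency http://127.0.0.1/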