Decommissioning a free public API

This is a sequel to my “Adventures in running a free public API” post, which you should read for some background information about the reasons behind terminating the free public instance of Telize.

I stopped the service as planned, on November 15th, after ten days’ notice. I should mention that things were more complicated than they needed to be, due to poor initial planning on my part. When I launched the public instance, it was meant as a way to demonstrate the open source project, and to be honest I didn’t really anticipate the amount of traffic it would receive. So I didn’t bother configuring a subdomain to host the endpoints, and hosted them on the main domain instead. Had I used a subdomain, I could simply have removed its DNS records, effectively sinkholing it, and been done with it. As the HTML pages needed to stay accessible, my only option was to block the endpoints by returning an HTTP error code, so I configured Nginx to return a 403 Forbidden status code. This should have been it, right?
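
For reference, blocking the endpoints boiled down to a handful of lines. Here is a minimal sketch of such a configuration (the actual setup was more elaborate) :

location ~ ^/(ip|jsonip|geoip) {
	return 403;
}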

In an ideal world, maybe. In practice, it triggered a huge amount of retries on failed requests from poorly coded scripts, effectively creating a DDoS cannon and causing CPU and I/O usage (both network and disk, think of logging) to skyrocket. On a busy day, Telize received more than 130M requests, coming from 10M unique IP addresses, as one can see in this report generated by Logswan. We were now talking about almost 800,000 requests per minute, as shown in this other report generated from sampling one minute of traffic. At this point I had to switch to Nginx’s non-standard 444 status code, which simply closes the connection, in order to save bandwidth. On the day following the API termination, I received more than 1TB of incoming requests, which represents a huge amount of traffic, and of course bandwidth costs money. From a technical standpoint I can perfectly handle this kind of traffic, but from a financial one there is simply no way I can sustain it, and it doesn’t make sense for me to keep paying for data transfer overages.

So I had to move the site, as an emergency measure, to a static hosting platform (GitHub Pages in this case), which then returned a 404 error for all three endpoints : ‘ip’, ‘jsonip’, and ‘geoip’. As the last one was by far the most used, it probably triggered some alert mechanism and caused the appearance of a Varnish rule simply closing the connection on this endpoint, returning an empty response. The situation was now under control, and I was finally able to relax and get some much needed rest; this was Sunday evening, and I decided to call it a day and get some sleep. I’m extremely grateful to GitHub for saving the day, and in fact decided to subscribe to a paid plan as a way to show my appreciation. So, problem solved?
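
Switching was a one-line change per endpoint, with logging also disabled to spare disk I/O; again, a sketch rather than the exact configuration :

location ~ ^/(ip|jsonip|geoip) {
	access_log off;
	return 444;
}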

Not so fast! The next morning (on Monday), I was in for a surprise as my mailbox started to fill up with inquiries regarding Telize’s termination. Basically, code was crashing at random locations because people relying on a free service for their businesses had not been careful enough to implement correct error checking. One of those mails came from a set-top box manufacturer, stating that thousands of customers were unable to watch TV because their boxes crashed when Telize didn’t return any data, and demanding that I return an empty JSON object for a two-week period. I answered as fast as I could, explaining the situation and the fact that I had no control over this, as the endpoint had a Varnish rule closing the connection without sending any data. Realistically, had I chosen to fulfill the request, I would have had to go back to handling the traffic myself on my own servers. It didn’t end there, as the person wouldn’t take no for an answer, and came up with the brilliant idea of asking me to redirect the endpoint to a server they would host themselves, not just serving empty data this time, but simply restoring service entirely. I had to re-read the mail a couple of times to ensure my brain was not tricking me. On what grounds would I allow a third party to serve content on one of my domains? How on earth did I dare not act on this unreasonable request in a timely manner, warranting further annoyance both by mail and on Twitter, lasting for two days?
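
For what it’s worth, guarding against a disappearing endpoint doesn’t take much. Here is a minimal sketch in shell, using a hypothetical endpoint URL :

#!/bin/sh
# Hypothetical replacement endpoint; the public Telize instance is gone.
URL="https://geoip.example.com/geoip"

# -f makes curl fail on HTTP errors instead of handing an error page
# to the JSON parser, and --max-time bounds the wait.
if json=$(curl -sf --max-time 5 "$URL") && [ -n "$json" ]; then
	echo "$json"
else
	echo "geoip lookup failed, falling back to defaults" >&2
fi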

So basically, what’s the moral here? Some people just expect you to invest your own time and money to solve their problems, and to do it straight away, when it’s convenient for them, and of course without being compensated for it. Never mind that they were relying on a free service to begin with without ever telling you, or that you have a daytime job and other commitments. I would not have it so.

Hopefully, this whole story will at least teach some people that relying on a free service they have no control over means adding a single point of failure, one run on a volunteer basis. That’s perfectly fine for a side project, but for a business? I would think at least twice before making that kind of decision.

Adventures in running a free public API

The first iteration of Telize launched on April 20th, 2013. It started as a simple endpoint returning the client IP address in plain text, directly from Nginx, using the third party HTTP Echo module. As there was no application server involved, latency was very good and people started using it. The current iteration launched on August 21st, 2013, and introduced a REST API built on Nginx and Lua, allowing visitors to get their IP address and to query location information for any IP address.
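
The whole endpoint fit in a few lines of configuration. Here is a minimal sketch of such a setup (the original configuration may have differed) :

# Requires Nginx built with the third party HTTP Echo module.
location /ip {
	default_type text/plain;
	echo $remote_addr;
}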

The software itself is the result of countless hours of coding and testing, and has been open source since the beginning. I invested a lot of time and money running and managing the instances so that everyone could enjoy the service for free, and I’ve been mostly happy doing so for the last two and a half years.

Of course there have been some abuses, such as idiots scanning whole IP ranges (either as an attempt to harm the service or to rebuild a freely available database by iterating over the entire IPv4 address space), or companies leeching off a free service and sending substantial amounts of traffic. After all, the API was unrestricted and not rate limited, and I still think to this day that it was easier to add capacity as Telize grew than to implement quotas and regulate the service. So for the most part, I didn’t mind, and I’m happy people put the API to good use.
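
For the curious, here is roughly what rate limiting would have looked like in Nginx, had I gone that route (zone name and limits are made up for illustration) :

# In the http block: track clients by IP, allowing 10 requests per second.
limit_req_zone $binary_remote_addr zone=telize:10m rate=10r/s;

# In the relevant location block: apply the limit, absorbing short bursts.
limit_req zone=telize burst=20 nodelay;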

However, things changed when I discovered Telize was being used by malware and ransomware. Quite frankly, this is something I just can’t tolerate. On November 5th, I announced the decision to close the public instance with 10 days’ notice, effective November 15th. I simply do not have the time, energy, or resources to engage in fighting abuse.

So where do we go from here? Well, Telize is open source and can be downloaded as tarball releases or from GitHub. The project will keep being maintained, and anyone can install and run their own instance; there is no “vendor” lock-in. For those for whom the previous option is not possible, there will be a paid version to ease the transition. This is the only way to ensure that the service can’t be used for nefarious purposes without a trace.
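
For instance, assuming the repository location hasn’t changed, getting the code is a single command :

git clone https://github.com/fcambus/telize.git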

In retrospect, it has been a positive adventure and a nice surprise to see Telize grow steadily to serve more than 130M daily queries and give birth to new ideas. In fact, my very own Logswan was born out of the necessity of processing more than 20GB of logs daily.

Distributing files via DNS

After publishing my Storing ASCII art in the DNS post last year, I’ve been thinking about using the same method to distribute files.

While transmitting data over DNS is not a new concept, I believe it has never been done using NAPTR records. The well-known iodine DNS tunnel uses NULL resource records to transmit binary data, but I wanted something which could be used with standard tools like dig. On this topic, I recently read the Internet Draft defining the SINK resource record, and it seems like it could be used and abused for some fun hacks.

Back to today’s topic though. The idea behind this experiment is to encode a given file in base64, and then create a NAPTR record for each line of the output.

I used the following shell script to produce the zone file, passing the record owner name as its first argument :

# Emit one NAPTR record per line of base64 output; $1 is the record name.
counter=1
base64 rrda-1.01.tar.gz -b 64 | while read line; do
	echo $1 'NAPTR' $counter '10 "" "'$line'" "" .'
	counter=$((counter + 1))
done

Please note that this snippet was created and tested on Mac OS X. On Linux, the -b option needs to be replaced by -w to wrap encoded lines.
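
Assuming the script is saved as makezone.sh (a name picked for this example), records for a hypothetical files.example.com name can be generated like this :

sh makezone.sh files.example.com. >> example.com.zone

Each resulting record carries 64 base64 characters, i.e. 48 bytes of the original file, with the NAPTR order field keeping track of line ordering.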

We can query the zone to check the content of the NAPTR records, using files.example.com as a stand-in for the actual record name :

dig files.example.com NAPTR +short +tcp

Once we get the NAPTR records back, we can sort them to restore the original line ordering, strip the leading and trailing data from each record, and decode the base64 payload to recreate the file.

And here is a one-liner to get the original file back and pipe it through tar; note the numeric sort, needed to restore line ordering since the order field is not zero-padded :

dig files.example.com NAPTR +short +tcp | sort -n | sed -e 's/[0-9]* 10 "" "//g;s/" "" .//g' | base64 --decode | tar xvfz -

For extra points, the zone used to distribute files can be signed with DNSSEC in order to create a secure file distribution channel. This is left as an exercise for the reader.

RDing TEMPer Gold USB thermometer on OpenBSD

A few weeks ago, I ordered an RDing TEMPer Gold USB thermometer from PCsensor, a cute little device which makes it easy to measure room temperature.

As mentioned on the package, a USB cable should be used unless the goal is to measure chassis temperature.

TEMPer USB thermometer

On OpenBSD, the device is fully supported by the ugold(4) driver :

uhidev0 at uhub0 port 4 configuration 1 interface 0 "RDing TEMPerV1.4" rev 2.00/0.01 addr 3
uhidev0: iclass 3/1, 1 report id
ukbd0 at uhidev0 reportid 1: 8 variable keys, 5 key codes
wskbd1 at ukbd0 mux 1
wskbd1: connecting to wsdisplay0
uhidev1 at uhub0 port 4 configuration 1 interface 1 "RDing TEMPerV1.4" rev 2.00/0.01 addr 3
uhidev1: iclass 3/1
ugold0 at uhidev1
ugold0: 1 sensor type ds75/12bit (temperature)

Sensor values can be retrieved via the sysctl interface :

sysctl hw.sensors.ugold0
hw.sensors.ugold0.temp0=26.75 degC (inner)    

Alternatively, the -n switch can be used to only display the field value :

sysctl -n hw.sensors.ugold0
26.75 degC (inner)
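
To act on these readings automatically, the sensor can be watched with the stock sensorsd(8) daemon. A minimal /etc/sensorsd.conf entry could look like this (the thresholds are arbitrary examples) :

hw.sensors.ugold0.temp0:low=10C:high=30C

Once sensorsd is running, threshold crossings are reported via syslog.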

GPU-accelerated video playback with NetBSD on the Raspberry Pi

NetBSD 7 gained support for hardware acceleration on the Raspberry Pi last January, and OMXPlayer was subsequently imported into Pkgsrc. This combination allows seamless video playback directly in the console.

For testing this setup, I used Jun Ebihara’s prebuilt NetBSD RPi image and packages.

Installing OMXPlayer using binary packages :

pkg_add omxplayer

Playing a video after blanking the screen :

omxplayer -b captain-comic.avi

Unsurprisingly, this works very well, and the player is quite pleasant to use.