Planet Haskell

June 9, 2008

Yes, the owner of this blog is requesting to get added to Planet Haskell – is this authentication enough? ;-)

Recommended Reading

May 25, 2008

I’ve been wanting to advance my education through a PhD program for a while now. As such, I’ve been reading a reasonable number of papers, mostly in the field of programming languages (with a strong bias toward SPJ’s work), but also in ad-hoc networks (with a strong bias toward Baruch Awerbuch’s papers). I can’t say I’m too selective about what I like, but here are some of my likes anyway. Enjoy, and feel free to post your own papers or any discussion of the ideas presented in these.

Within The World of Languages

Simon Peyton-Jones, “Call-pattern Specialisation for Haskell Programs” *

Simon Marlow et al, “Faster Laziness Using Dynamic Pointer Tagging”

Simon Peyton-Jones et al, “Playing by the Rules: Rewriting as a practical optimisation technique in GHC” *

Tom Schrijvers et al “Type Checking with Open Type Functions” *

Duncan Coutts et al “Stream Fusion: From Lists to Streams to Nothing at All” *

Neil Mitchell and Colin Runciman, “A Supercompiler for Core Haskell” * (Looks great, but I want to try it on my own programs to see if it will benefit me as much as I hope)

Peng Li et al “Lightweight Concurrency Primitives for GHC” * (A simpler to understand RTS would be great, but I fear for the performance)

Robert Ennals et al “Task Partitioning for Multi-Core Network Processors” *

Peng Li and Steve Zdancewic “Encoding Information Flow in Haskell” (perhaps not sound, but certainly useful)

Tim Harris and Simon Peyton Jones “Transactional Memory with Data Invariants” * (some functions aren’t available in the standard GHC/STM load, but the paper is fun anyway)

Dana Xu et al “Static Contract Checking for Haskell” * (I don’t know about you, but I almost can’t wait to see the work embodied in a GHC release!)

Every name you know, “Roadmap for Enhanced Languages and Methods to Aid Verification”

Ad hoc / Distributed Systems / Protocols

Baruch Awerbuch et al “Towards Scalable and Robust Overlay Networks” (See the entire line of papers, including “A Denial-of-Service Resistant DHT” and “Towards a Scalable and Robust DHT”)

Rudolf Ahlswede et al, “Network Information Flow”

Sachin Katti et al “Network Coding Made Practical” * (Now why isn’t this an option when I click network manager -> ad hoc network in Fedora 9?)

Joshua Guttman “Authentication Tests and the Structure of Bundles” *

Baruch Awerbuch et al “Provably Competitive Adaptive Routing” *

Baruch Awerbuch et al “Medium Time Metric” (This one is just begging for someone to write a paper “The opportunity cost metric”, don’t you think?)

* Easy read (even if it isn’t your field) / very enjoyable

I posted this as a page by accident – so here it is as a blog entry and I’ll delete the page some day.

My previous post discussed how inet_ntoa uses a static buffer, which can cause a race condition. Unlike in ‘C’, this is particularly likely to cause a race in Haskell programs because forkIO threads are quick, easy, and cheap, and (potentially) share a single OS thread. Two bright spots were that inet_ntoa is marked as IO and that the result is usually unimportant.

Another FFI binding, nano-md5, has a similar race condition but is much more serious (the function is not marked as IO, and the result is a digest).

An even-handed note: IIRC, nano-md5 remains on Hackage mostly as an FFI example – not that this is advertised in the nano-md5 description. “Real” users are told to look at HsOpenSSL and hopenssl – a cursory glance at the code suggests they don’t have this bug. Also, the other bindings don’t require O(n) space – so they are certainly worth switching to.

The nano-md5 line:

digest <- c_md5 ptr (fromIntegral n) nullPtr

is the culprit. It passes ‘nullPtr’, and according to the OpenSSL manual, “If md is NULL, the digest is placed in a static array”.
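The fix amounts to allocating a fresh output buffer per call instead of passing nullPtr. A minimal sketch of the pattern – the digest call is mocked here with a stand-in (`mockDigest` and its fill logic are my own, not nano-md5’s API); real code would hand the buffer to c_md5:

```haskell
import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Marshal.Array (peekArray, pokeArray)
import Data.Word (Word8)

-- Stand-in for c_md5: writes 16 bytes derived from the input into a
-- buffer the caller provides. The point is the buffer discipline:
-- each call gets private storage, so concurrent calls cannot
-- interleave their results the way a shared static array allows.
mockDigest :: [Word8] -> IO [Word8]
mockDigest input = allocaBytes 16 $ \md -> do
  pokeArray md (take 16 (cycle (0xAA : input)))  -- pretend "hash"
  peekArray 16 md                                -- copy out before the buffer is freed

main :: IO ()
main = do
  d1 <- mockDigest [1,2,3]
  d2 <- mockDigest [9,9,9]
  print (d1 /= d2)
```

The same shape works for the real binding: allocaBytes MD5_DIGEST_LENGTH, pass the pointer as the md argument, peek the bytes out.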

Test code that confirms the bug can be found here – it runs three hash operations in parallel, and eventually one result has the correct first bits with ending bits from one of the other digests. The developer has already fixed the issue for versions 0.1.2+. I’ll wrap this post up with a request to library developers: please avoid static buffers – they have no place in this forkIO-happy playland I call Haskell.

Racing inet_ntoa

April 24, 2008

Just because I am feeling lazy wrt any real task, I decided to post about the silliness that is inet_ntoa. Yes, this is ancient/known stuff to rehash, but you can hit the browser back button at any time.

As most of you probably know, the function inet_ntoa converts an IPv4 address to ASCII, storing the result in a static buffer. It is this last part that periodically trips people up when they forget it. This mutable-memory issue is revealed easily enough in goofed-up ‘C’ statements such as:

  struct in_addr a, b;
  inet_aton("10.0.0.1", &a);  /* illustrative addresses; any two distinct ones do */
  inet_aton("10.0.0.2", &b);
  printf("addr a: %s\taddr b: %s\n", inet_ntoa(a), inet_ntoa(b));

which prints the same address twice – both calls return a pointer to the same static buffer, so both %s arguments see whichever conversion was evaluated last. Sometimes more complex systems have a race condition (e.g. exception handlers calling inet_ntoa), but it isn’t a larger issue in multi-threaded C programs thanks to thread-local storage…

unless you cram many logical threads into a single OS thread, as in Haskell. Zao in #haskell asked why inet_ntoa has an IO type (meaning it isn’t a pure/deterministic function), and I correctly guessed it was a wrapper for the ‘C’ call.

Not to rip on the libraries folk, who made a faithful foreign function interface for the sockets/networking functions, but – this was a bad idea. Foremost, the use of IO means this can’t be called from any pure function, even though the desired operation of converting an address to a string IS deterministic. Secondly, some Haskell programmers (myself included) use Haskell’s threads liberally (perhaps another, positive, blog post on that). So if someone is being brain-dead, they are going to have a bug – likely non-fatal and obvious, given how string representations of addresses are used.
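To see how little IO is actually needed, here is a pure conversion sketch of my own (not the network library’s API). One caveat: this reads the Word32 with the most significant octet first, whereas the real inet_ntoa reinterprets the in-memory bytes of a network-byte-order value, so the byte-order handling is an assumption:

```haskell
import Data.Bits (shiftR, (.&.))
import Data.List (intercalate)
import Data.Word (Word32)

-- HostAddress is just a Word32; slicing it into dotted-quad octets
-- needs no IO and no shared buffer, so it races with nothing.
showIPv4 :: Word32 -> String
showIPv4 w = intercalate "." [ show ((w `shiftR` s) .&. 0xFF) | s <- [24, 16, 8, 0] ]

main :: IO ()
main = putStrLn (showIPv4 0x7F000001)  -- prints 127.0.0.1
```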

And if you desire to see the race, I have some code… hope it runs… yep:

import Network.Socket (inet_ntoa)
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (when, forever)

main = do
    let zero = (0x00000000, "0.0.0.0")      -- byte-order-symmetric test values;
        one  = (0x01010101, "1.1.1.1")      -- every octet is equal, so the expected
        two  = (0x02020202, "2.2.2.2")      -- string is the same on any platform
        assert = confirm "assert"
        race = \x -> forever (confirm "FAILURE: " x)
    assert zero
    assert one
    assert two
    forkIO $ race zero
    forkIO $ race one
    forkIO $ race two
    threadDelay maxBound

test n s = forever $ confirm "" (n,s)

confirm prefix (n,str) = do
    s <- inet_ntoa n
    when (s /= str) (error (prefix ++ s ++ " does not equal " ++ str))

Yes, I know this non-deterministic behavior is being screamed by that ‘IO’ type.
Yes, I know I should write a Haskellized network library.

I’ve always thought the ideal work would include a healthy dose of open source. To this end, I try to make my time-wasting activities (slashdot, proggit) worthwhile by noting the various business models and their success. This is an informal brain dump with extra facts pulled in from Wikipedia – as such, you should confirm any information before long-term storage in your brain.


For idealistic reasons, the preferred model would be like Red Hat’s. You compose, improve, and build open source projects and products, providing them for free and selling services (consulting, tailored development, support).

The main problem here is that, as a theoretical small business owner, I want to sell products and not services. In addition to the discrete nature of products (as opposed to the continuing duties of services), the constructive feel of a product-oriented business seems nice in contrast to the droning of a service-oriented business. Here is a question: is the concept of open source at odds with product-driven revenue?


The Xen method seemed to be to make an open source component and sell closed-source support tools. This appeared aimed at preventing Red Hat et al. from ‘stealing’ the GPL code while owing nothing to XenSource. I say “seemed” because the true aim could have been to get bought out – as XenSource was on 22Oct2007, by Citrix, for $500M.


A well-known (and often successful) method of dual licensing was employed by Trolltech. Trolltech owned the source to Qt, which they monetized by selling proprietary licenses in addition to offering the source code under the GPL. Like XenSource, Trolltech was bought out: on 28Jan2008 Nokia offered (and Trolltech accepted) a $163M buyout.


Like Trolltech, MySQL offered both support and dual licensing. I notice this is referred to as a ‘second generation’ open source company on Wikipedia – I’m not sure how common this term is, but I take issue with the idea that the distinguishing factor between second- and first-generation open source companies is their ability/willingness to sell closed-source licenses. MySQL was purchased by Sun on 16Feb08 for $1B… are we seeing a trend yet?

The Jabberwolk

March 21, 2008

Welcome to the Jabberwolk. This is my blog, recently moved. Here I will try to stay on the topics of Haskell, Xen, and Linux, but anything technical is fair game, and I’ll warn you that politics might appear (though I’ll try to keep that down).

Edit: My old blog is here.

