Friday, October 10, 2008

Great Programming Quotes

I found a great post on stackoverflow.com, the new joint venture between Joel Spolsky and Jeff Atwood. Most of the stuff there hasn't been very interesting to me (I don't work with .Net or Java, and a lot of the other questions seem fairly basic). Recently someone asked people to list their favorite programming quotes. The results are great; here are some of my favorites:
Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
Brian Kernighan: Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Tom Cargill: The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time.
Isaac Asimov: The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' but 'That's funny...'
Alan Kay: I invented the term Object-Oriented, and I can tell you I did not have C++ in mind.
P. J. Plauger: My definition of an expert in any field is a person who knows enough about what's really going on to be scared.
Edsger Dijkstra: If debugging is the process of removing software bugs, then programming must be the process of putting them in.
Edsger Dijkstra: Computer Science is no more about computers than astronomy is about telescopes.
Antoine de Saint Exupéry: Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Donald Knuth: We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Rich Cook: Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
Jamie Zawinski: Linux is only free if your time has no value.
Brooks' The Mythical Man-Month: Plan to throw one away; you will anyway.
R. Buckminster Fuller: When I am working on a problem I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong.
John Carmack: Fight code entropy.
Douglas Adams: I (...) am rarely happier than when spending an entire day programming my computer to perform automatically a task that would otherwise take me a good ten seconds to do by hand.

Sunday, June 8, 2008

Enterprise Man Boobs

Jim Webber and Martin Fowler gave a hilarious talk about the Enterprise Service Bus at QCon 2008 entitled "Does My Bus Look Big In This?". You can watch the whole thing, with slides, at InfoQ, which has joined the TED talks as one of my favorite places to waste my employer's valuable time.

The talk is fairly content-free, but they are very animated and have a lot of surprisingly funny jokes scattered throughout, including an improv sequence on man boobs that is destined to spawn an entry in the Urban Dictionary. Their point is that the Internet is a better model for Enterprise Integration than whatever model is behind things like SOAP. I couldn't agree more: I've been reading Steve Vinoski's blog just for the vicarious thrill that comes from watching someone smarter than you prove that something you hate does, in fact, suck.

Saturday, February 23, 2008

Notes for "The Landscape of Parallel Computing Research"

Introduction

First, the paper itself: The Landscape of Parallel Computing Research: The View From Berkeley

Conclusions of interest to software engineers:

  • The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems.
  • To maximize programmer productivity, future programming models must be more human-centric than the conventional focus on hardware or applications.
  • To be successful, programming models should be independent of the number of processors.
  • Traditional operating systems will be deconstructed and operating system functionality will be orchestrated using libraries and virtual machines.

The thesis of the paper is that real world applications and hardware are naturally parallel. Therefore we need a programming model, system software and supporting architectures that are naturally parallel. This will require a fundamental re-design.

The paper lists some items of conventional wisdom that have been, or will soon be, superseded. Here are a few of interest to software engineers:

  1. Old: Multiply is slow, but load and store are fast.
    New: Load and store are slow, but multiply is faster (by as much as two orders of magnitude).
  2. Old: Don't bother parallelizing your application, as you can just wait a little while and run it on a much faster sequential computer.
    New: It will be a very long wait for a faster sequential computer (perhaps five years).

The rest of the paper is organized around four themes: applications, programming models, hardware and evaluation.

Applications

The paper lists 13 "dwarfs", which are patterns of computation (within a processor) and communication (between processors) that characterize certain classes of applications. The first seven are numerical methods from scientific computing:

  1. Dense Linear Algebra
  2. Sparse Linear Algebra
  3. Spectral Methods
  4. N-body Methods
  5. Structured Grids
  6. Unstructured Grids
  7. Monte Carlo (generalized to "MapReduce" in the paper and considered "embarrassingly parallel")

In addition to those seven, the paper's authors add these six:

  1. Combinational Logic (CRC codes, Hashing, Encryption)
  2. Graph Traversal (Quicksort, Decision Trees)
  3. Dynamic Programming
  4. Backtrack and Branch-and-Bound
  5. Graphical Models (Bayesian Networks, Hidden Markov Models)
  6. Finite State Machine (may be "embarrassingly sequential")

There is an interesting note at the end of the section on dwarfs:

In the era of multicore and manycore, popular algorithms from the sequential computing era may fade in popularity. For example, if Huffman decoding proves to be embarrassingly sequential, perhaps we should use a different compression algorithm that is amenable to parallelism.
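
To make "embarrassingly parallel" concrete, here is a toy sketch of the Monte Carlo dwarf: estimating pi by throwing random points at the unit square, split into chunks that never need to talk to each other. This is my own illustration, not from the paper; it uses the parallel collections that later versions of Scala picked up, and the chunk counts are arbitrary.

import scala.util.Random

object MonteCarloPi {
  // Count how many of `samples` random points land inside the quarter circle.
  // Each chunk is seeded independently, so chunks can run in any order,
  // on any core, with no communication: the "embarrassingly parallel" case.
  def hits(samples: Int, seed: Long): Int = {
    val rng = new Random(seed)
    (1 to samples).count { _ =>
      val (x, y) = (rng.nextDouble(), rng.nextDouble())
      x * x + y * y <= 1.0
    }
  }

  def main(args: Array[String]): Unit = {
    val chunks = 8
    val perChunk = 1000000
    // .par spreads the chunks over the available cores; they could just as
    // easily be farmed out to separate machines.
    val total = (0 until chunks).par.map(i => hits(perChunk, i)).sum
    println("pi is roughly " + 4.0 * total / (chunks.toLong * perChunk))
  }
}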

The paper goes on to describe two ways to compose the dwarfs. The first is via "temporal distribution", which involves using a common set of processors for the composed dwarfs with some sort of time sharing. The other is via "spatial distribution", in which the different dwarfs run on separate (sets of) processors. The paper makes it clear that optimizing the composition of dwarfs can be incredibly difficult in practice due to issues with loose coupling (to promote code reuse), data structure translation (the need for one dwarf to use the same data with a different structure) and the difficulty of predicting the format of the data at the start of the computation.

The authors cite an Intel Study that lists three categories of computing that are expected to increase in demand:

  • Recognition: machine learning approaches in which data is examined and models are constructed.
  • Mining: searches the web to find instances of the models.
  • Synthesis: creation of new models as in graphics.

The paper contains a very interesting graphic (on page 16) that maps some common tasks from various disciplines to a set of basic mathematical primitives (which are, presumably, what the hardware manufacturers are targeting).

Hardware

The bulk of the hardware discussion is about MPU-based systems.

Section 4.1.3 has an interesting discussion about heterogeneous collections of processors in the same system. The idea is that the sequential parts of a program can be the bottleneck on a parallel architecture. An interesting solution to that problem is to include a few more powerful processors in the system that are dedicated to sequential jobs. They refer to Amdahl's Law to illustrate the point. The question of how to identify and schedule these sequential elements is left open, and they admit that the overhead of managing a heterogeneous collection of processors could be prohibitive.
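
Amdahl's Law is worth making concrete, since it drives the whole argument for a few fat cores. Here is a tiny sketch (mine, not the paper's) that computes the bound for a workload that is 95% parallelizable; the numbers are purely illustrative.

object AmdahlSketch {
  // Amdahl's Law: the best possible speedup when a fraction `parallel` of the
  // work can be spread over n processors and the rest stays sequential.
  def speedup(parallel: Double, n: Int): Double =
    1.0 / ((1.0 - parallel) + parallel / n)

  def main(args: Array[String]): Unit = {
    // Even with 1000 cores, a 5% sequential portion caps the speedup near 20x,
    // which is why dedicating a couple of beefier cores to the sequential
    // parts can matter more than adding lots of small ones.
    for (n <- List(4, 16, 64, 1000))
      println("p = 0.95, n = %4d -> speedup %.1fx".format(n, speedup(0.95, n)))
  }
}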

Section 4.3 discusses the interconnections between processors in a manycore system. They make a distinction between point-to-point communication between two processors and global communication that involves all the processors. They claim that the latter is composed of smaller messages that are latency-bound, while the former are somewhat larger and, as a result, bandwidth-bound. They posit using a different set of interconnects for each of these use cases.

The bandwidth-bound communication can take place over internal, packet-switched networks on the chip, whose interconnect topology can then be modified on demand to meet the needs of a particular application. For latency-bound communication they recommend something like what BlueGene/L does: a tree network for collective communication (BlueGene/L also uses a torus interconnect for point-to-point messages).

Section 4.4 discusses communication primitives for MPU systems, which they are now calling "chip multiprocessors" or CMPs (no wikipedia link on that name, yet). They claim that CMPs have higher inter-core bandwidth and lower latency. They also claim that they can offer new lightweight coherency primitives that operate between cores. They use these claims to motivate the notion that programming a CMP is fundamentally different from programming an SMP.

The first step is to note that cache coherency schemes will need an overhaul to handle the number of processors in a CMP. They don't give any clear examples of what they mean by this, except to note that on-chip caches with "private or shared configurations" could work. They go on to discuss software transactional memory as an approach to synchronization between processors. It isn't clear, at this point, what the memory model for CMP systems looks like; i.e., it isn't obvious how much of the memory is shared (as it is in a traditional SMP system) and how much of it is private to each processor. The note on message passing in section 4.4.5 seems to indicate that the memory is shared, and that message passing is more efficient in a CMP than in an SMP. This seems to be because CMP systems are implemented on a single chip, whereas SMP systems use multiple chips plugged into the same memory subsystem.

Programming

The section starts with a well-written piece on why the tradeoff between implementation efficiency and programmer productivity is so difficult. If the abstraction level is too high, it becomes impossible to optimize performance; if it is too low, all the time is spent dealing with the underlying details. In both cases productivity suffers, and IT productivity is already low enough thanks to reddit and digg.

The paper separates programming languages into three groups: hardware-centric (IXP-C), application-centric (MatLab) and formalism-centric (Actor-based). They describe these as corresponding to efficiency, productivity and correctness (in that order). They decry the lack of effort put into devising languages that have proven effects on the psychology of the people writing programs in them. They give, as an example, the notion that software transactional memory is easier to reason about than traditional concurrency primitives. They promote the use of actual user studies to validate the efficacy (or lack thereof) of languages. This section is far too light on details, and misses major issues with leaky abstractions (i.e., even with "simple" models there are cases where real knowledge of the underlying complexity is crucial). It is difficult to shield programmers from the reality of the hardware they are working with, although god knows Java is doing its level best here.
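
The software transactional memory example is easy to make tangible. Here is a minimal sketch using the ScalaSTM library (my choice of library; the paper doesn't name one). The psychological appeal is exactly what they describe: the transfer either commits entirely or not at all, and there is no lock ordering to reason about.

import scala.concurrent.stm._

object StmSketch {
  val checking = Ref(100)
  val savings  = Ref(0)

  // Both updates happen atomically; conflicting transactions are retried
  // automatically instead of deadlocking.
  def transfer(amount: Int): Unit =
    atomic { implicit txn =>
      checking() = checking() - amount
      savings()  = savings()  + amount
    }

  def main(args: Array[String]): Unit = {
    transfer(25)
    println((checking.single(), savings.single()))  // prints (75,25)
  }
}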

There is a brief section on making the programming model independent of the number of processors. They describe MPI as the dominant paradigm in parallel programming for scientific computing, and note that it requires the programmer to explicitly map tasks to processors. The paper doesn't have a lot to offer here, other than the fact that this is still a very open research area, and it would be nice if it weren't.

As an aside, it is interesting that they don't mention languages like Erlang, as these are the hot new thing in distributed/parallel computation (at least, for "alpha geeks", as Tim O'Reilly loves to call them). They do briefly mention the Actor model, which has roots back into the seventies. Erlang is a member of this family, which they refer to as a "formalism-centric" approach, even though it is also very application-centric (at least, it was for Ericsson).
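
For the curious, here is what the Actor style looks like in Scala itself, using the scala.actors library that ships with the language (a throwaway sketch; the worker and messages are invented). As in Erlang, the actor owns its state and the only way to interact with it is by sending messages.

import scala.actors.Actor._

object ActorSketch {
  def main(args: Array[String]): Unit = {
    // A one-shot worker: wait for a number, reply with its square, then finish.
    val squarer = actor {
      react {
        case n: Int => reply(n * n)
      }
    }
    println(squarer !? 7)  // synchronous send-and-wait; prints 49
  }
}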

The next few sections deal with compilers and operating systems. They discuss "auto-tuners", which can automatically tune a kernel for use on a given system. An auto-tuner does this by mutating the optimization parameters and testing until it finds an ideal (or near-ideal) setting for the target hardware. This seems very promising, although it isn't clear how widely applicable it is. They go on to discuss the need for a more flexible OS that allows applications to use only the capabilities they really need. They point to virtual machines as an indication that the time has come for this sort of optimization.
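
Here is a toy version of the auto-tuner idea, just to pin down what "mutating the optimization parameters and testing" means in practice. The kernel and the candidate block sizes below are invented for illustration; real auto-tuners such as ATLAS search a far larger parameter space.

object ToyAutoTuner {
  def time(body: => Unit): Long = {
    val start = System.nanoTime
    body
    System.nanoTime - start
  }

  // The "kernel": sum an array in strides of `block`.
  def kernel(data: Array[Double], block: Int): Double = {
    var sum = 0.0
    var base = 0
    while (base < data.length) {
      var i = base
      val end = math.min(base + block, data.length)
      while (i < end) { sum += data(i); i += 1 }
      base += block
    }
    sum
  }

  def main(args: Array[String]): Unit = {
    val data = Array.fill(1 << 20)(1.0)
    val candidates = List(64, 256, 1024, 4096)
    // Time each candidate on this machine and keep whichever ran fastest.
    val best = candidates.minBy(b => time(kernel(data, b)))
    println("fastest block size on this machine: " + best)
  }
}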

This is the section I was most looking forward to, and I found it disappointing. It felt like there was more hand-waving than real science here. I agree, in principle, that programming language development should be the subject of user testing, but it isn't clear (at all) how one would go about such a monumental project, or that the result would be any better than our current languages. In the end, the humans that program computers are significantly more flexible than the computers themselves.

The real wins in that area have been tools like compilers and auto-tuners, as they point out. These are the things that allow us to work at a much higher level of abstraction. Perhaps the growing power of computers will continue that trend, so that close optimization won't be as important as productivity. The success of languages like Ruby seems to indicate that this is true, as its performance is rather awful. In that case, however, we're making an explicit trade-off between programmer productivity and implementation efficiency, with little regard for the latter. The paper seems to want to take a more nuanced position, but doesn't back it up with enough examples. The example of software transactional memory (one of the few they use) is good, but limited. It isn't clear that you can wholly replace existing models of concurrency with an STM model (or that you would want to if you could).


Friday, April 27, 2007

Hello Scala: Reservoir Sampling

I've started to climb the rather steep learning curve for Scala. The documentation for Scala is mostly fairly high level, and there aren't as many code samples as I would have liked. I'm hoping to help out with that problem here. I've selected what I think is a fairly interesting problem and written Scala code to solve it in three different ways. I had a lot of fun doing it, and I think the code really shows how flexible this language is. I actually wrote it in another dozen ways, but these are the ones that turned out best (some of the others were really horrendous). Hopefully some of you will find this helpful in your own efforts to learn Scala.

I have interviewed nearly four hundred software engineers in the last four years. As a result I have developed an unhealthy fascination with interview questions, and I have a lot of opinions on their relative merit. A friend who has been out interviewing sent me this one recently:

Write a program to select k random lines (with uniform probability) from a file in one pass. You do not know n (the total number of lines) ahead of time.

This is a tough problem if you've never heard it before (or don't happen to be quite good with probability). It took me about twenty minutes to come up with a correct algorithm (along with a half dozen incorrect ones). The correct algorithm had a righteous feeling to it, but I had to enlist the help of a friend (with a degree in math) to help me prove it. I don't think this is a particularly fair question to ask in an interview, unless you are willing to offer a lot of help. A good interview question has an involved algorithm, but an easy way to check its correctness. This is the opposite: the algorithm is fairly simple, but checking its correctness is time consuming.

It turns out that there are a number of correct algorithms for this problem, and some of them run in asymptotic time less than O(n)! The problem is one of a set of "reservoir sampling" algorithms, and there have been a number of papers written in the area. The earliest one that I could find was from Jeffrey Scott Vitter, published in the ACM Transactions on Mathematical Software, Vol. 11, No. 1, March 1985, Pages 37-57. It lists four algorithms, which it refers to as R, X, Y and Z. The average CPU time for Z is O(k(1 + log(n/k))). That probably isn't much better than the O(n) for algorithm R, since the I/O time is likely to dominate. In a situation where you are reading a stream of data from memory or a network card, however, it could be a big savings.

This problem seemed custom designed for a first project in Scala. It includes a lot of different features: file I/O, simple mathematics, conditional logic and some iterative processing. It is also quite interesting, and likely to be very useful for things like log parsing and file sampling (as input to unit tests, for instance). I decided to implement algorithm R since it was the most straightforward.

The intuition behind algorithm R is something like the following. Any correct solution must return min(k, n) lines, and all n lines in the file must have an identical probability of being selected: k / n. The first constraint can be satisfied by just choosing the first k lines with probability one. Each line after the k'th should then be selected with probability k / i, where i is the line number within the file; if i == n, we have selected the last line with the correct probability. If we select the i'th line, we choose a line uniformly at random from the already-selected set to replace. (One way to see that this works: line i enters the reservoir with probability k / i and survives each later step j with probability 1 - (k/j)(1/k) = (j-1)/j, so the probabilities telescope to k / n.)

I started by writing the algorithm using standard imperative constructs. Using this approach with Scala produces code that looks a lot like Java:

import java.io._
import Console._
import Math._

object ReservoirSampleImperative {
  def algorithmR(readers: Iterator[BufferedReader], k: int) : Array[String] = {
      val a = new Array[String](k)

      var t = 0

      while (readers.hasNext) {
          val reader = readers.next
          var line = reader.readLine
          while (line != null) {
              // t lines have been read so far, so this is line t + 1;
              // keep it with probability k / (t + 1), replacing a random slot.
              if (t < k)
                  a(t) = line
              else if (random < (k.toDouble/(t+1).toDouble))
                  a((random * k.toDouble).toInt) = line

              t = t + 1
              line = reader.readLine
          }
      }

      a
  }

  def main(args: Array[String]) : Unit = {
      val iter = args.elements
      val k = iter.next.toInt

      val readers = if (iter.hasNext)
          iter.map (f => new BufferedReader(
              new FileReader(f)))
      else
          Array[BufferedReader](new BufferedReader(
              new InputStreamReader(System.in))).elements

      val a = algorithmR(readers, k)
      for (val e <- a) println(e)
  }
}
This version actually has a lot going for it. Neither the Array class (which is where it gets its arguments) nor the Java I/O modules are very functional, so the OO/imperative approach works very well with them. Writing the program in a more functional style required that the I/O be converted to a Stream:
import java.io._
import Math._
import Console._

object ReservoirSampleFunct {
  def inputStream(i: Iterator[BufferedReader]) : Stream[String] = {
      def inputStreamString(b: BufferedReader) : Stream[String] = {
          val line = b.readLine()
          if (line == null)
              inputStream(i)
          else
              Stream.cons(line, inputStreamString(b))
      }

      if (i.hasNext)
          inputStreamString(i.next)
      else
          Stream.empty
  }

  def algorithmR(k: int)(p: (Int,Array[String]), s: String) : (Int, Array[String]) = {
      val (i,a) = p

      // i lines have already been folded in, so s is line i + 1;
      // keep it with probability k / (i + 1), replacing a random slot.
      if (i < k)
          a(i) = s
      else if (random < (k.toDouble/(i+1).toDouble))
          a((random * k.toDouble).toInt) = s
      (i+1, a)
  }

  def main(args: Array[String]) : Unit = {
      val i = args.elements
      val k = i.next.toInt

      val readers = if (i.hasNext)
          i.map (f => new BufferedReader(new FileReader(f)))
      else
          Array[BufferedReader](new BufferedReader(
              new InputStreamReader(System.in))).elements

      val (_,a) = inputStream(readers).foldLeft (
          (0, new Array[String](k))
      ) (algorithmR(k))

      for (val e <- a) println(e)
  }
}

There is still a lot of non-functional, side-effect-producing code here. I'm using Array for both the command line arguments and the list of k sampled lines. I think that is the real power of Scala, actually: both of those things are cases where having side effects really makes the programming easier. I could have written a helper function to replace one of the k lines in a List, but I think the resulting code would have been longer and no more understandable (in fact, probably less so).
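
Such a helper might look something like the snippet below: a purely functional "replace the element at index i" for a List (my sketch, with made-up names). It works, but threading an immutable list through the fold felt clumsier than just updating the Array in place.

// Rebuild the list with element i swapped out for x.
def replaceAt[A](xs: List[A], i: Int, x: A): List[A] =
  xs.take(i) ::: (x :: xs.drop(i + 1))

// replaceAt(List("a", "b", "c"), 1, "B") == List("a", "B", "c")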

A good friend, who has been at this longer than me, contributed this version of the code, which uses an implicit conversion. He notes that it probably isn't a great idea to do this in large-scale production code: the proliferation of implicit conversions can lead to some truly bizarre behavior due to unintended conversions. Still, it does nicely simplify some of the code from my previous example:

import java.io._
import Console._
import Math._

object ReservoirSampleImplicit {
  implicit def readerToStream(r: BufferedReader) : Stream[String] = {
      val line = r.readLine()
      if (line == null)
          Stream.empty
      else
          Stream.cons(line, readerToStream(r))
  }

  def main(args: Array[String]) : Unit = {
      val k: Int = args(0).toInt
      val r = new BufferedReader(new FileReader(args(1)))

      def algorithmR(p: (Int,Array[String]), e: String) : (Int, Array[String]) = {
          val (i,a) = p

          // i lines have already been folded in, so e is line i + 1.
          if (i < k)
              a(i) = e
          else if (random < (k.toDouble/(i+1).toDouble))
              a((random * k.toDouble).toInt) = e

          (i+1,a)
      }

      val (_,rs) = r.foldLeft (
          (0, new Array[String](k))
      ) (algorithmR)

      for (val r <- rs) println(r)
  }
}
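
Finally, for anyone who doesn't want to take the proof on faith, here is a quick brute-force sanity check (a sketch with arbitrary numbers): run algorithm R over the indices 0 until n many times and confirm that every index is selected with frequency close to k / n.

import scala.util.Random

object ReservoirCheck {
  // One pass of algorithm R over the integers 0 until n.
  def sample(n: Int, k: Int, rng: Random): Array[Int] = {
    val a = new Array[Int](k)
    for (i <- 0 until n) {
      if (i < k) a(i) = i
      else {
        val j = rng.nextInt(i + 1)  // uniform in 0..i
        if (j < k) a(j) = i         // element i+1 kept with probability k/(i+1)
      }
    }
    a
  }

  def main(args: Array[String]): Unit = {
    val (n, k, trials) = (10, 3, 100000)
    val counts = new Array[Int](n)
    val rng = new Random
    for (_ <- 1 to trials; i <- sample(n, k, rng)) counts(i) += 1
    // Every index should come out near k / n = 0.3.
    for (i <- 0 until n)
      println("index %d selected %.3f of the time".format(i, counts(i).toDouble / trials))
  }
}
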
That's it for now. I'm going to try to tackle some of Scala's Actor libraries for the next week or so. I'll post here if I come up with any interesting applications of them. If you know Scala and have a better way to solve the Reservoir Sampling problem, I would love to hear about it. Scala is a huge language with a lot of features, and I don't feel like I've even started to explore the space of possible ways to solve this problem using it.