
Threads, pipelines and the demise of Moore’s Law

I came across an interview with Donald Knuth from June of this year, in which he throws some cold water on the current trend toward multicore computers. An excerpt:

…I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop…

Strong words, but there’s a little tidbit later in the article suggesting where Knuth’s sympathies really are:

…So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! “Pipelines” actually work for me, but threads don’t. Maybe the word I want is “bubble.”)

I can relate to that. Being an old-school, PDP-11-era C programmer, I never really grasped the intricacies of synchronization or how to write a decent threaded algorithm. Threading was never intuitive for me. I understood it at an abstract level, but it always felt like thread libraries and APIs were an awful invention, forcing programmers to contort their code in ways that rarely matched the structure of the actual application.

I have far less trouble wrapping my head around streaming and pipelining. The model of multiple processes independently processing streams of data is intuitive because it exists all around us in the real world (see my next post on that). Even very complex systems with nested pipelines running at varying rates can be understood conceptually by programmers and non-programmers alike.

Over on Dobbs Code Talk there’s a blog post from James Reinders titled “Pipelines/Streams offer easy parallel programming”. In it, Reinders offers the following:

The “magic” which makes this all so easy for parallel programming comes from three things:

  1. to be parallel you need independent work to run in parallel: if you pipeline your work (streaming data) and you have no interdependencies other than the data streams themselves (no global side-effects) you get exactly that: independent work to run in parallel
  2. the pipeline stages themselves can be broken up to run in parallel by either data parallelism, or possibly a pipeline of their own (so nested parallelism is important)
  3. the very very sticky problem of data placement, which becomes a more and more severe problem in the future, is solved implicitly (the migration of data is very clear and very clean)

The above makes parallel programming with pipelined processes and streaming data seem rather simple and obvious. We do a lot of that kind of programming around here (using Impulse C), and yes, it’s a very good approach when targeting massively parallel architectures such as FPGAs; maybe it’s the only practical method at the moment. Personally, I would not characterize parallel programming and the design of highly pipelined algorithms as “easy”, but tools available today, including ours, make it practical for software programmers to write such programs and target non-traditional computing devices. The analysis and optimization of deeply pipelined, high-performance applications is still a significant challenge, but it is one that can be met with improved tools and with the more intuitive programming models that streaming and pipelining represent.
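
To make the idea concrete, here’s a minimal sketch of the pipeline model using nothing fancier than plain old Unix processes and pipes. This is not Impulse C and it isn’t representative of any real FPGA design; the three stages and the sum-of-squares workload are made up purely for illustration. Each stage is an independent process whose only connection to its neighbors is the data stream itself, which is exactly the “no global side-effects” property Reinders describes:

    /* Three-stage pipeline sketch: produce -> transform -> consume.
     * Each stage is a separate process; the only coupling between
     * stages is the byte stream flowing through a pipe. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Stage 1: produce a stream of integers. */
    static void produce(int out_fd) {
        for (int i = 1; i <= 1000; i++)
            if (write(out_fd, &i, sizeof i) != sizeof i)
                exit(1);
        close(out_fd);              /* closing the pipe signals end-of-stream */
    }

    /* Stage 2: square each value as it flows past. */
    static void transform(int in_fd, int out_fd) {
        int v;
        while (read(in_fd, &v, sizeof v) == sizeof v) {
            long sq = (long)v * v;
            if (write(out_fd, &sq, sizeof sq) != sizeof sq)
                exit(1);
        }
        close(in_fd);
        close(out_fd);
    }

    /* Stage 3: consume the stream and report a result. */
    static void consume(int in_fd) {
        long v, total = 0;
        while (read(in_fd, &v, sizeof v) == sizeof v)
            total += v;
        close(in_fd);
        printf("sum of squares = %ld\n", total);
    }

    int main(void) {
        int p1[2], p2[2];           /* p1: stage1->stage2, p2: stage2->stage3 */
        if (pipe(p1) < 0 || pipe(p2) < 0)
            return 1;

        if (fork() == 0) {          /* producer process */
            close(p1[0]); close(p2[0]); close(p2[1]);
            produce(p1[1]);
            _exit(0);
        }
        if (fork() == 0) {          /* transform process */
            close(p1[1]); close(p2[0]);
            transform(p1[0], p2[1]);
            _exit(0);
        }

        /* the parent acts as the consumer */
        close(p1[0]); close(p1[1]); close(p2[1]);
        consume(p2[0]);
        while (wait(NULL) > 0)      /* reap the two children */
            ;
        return 0;
    }

Nothing here needs a lock or a shared variable; end-of-stream is signaled simply by closing the write end of a pipe, and each stage is free to run on its own core. The hard part in real applications is balancing the stages and managing data rates, which is where better tools come in.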
