(Attention conservation notice: The two most important things I link to here can also be found in Neal Gafter’s recent “Is the Java Language Dying?” post, which you might well find more interesting anyway.)
Parallelization and multithreading were topics that kept coming up at JavaPolis — largely for the pragmatic reason that, as James Gosling pointed out, while Moore’s Law is still going strong, chipmakers seem to be running out of ways to turn more transistors into higher clock rates, and instead are using those transistors to pack on more cores. Most any desktop or laptop you buy this year is going to be dual-core, and servers are looking at four to eight. Of course, you can get pretty good value out of two cores just by websurfing in the foreground and compiling in the background, but a quick back-of-the-envelope, double-every-eighteen-months calculation gets you hundred-core consumer machines by 2017. What then?
A hundred cores might sound silly; but I expect the idea of a PS3 with eight specialized cores running at 3.2 GHz would have sounded silly when the first 33-MHz PlayStation shipped, too. Say it’s only fifty, though. Say it’s twenty. Say it’s ten. Most developers wouldn’t have a clue how to use ten cores efficiently, let alone a hundred. As a Swing developer, I generally feel pretty pleased with myself if I can keep track of more than just “the event dispatch thread” and “everything else.” We’re all going to have to get a lot better at multithreading if we want to make good use of next year’s hardware.
Gosling was pretty pessimistic. He called massive concurrency
The scariest problem out there… Thirty years of people doing PhD theses on parallel programming, and mostly what they’ve done is hit their heads against brick walls.
Well, there’s brick walls and brick walls. Luckily (I know I was complaining about not having a PhD from Carnegie Mellon earlier, but for purposes of this discussion, we’ll say “luckily”) I’m not doing a PhD thesis, so I can hope for other, harder heads to knock these particular walls down for me. And, as it happens — now we get to the point of this post, if you made it this far — harder heads than mine are working on it:
- Doug “concurrency” Lea’s Fork/join framework:
- A lightweight system for parallelizing tasks that can easily be divided into smaller subtasks. (PDF here, javadoc here. The code’s available as part of Lea’s util.concurrent package, though you’ll probably want to grab just the source for those classes, since a lot of the rest of the package — Futures and whatnot — is now superseded by the standard libraries.) This came up a lot at JavaPolis, mostly in the context of the closures argument: what sort of language features would make this kind of framework easier to use, whether we should all just be programming in Scala, and so on. It should be standard in Java 7.
- Haller and Odersky’s Actors model (PDF):
- A single abstraction that “unifies threads and events.” (Apparently the actor model was popularized by Erlang.) If I’m reading it right, actors are worker objects that encapsulate a thread and can receive and react to arbitrary messages, either synchronously (event-style) or asynchronously (thread-style). I haven’t dug too far into this paper. Haller’s got an actors tutorial that may be more accessible. Of course, it’s all in Scala, and the fact that it’s not that easy (perhaps not even possible?) to do in straight Java is one of the motivations behind BGGA closures. There’s also a long Wikipedia article on the general concept.
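To make the divide-and-conquer idea behind fork/join concrete, here’s a small sketch written against the `ForkJoinPool`/`RecursiveTask` API slated for Java 7. The task, the threshold, and all the names below are mine, not Lea’s — the pattern is just “split the work in half until it’s small enough, then do it directly”:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sum a long[] by recursively splitting it in half.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // below this, don't bother splitting
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {  // small enough: just sum it sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                           // hand the left half to the pool
        return right.compute() + left.join();  // do the right half here, then wait
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // prints 4999950000
    }
}
```

The nice part is that the pool’s work-stealing scheduler, not your code, decides how those subtasks map onto however many cores you happen to have — the same program should keep scaling from two cores to ten.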
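For a rough sense of what an actor even is, here’s a hand-rolled approximation in plain Java: an object that owns a mailbox and a private thread, and reacts to one message at a time. (All the names here — `Actor`, `send`, `react` — are mine, not Haller and Odersky’s API, and this captures only the heavyweight thread-per-actor case that their event-based actors are specifically designed to avoid.)

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a thread-per-actor "actor": a mailbox plus a worker thread
// that blocks on the mailbox and handles each message in arrival order.
abstract class Actor {
    private final BlockingQueue<Object> mailbox =
            new LinkedBlockingQueue<Object>();
    private final Thread thread = new Thread(new Runnable() {
        public void run() {
            try {
                while (true) react(mailbox.take()); // block until a message arrives
            } catch (InterruptedException e) {
                // interrupted: fall through and let the thread die
            }
        }
    });

    public void start() { thread.start(); }

    public void stop() { thread.interrupt(); }

    // Asynchronous send: enqueue the message and return immediately.
    public void send(Object msg) {
        try {
            mailbox.put(msg);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Subclasses define how to handle each message.
    protected abstract void react(Object msg);
}

public class ActorDemo {
    public static void main(String[] args) throws InterruptedException {
        Actor printer = new Actor() {
            protected void react(Object msg) {
                System.out.println("got: " + msg);
            }
        };
        printer.start();
        printer.send("hello");
        printer.send(42);
        Thread.sleep(100); // crude: give the mailbox time to drain
        printer.stop();
    }
}
```

The problem with this version is exactly the one the paper attacks: each actor burns a whole JVM thread, so you can’t have tens of thousands of them. Haller and Odersky’s event-based actors multiplex many actors onto few threads — which is where the closures come in.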
I keep stretching for some sort of pun involving threads, Lobachevsky’s parallel postulate, and Tom Lehrer’s well-known assertion that Lobachevsky was the greatest mathematician “who ever got chalk on his coat.” (MP3 here.) Lucky for you I’m not reaching it.