His code not ‘functional’ or ‘elegant’

I’m in the middle of another gold card, this one on Actors (PDF), a concurrent programming abstraction that Scala borrowed from Erlang. The Scala short tutorial on actors isn’t much help, at least not to someone who’s spent as little time with Scala as I have; mostly Mr. Haller just seems to show you the source code from the examples directory and say “see?” But I persevere.

In the course of said perseverance, I did run across one interesting link, which I post by way of bookmarking: Functional Java, “an open source library that aims to prepare the Java programming language for the inclusion of closures [and] serves as a platform for learning functional programming concepts by introducing these concepts using a familiar language.” Which sounds like exactly what Code Monkey needs.

Substantial forms of being

So I gave my pizza lunch talk on monads today, having over the past couple of weeks snuck in considerably more than a single Gold Card day’s worth of work. I think it went fairly well and was mostly comprehensible, though on the comprehensibility front it probably helped that a lot of my statements were wrapped in “I’m not sure what…”, “I’m not sure whether…” and “I’m not sure why…” —

— which, if you think about it, is really just a kind of Option (a.k.a. Maybe), so it’s totally appropriate.

At least, that’s (provisionally) my story and I’m (for now) sticking to it.

I got as far (I think) as figuring out what a monad is (mostly), and (I think) how the Scala List and Option monads work (mostly), largely by porting (partially) List and Option to Java. I mean to turn the talk into a post here, but that’ll take a couple of days — maybe longer if I get ambitious about cleaning up the code examples.
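
(By way of a preview, here’s roughly the shape the Option half of that port takes. This is my own boiled-down sketch, not the code from the talk, which has a few more moving parts.)

// A single-method function interface, since this is closure-less Java.
interface F<A, B> {
  B apply(A a);
}

// Option is either Some(value) or None, like Scala's Option or Haskell's Maybe.
abstract class Option<A> {

  // unit: lift a plain value into the monad.
  public static <A> Option<A> some(A value) {
    return new Some<A>(value);
  }

  public static <A> Option<A> none() {
    return new None<A>();
  }

  // bind (Scala's flatMap): apply f to the value if there is one,
  // otherwise stay None.
  public abstract <B> Option<B> flatMap(F<A, Option<B>> f);
}

class Some<A> extends Option<A> {
  private final A value;

  Some(A value) { this.value = value; }

  public <B> Option<B> flatMap(F<A, Option<B>> f) {
    return f.apply(value);
  }
}

class None<A> extends Option<A> {
  public <B> Option<B> flatMap(F<A, Option<B>> f) {
    return new None<B>();
  }
}

The point being that some(x).flatMap(f) chains a computation that might not produce a value, and none() short-circuits the whole chain, which is exactly the “I’m not sure what…” behavior from the talk.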

What I didn’t get to in any detail was Dan’s question about state, which was sort of the main point of this investigation, along with I/O and error handling. Always leave yourself some work for tomorrow, I’ve been told….

Unit, bind, flatten

I’m 3/4 of the way through my Gold Card day, and I think I can finally tell a monad from a modron, and even from a maenad. Whether this will translate into a presentation anyone else can understand is still an open question.

Meanwhile, I’ve discovered:

  • A whole folder of monad examples in the Scala distribution.
  • That trying to do higher-order functional programming in straight Java does, in fact, really blow. An example that takes 74 lines in Scala takes… well, I don’t really know how many lines it takes in straight Java, because nine classes and 195 lines in, I’m only up to line 37 in the original (single) Scala file. We could extrapolate and guess 390 lines, but I wouldn’t bet on it. Maybe those BGGA closures aren’t such a bad idea after all… (There’s a small taste of what I mean at the end of this post.)
  • James Iry’s “Monads are Elephants,” an introduction to Scala monads that’s got more words and less math than Burak Emir‘s. (Nothing against math — or Dr. Emir — but we language majors like our human-readable variable and function names.)
  • “Scala for Java Refugees,” a nice tutorial from Daniel Spiewak.

And, last but not least:

  • That “Introductions to monads are bit of cottage industry on the Internet.” (James Iry again.)

So much for my plan to get rich writing Monads for Dummies.
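
That small taste, promised above: a toy example of my own (not one of the actual 195 lines) that does in Java what Scala’s one-line flatMap does, i.e. bind, i.e. map followed by flatten.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// In Scala: val lengths = words.flatMap(w => List(w.length))
class FlatMapTheHardWay {

  // The function-object interface we have to fake closures with.
  interface F<A, B> {
    B apply(A a);
  }

  // flatMap = map each element to a list, then flatten the lists together.
  static <A, B> List<B> flatMap(List<A> list, F<A, List<B>> f) {
    List<B> result = new ArrayList<B>();
    for (A a : list) {
      result.addAll(f.apply(a));
    }
    return result;
  }

  public static void main(String[] args) {
    List<String> words = Arrays.asList("unit", "bind", "flatten");
    List<Integer> lengths = flatMap(words, new F<String, List<Integer>>() {
      public List<Integer> apply(String w) {
        return Collections.singletonList(w.length());
      }
    });
    System.out.println(lengths); // [4, 4, 7]
  }
}

One line of Scala; one interface, one static method, and one anonymous class of Java. Now multiply by 74 lines.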

Java vs. C#: More fun with initializers

Or, proof by example that C# isn’t just Java with different capitalization conventions.

On the heels of Neal Gafter’s ice cream puzzler we have the less closure-rific but still interesting “Why do initializers run in the opposite order as constructors?” from Eric Lippert. Here’s my Java version of Eric’s C# code:

class Foo {
	public Foo(String s) {
		System.out.printf("Foo constructor: %1$s\n", s);
	}
}

class Base {
	private final Foo baseFoo = new Foo("Base initializer");

	public Base() {
		System.out.println("Base constructor");
	}
}

class Derived extends Base {
	private final Foo derivedFoo = new Foo("Derived initializer");

	public Derived() {
		System.out.println("Derived constructor");
	}
}

class Program {
	public static void main(String[] args) {
		new Derived();
	}
}

I’ll spare you the suspense and just print the answer — the Java answer, that is.

Foo constructor: Base initializer
Base constructor
Foo constructor: Derived initializer
Derived constructor

Whereas the equivalent C# code (Eric’s original) prints:

Foo constructor: Derived initializer
Foo constructor: Base initializer
Base constructor
Derived constructor

What’s going on?

The Java code does this because when a Java object’s initialized, the JVM works its way down through its superclasses, starting at the root (that is, Object), and for each class first runs the initializers, then the constructor. This can have some wacky side-effects. For instance, sooner or later everyone gets the bright idea to define an abstract method in the base class and call that method from the base class constructor — which works fine until some subclass’s concrete implementation depends on a field that’s only initialized in that subclass, and suddenly you’re getting NullPointerExceptions in impossible-looking places.

This happens whether the field’s initialized in the subclass constructor or in a subclass initializer, and while it’s fairly obvious what’s going on in the constructor case, it’s a little more confusing the first time you have, say,

private final long timestamp = new Date().getTime();

come out 0 (or null, if you use a capital-L Long) when you know your clock’s not set to January 1st 1970 — and then later on come out 1203338390828 or whatever, even though final fields are supposed to be immutable.
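
Here’s a minimal sketch of the trap, with the names mine rather than Eric’s:

import java.util.Date;

abstract class Widget {
	public Widget() {
		// Calling an overridable method from a constructor: the subclass's
		// initializers haven't run yet when this executes.
		System.out.println("created at " + createdAt());
	}

	abstract long createdAt();
}

class TimestampedWidget extends Widget {
	// Runs only after the Widget constructor has finished, so during
	// construction createdAt() still sees the default value, 0.
	private final long timestamp = new Date().getTime();

	long createdAt() {
		return timestamp;
	}

	public static void main(String[] args) {
		TimestampedWidget w = new TimestampedWidget();
		System.out.println("later: " + w.createdAt());
	}
}

The first line prints “created at 0”; the second prints the real timestamp. Same final field, two different values, depending on when you ask.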

The constructor case, I think we’re stuck with. The initializer case, though, the folks at Microsoft apparently decided they were sick of. So C# instead runs all the initializers in reverse order (subclass to superclass), and then runs all the constructors (in the order you’d expect). This means final fields in C# really are final, or rather, readonly fields really are read-only — they’ll only ever have one value, no matter when you look at them. [Looks like I didn’t have that quite right — see Eric’s comment below.]

Now I wonder what happens in ActionScript? The Adobe folks claim my const bug in FlexBuilder is fixed; I’ll have to download the latest build and see.


Color-flavor locking breaks chiral symmetry

(Attention conservation notice: Post not actually about quantum chromodynamics.)

If you listen to the Q&A for Josh Bloch’s Closures Controversy talk at JavaPolis, you’ll hear me ask a silly question around minute six — namely, whether Josh should see the BGGA closures proposal as a benefit rather than a hazard, on account of the material they’ll provide for the Java Puzzlers he and Neal Gafter are so fond of. Well, it looks like Neal’s getting a head start with “Closures Puzzler: Neapolitan Ice Cream.”

This looks like a good opportunity to stretch my brain, and also try out the fancy syntax highlighter that Rahel discovered. Let’s see how it goes.

Here’s the puzzler. What does it print?

enum Color {
  BROWN(Flavor.CHOCOLATE),
  RED(Flavor.STRAWBERRY),
  WHITE(Flavor.VANILLA);

  final Flavor flavor;

  Color(Flavor flavor) {
    this.flavor = flavor;
  }
}

enum Flavor {
  CHOCOLATE(Color.BROWN),
  STRAWBERRY(Color.RED),
  VANILLA(Color.WHITE);

  final Color color;

  Flavor(Color color) {
    this.color = color;
  }
}

class Neapolitan {

  static <T, U> List<U> map(List<T> list, {T=>U} transform) {
    List<U> result = new ArrayList<U>(list.size());
    for (T t : list) {
      result.add(transform.invoke(t));
    }
    return result;
  }

  public static void main(String[] args) {
    List<Color> colors = map(Arrays.asList(Flavor.values()), {
      Flavor f => f.color
    });
    System.out.println(colors.equals(Arrays.asList(Color.values())));

    List<Flavor> flavors = map(Arrays.asList(Color.values()), {
      Color c => c.flavor
    });
    System.out.println(flavors.equals(Arrays.asList(Flavor.values())));
  }
}

Now, right away we can see there’s something suspicious going on here with the order of initialization — Color requires Flavor and Flavor requires Color; who wins? This is the sort of question I probably should be able to answer, but in practice can never remember until I see a symptomatic bug. So let’s find a symptomatic bug:

for (Color c: Color.values()) {
  System.out.println(c + ": " + c.flavor);
}

for (Flavor f: Flavor.values()) {
  System.out.println(f + ": " + f.color);
}

Result:

BROWN: CHOCOLATE
RED: STRAWBERRY
WHITE: VANILLA
CHOCOLATE: null
STRAWBERRY: null
VANILLA: null

Say what now?

But if you think about it, it makes sense. Color is declared first, but initializing Color requires Flavor, so Flavor’s constants actually get constructed first, right in the middle of Color’s own initialization. When Flavor gets to, say, CHOCOLATE(Color.BROWN), Color hasn’t been (can’t have been) fully initialized yet, so Color.BROWN is null.

At this point I’m tempted to just say my answer is:

false
true

…but let’s make sure there isn’t more to it than that. So far I’ve been too lazy to download and install the BGGA closures prototype, so let’s see how hard it is to fake this in straight Java:

class Neapolitan {

  static interface Transform<T, U> {
    U invoke(T t);
  }

  static <T, U> List<U> map(List<T> list, Transform<T, U> transform) {
     List<U> result = new ArrayList<U>(list.size());
     for (T t : list) {
       result.add(transform.invoke(t));
     }
     return result;
  }

  public static void main(String[] args) {
    List<Color> colors = map(Arrays.asList(Flavor.values()),
      new Transform<Flavor, Color>() {
        public Color invoke(Flavor t) {
          return t.color;
        }
      });
    System.out.println(colors.equals(Arrays.asList(Color.values())));

    List<Flavor> flavors = map(Arrays.asList(Color.values()),
      new Transform<Color, Flavor>() {
        public Flavor invoke(Color t) {
          return t.flavor;
        }
      });
    System.out.println(flavors.equals(Arrays.asList(Flavor.values())));
  }
}

Well, that didn’t seem so hard — what do we get?

true
false

Say really what now?

Let’s put a print statement in that map() method and see if we can figure out what’s going on.

  static <T, U> List<U> map(List<T> list, Transform<T, U> transform) {
     List<U> result = new ArrayList<U>(list.size());
     for (T t : list) {
       U u = transform.invoke(t);
       System.out.println(t + " => " + u);
       result.add(u);
     }
     return result;
  }
CHOCOLATE => BROWN
STRAWBERRY => RED
VANILLA => WHITE
true
BROWN => null
RED => null
WHITE => null
false

Holy late binding, Batman! The order of initialization’s reversed!

And I think I know why. To demonstrate, let’s take those original print loops and reverse them:

for (Flavor f: Flavor.values()) {
  System.out.println(f + ": " + f.color);
}

for (Color c: Color.values()) {
  System.out.println(c + ": " + c.flavor);
}
CHOCOLATE: BROWN
STRAWBERRY: RED
VANILLA: WHITE
BROWN: null
RED: null
WHITE: null

And behold! we get the same effect.

Why? Because classes are loaded at runtime. I said up there that Color required Flavor, and I was right. But whether Flavor or Color is loaded first has nothing to do with declaration order, and everything to do with which one happens to get called first in the code. Call Color first (as in the first set of print loops), and Flavor gets incompletely initialized; call Flavor first (as in the second set of print loops, and in the puzzler itself), and it’s Color that suffers.

(Note that the List<Color> reference doesn’t count — generics are erased at run-time, so at run-time when the JVM gets to the declaration of colors, it’s just a plain old untyped List. The Flavor.values() call is the first time one of these classes actually gets loaded.)

The moral of the story? If you ask me, it’s — as with so many of these — don’t code like this. 🙂 But also: just because that enum variable looks vaguely constant-ish doesn’t mean the compiler’s going to inline it for you, and if the compiler’s not going to inline it for you, you have to think about how the code behaves at run-time.
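
To make that concrete, here’s a quick contrast of my own (not part of the puzzler): a genuine compile-time constant gets inlined, an enum-typed field doesn’t.

class Constants {
  static final int ANSWER = 42;            // compile-time constant: inlined by javac
  static final Color FAVORITE = Color.RED; // enum reference: a real field access at run-time

  static {
    System.out.println("Constants initialized");
  }
}

class InliningDemo {
  public static void main(String[] args) {
    // The 42 was copied into this class at compile time, so reading it
    // doesn't touch Constants at run-time: nothing extra prints here.
    System.out.println(Constants.ANSWER);
    // This is a real field read, so Constants (and Color) get initialized
    // first, and "Constants initialized" shows up before RED does.
    System.out.println(Constants.FAVORITE);
  }
}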

Mostly, though, just don’t code like this.

(Now, I just hope, in the name of the Knights of the Lambda Calculus, that bringing in BGGA closures doesn’t change the answer…)

(Update: Neal’s posted some solutions, encapsulating flavor and color and deferring the resolution till run-time; worth a look.)


New directions in parallelism

(Attention conservation notice: The two most important things I link to here can also be found in Neal Gafter’s recent “Is the Java Language Dying?” post, which you might well find more interesting anyway.)

Parallelization and multithreading were topics that kept coming up at JavaPolis — largely for the pragmatic reason that, as James Gosling pointed out, while Moore’s Law is still going strong, chipmakers seem to be running out of ways to turn more transistors into higher clock rates, and instead are using those transistors to pack on more cores. Most any desktop or laptop you buy this year is going to be dual-core, and servers are looking at four to eight. Of course, you can get pretty good value out of two cores just by websurfing in the foreground and compiling in the background, but a quick back of the envelope, double-every-eighteen-months calculation gets you hundred-core consumer machines by 2017. What then?
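
(For the record, the back of that envelope: two cores this year, doubling every eighteen months, is 2 × 2^6 = 128 cores nine years from now, which rounds to “hundred-core” as well as anything does.)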

A hundred cores might sound silly, but I expect the idea of a PS3 with eight specialized cores running at 3.2 GHz would have sounded silly when the first 33-MHz PlayStation shipped, too. Say it’s only fifty, though. Say it’s twenty. Say it’s ten. Most developers wouldn’t have a clue how to use ten cores efficiently, let alone a hundred. As a Swing developer, I generally feel pretty pleased with myself if I can keep track of more than just “the event dispatch thread” and “everything else.” We’re all going to have to get a lot better at multithreading if we want to make good use of next year’s hardware.

Gosling was pretty pessimistic. He called massive concurrency

The scariest problem out there… Thirty years of people doing PhD theses on parallel programming, and mostly what they’ve done is hit their heads against brick walls.

Well, there’s brick walls and brick walls. Luckily (I know I was complaining about not having a PhD from Carnegie Mellon earlier, but for purposes of this discussion, we’ll say “luckily”) I’m not doing a PhD thesis, so I can hope for other, harder heads to knock these particular walls down for me. And, as it happens — now we get to the point of this post, if you made it this far — harder heads than mine are working on it:

Doug “concurrency” Lea’s Fork/join framework:
A lightweight system for parallelizing tasks that can easily be divided into smaller subtasks. (PDF here, javadoc here. The code’s available as part of Lea’s util.concurrent package, though you’ll probably want to grab just the source for those classes since a lot of the rest of the package — Closures and whatnot — is now superseded by standard libraries.) This came up a lot in JavaPolis, mostly in the context of the closures argument, what sort of language features would make this kind of framework easier to use, whether we should all just be programming in Scala, and so on. It should be standard in Java 7. (There’s a small sketch of the divide-and-conquer idea just after this list.)
Haller and Odersky’s Actors model (PDF):
A single abstraction that “unifies threads and events”. (Apparently this was popularized by Erlang.) If I get this right, actors are worker objects that encapsulate a thread, and can receive and react to arbitrary messages either synchronously (event-style) or asynchronously (thread-style). I haven’t dug too far into this paper. Haller’s got an actors tutorial that may be more accessible. Of course, it’s all in Scala, and the fact that it’s not that easy (perhaps not even possible?) to do in straight Java is one of the motivations behind BGGA closures. There’s also a long Wikipedia article on the general concept.
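
And here’s the small fork/join sketch promised above: the classic split-the-array-in-half sum, written against the ForkJoinPool/RecursiveTask API that grew out of Lea’s framework and eventually landed in java.util.concurrent. The class name and threshold are mine.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sum an array by splitting it in half until the pieces are small enough
// to add up sequentially.
class SumTask extends RecursiveTask<Long> {
  private static final int THRESHOLD = 1000;
  private final long[] numbers;
  private final int from, to;

  SumTask(long[] numbers, int from, int to) {
    this.numbers = numbers;
    this.from = from;
    this.to = to;
  }

  protected Long compute() {
    if (to - from <= THRESHOLD) {
      long sum = 0;
      for (int i = from; i < to; i++) {
        sum += numbers[i];
      }
      return sum;
    }
    int mid = (from + to) / 2;
    SumTask left = new SumTask(numbers, from, mid);
    SumTask right = new SumTask(numbers, mid, to);
    left.fork();                     // hand the left half to the pool
    long rightSum = right.compute(); // do the right half on this thread
    return rightSum + left.join();   // wait for the left half and combine
  }

  public static void main(String[] args) {
    long[] numbers = new long[1000000];
    for (int i = 0; i < numbers.length; i++) {
      numbers[i] = i;
    }
    System.out.println(new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length)));
  }
}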

I keep stretching for some sort of pun involving threads, Lobachevsky’s parallel postulate, and Tom Lehrer’s well-known assertion that Lobachevsky was the greatest mathematician “who ever got chalk on his coat.” (MP3 here.) Lucky for you I’m not reaching it.

Link roundup

Sad, I know. I had dreams of getting some serious blogging done while I was on vacation, but they all vanished in a haze of roadside attractions and chile rellenos. Instead, you get these tidbits:

  • Steve Yegge says “the worst thing that can happen to a code base is size.” I don’t know why he had so much trouble with Eclipse — I’m looking at… find, find, wc, awk… just under 7000 classes and a million lines of code and it seems to work fine for me — but he’s got a point. At my last job, with about half that much code, we were able, just barely, to take our organically grown mess and decompose it into sensible layers and functional units, when a new choice of tools (WebLogic Workshop) forced us to, and it took half a dozen developers a month. If we tried it here, I wouldn’t be too surprised to see it take four.
  • In the wake of the Javapolis closure discussions (notes forthcoming — no, really), Bruce Eckel thinks that “Once you bind yourself to backwards compatibility with anything, you must be prepared for your language to be corrupted as you add features. If Java is unwilling to break backwards compatibility, then it is inevitable that it will acquire both unproductive complexity and incomplete implementation of new features.” True dat. But up against it, well — did I mention a million lines of code? It seems like there’s an unsolved problem somewhere in there, about the evolution of large systems. (And no, Ben, switching to a job where everything is a small one-off consulting project isn’t always the answer. 🙂 )
  • Martin Fowler finds that clients (as in, customers) can’t be trusted to keep the test bar green. Maybe ThoughtWorks ought to write it into their contracts that they’ll charge for the time it takes to get the bar green again before making any enhancements.
  • Tim Bray has some thoughts on providing wireless at conferences. Javapolis was my first experience with a fully (if rather flakily, especially when two thousand people would all simultaneously leave a session and start checking email) wifi-enabled conference, and overall I agree with Bray that it’s a big plus. This isn’t school; your audience doesn’t have to be here, and if you can’t keep them interested, you shouldn’t be speaking. Mostly I wish I’d screwed around more during the conference, not less — maybe I’d have discovered, in time to ask James Ward about it, that FlexBuilder (contra the Flex docs) doesn’t let you assign to a const field in the constructor.

When I have time and I’m sufficiently awake, I’ll post something interesting, like Ben’s experiment with closures in JavaScript.

13949712720901ForOSX (updated)

(Crossposted to Chrononautic Log.)

I’m hoping I’ll have time to post the handful of pictures I took in Antwerp before I head back to Pacific Standard Time on Sunday, and maybe give a fuller report on the trip. But one thing my voyage to the City of Eternal Night did was remind me that once upon a time (in the age of innocence, you know, before Condé Nast bought Wired magazine) I thought technology was fun. It also made me think maybe I wasn’t, back then, just being young and Slashdot.

I came back to programming after grad school with the idea that it was an interlude, that I’d work for a year or two, pay off some bills, and then do something classy, like move to Berkeley or Ann Arbor and get myself a history PhD. Seven years, one popped tech bubble, one layoff, three jobs and a lot of credit cards later, I realize I’ve still been thinking of my day job that way, as something temporary, despite all evidence to the contrary.

The way I see it, I’ve got two realistic choices.

  1. There are six million professional Java programmers working today. All six million of those programmers are busily pumping new code into everything from the back ends of hedge funds to the front end of the Large Hadron Collider, and if the past is any guide, a lot of that code will be around, and needing maintenance, for a long, long time. I’d say right now, conservatively, I’m probably in the top half million. I can probably keep shuffling from day job to day job, without giving any serious thought to career security, for another fifteen or twenty years — at which point I’ll suddenly become totally unemployable, but we can burn that bridge when we get to it (and anyway, with luck I’ll be in management by then).
  2. From the edge of the crowd, it’s easy to lose sight of the fact that somewhere in the middle things are actually happening, that interesting people are doing interesting work, that writing software can be interesting in itself and not just necessary drudgery on the way to having software that (one always hopes, though in practice it’s often far, far from the case) does something interesting. The question is how to get to that place and become one of those people — at least, to find out where I really fall among that six million. (Apart from the obvious way, that is, which would be to start life over fifteen years ago and get a PhD in computer science from Carnegie Mellon.)

At minimum, option 2 probably involves — at some point — moving back to Silicon Valley. (Compare: film::LA, publishing::NYC, politics::DC, oil::Houston. Yes, you can do good work elsewhere, but you’re just not going to find the same support network or the same concentration of the top people in the field.) That would feel a little weird, but it wouldn’t necessarily be a disaster. Anywhere with a wide selection of good Mexican, Chinese, Japanese, Vietnamese, Ethiopian, Afghani and Indian food can’t be all bad.

But that decision’s at least a year or two off. In the mean time, I can at least start paying some attention to tech issues again. At some point maybe I’ll start a second blog [Update: Ye’re lookin’ at it], so as not to bore all of you folks with natter about closures and superpackages and run-time type inference. Till then, well, you’ll just have to put up with the occasional incomprehensible post title — complete with incomprehensible context:

Dear Apple,

How am I supposed to make any progress in my career when you still haven’t ported Java 6 to the Mac?

Sincerely,

A Loyal Customer