FOSDEM, Brussels and New Blog Software

Well, I’m baaaaack….. after a nearly year-long hiatus.  I finally got some functioning new blogging software, so this post is more of a quick test and trip report than something substantive.

I went to FOSDEM this year, in Brussels.  Unlike my recent Delta travel to Denmark (for the newly-renamed GOTO conference), my ride on KLM was smooth and easy.  On that Denmark fiasco it took Delta an extra 48 hours to get me home (beyond the expected 20+ just for flying), and Delta comp’d me $300 in funny-money.  Since KLM accepts Delta funny-money, the trip to Brussels was also cheap.  I took a direct flight from SFO to Amsterdam, then a CityHopper from there to Brussels, then the train downtown, then the tram to my B&B – the impressively named The White House.

The web photos looked nice and declared The White House a B&B run out of a turn-of-the-century “mansion” – but the reality was far from it!  The whole “mansion” was about 15 feet wide by 20 feet deep and shared walls with its neighbors (this appears to be the common style in Brussels).  My bathroom was down the hall and shared; the “continental breakfast” was mostly a bag of croissants delivered on the first day (of a week-long stay).

Brussels is just a classic run-down European city.  It’s not particularly clean, well-marked, or tourist-friendly.  The cobblestone sidewalks are badly in need of repaving; there is rust and blowing trash everywhere.  Things look old and in need of repair.  To complete the depressing mood, it rained the whole time I was there – a dreary misty drizzle – and the temperature held around 40 degrees.

The conference was held at the University of Brussels – which looks like a collection of ’50s-era Soviet buildings: bland, low ceilings, cramped, rusty, in need of paint and better lighting.  The rooms were far too small – the Java session was in a room that held about 75 fixed-placement wooden chairs with embedded folding desktops – like you might see in an old public school in the bad part of town – and about as comfortable as sitting on a plank.  We routinely turned away 30 to 50 people who couldn’t fit in the room.

FOSDEM itself is not really a Java conference; it’s a “Free Open Source” developer conference, mostly centered around open technologies such as Linux, JBoss, MySQL, NoSQL, Apache, PHP, the LAMP stack, etc.  Java definitely plays a role, but it’s secondary at this conference.  I gave a talk on Azul’s Open Source MRI – slides here – which went pretty darned well.  We also had a talk from Mark Reinhold of Oracle about the future of OpenJDK.  Oracle appears to be committed to supporting and improving Java.  Other speakers were definitely upbeat about Java’s future – it remains the most popular language out there, and continues to see a growing programmer population (other languages are also growing, so Java’s relative position is remaining basically static).  There’s also a lot of growth in languages based on the JVM.

The after-conference beer event was, ahh… interesting.  Europeans like a crowd, I guess; the place was packed to insane levels, speech was nearly impossible, and we took turns attempting to reach the bar to bring back beers for the table – getting some took about 15 minutes of dedicated shoulder-shoving.  The beer was incredibly good.  I had too many because I had to keep trying new varieties.  The cherry beer was by far the best; I’ve no idea how to get it in the States.

After FOSDEM I had a few days to play tourist.  I made it out to Luxembourg.  What a difference!  The town is clean and well-marked – a veritable tourist’s delight.   The whole town was once a vast medieval fortress city, with soaring stone walls hundreds of feet high, rivers running through the central canyons, huge old stone bridges, dozens of medieval forts, and miles and miles of tunnels.  I took a long walking tour, got dozens of great pictures, saw the castles, and marveled at the cathedrals and statues.  Even the sun was shining in Luxembourg.


15 thoughts on “FOSDEM, Brussels and New Blog Software”

  1. Hello, glad to see you back and nice to get some info on European cities :D. I started to wonder if you had quit your job or something.

    Actually I was wondering if you were going to make a trip to Europe this year, so I might attend some of your talks… guess the opportunity has been missed. Either way, try some good German beer next time; the last Oktoberfest featured amazing blends.

    I did try to post in the previous blog but the system rejected the answer.
    Here it is (hopefully it will pass through)
    Hi Cliff!

    The post is regarding the IWannaBit! paper, which I found quite fascinating. I reread it a few days back and was puzzled by a scalability issue: if the bit is per the entire L1 cache, it’d become a bottleneck as the cache grows and cores are added. Imagining a cache as big as the entire RAM, the scheme would just stop working.

    There is a proposal in the paper for a bit per cache line, which seems a much more robust solution. The cache-line bit is set according to the status (CPU) flag, i.e. a cache hit or memory load clears the bit if the CPU flag is not set (and sets it otherwise). The CPU flag is set only if eviction/modification occurs on a cache line with an already-cleared flag.

    I guess the proposal has a long way to go to the hardware vendors, but it looks to me like an outstanding tool to make concurrent programming a lot easier.

    Can you clear the matter up, please?

    also could not leave a message on the NBHM, so I dropped it on the

    • L1 caches are typically NOT shared; there’s a whole L1 cache per CPU, hence they scale linearly with the number of CPUs.

      • I guess I was quite cryptic. I do understand the L1 is CPU-private (or shared per 2 CPUs [or per die?]).

        Speculating that with the size of L1 being relatively large: would it not happen that highly contended cache lines – found on many CPUs simultaneously, yet of no particular interest to the CPU in question – keep flagging the entire cache as “dirty”?

        My understanding is that memory fences virtually evict the rest of the (L1) caches for the particular cache lines being flushed. Thus, depending on the number of shared-but-irrelevant cache lines during a load/process/conditional-store loop (with regard to that bit), the iterations may be executed many more times than needed.

        Thanks again!

        • Actually, L1’s take relatively little die area; to stay fast they need to stay small.

          Highly contended cache lines are by definition “of particular interest” to the CPU in question.

          Fences do not (usually) evict lines; they write back the dirty stuff. If the CPU is not reading or writing the line, and another one is writing it, then the other CPU will trigger an evict – but the line won’t come back (since this CPU isn’t touching it), and thus the line isn’t contended.

          The 1-bit game is there to help places where there is a need for multi-line atomicity at low hardware cost. If you have lots of contention you’re screwed anyway – you might as well use plain old locking.
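          For what it’s worth, the closest software analogue of that load/process/conditional-store loop is a sequence lock: readers snapshot a version counter, read the multi-word state, and retry if the counter moved underneath them. A minimal sketch of that retry pattern (this is the standard software technique, not the hardware bit from the paper; class and field names are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Seqlock-style box: the version counter plays the role of the "dirty bit".
class PairBox {
    private final AtomicInteger version = new AtomicInteger(0);
    private volatile long a, b;   // multi-word state we want to read atomically

    void write(long x, long y) {
        version.incrementAndGet();  // odd: write in progress
        a = x; b = y;
        version.incrementAndGet();  // even again: write complete
    }

    long[] read() {
        while (true) {
            int v = version.get();
            if ((v & 1) != 0) continue;      // writer active, retry
            long x = a, y = b;
            if (version.get() == v)          // nothing changed underneath us
                return new long[]{x, y};
        }
    }
}
```

          As in the hardware scheme, an uncontended read costs almost nothing, and a contended one just loops – which is exactly why heavy contention pushes you back toward plain locking.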


  2. Nice to see your blog alive again.

    Maybe you should use a Symbolics Ivory-style read barrier. If it was patented, I doubt anyone who still owns the patent knows what it is. It was hardware on the Ivory but could just as well be software, since you don’t mind code bloat. The basic realization is that you don’t need fine granularity of oldspace versus newspace, which would force the which-is-which table to have thousands of entries and thus reside in memory, forcing a load instruction into the read barrier. You could divide the portion of virtual memory used for heaps into just 64 slots, where each slot contains either all oldspace or all newspace, and use one 64-bit register to hold the table. The rest is obvious.
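    The 64-slot scheme described above boils down to a shift, a mask, and a bit test against one 64-bit word. A rough sketch of the barrier test (the layout constants and names are illustrative, not from the Ivory; in a real implementation the mask would be pinned in a register rather than a field):

```java
// Sketch of a 64-slot oldspace/newspace read-barrier test: the heap's
// virtual address range is carved into 64 equal slots, and one 64-bit
// word records which slots are oldspace.
public class ReadBarrier {
    // Assumed layout: a heap base and a power-of-two slot size.
    static final long HEAP_BASE  = 0x1000_0000L;
    static final int  SLOT_SHIFT = 24;        // 16MB slots, 64 of them
    static long oldspaceMask;                 // bit i set => slot i is oldspace

    // The whole barrier: shift, mask, one bit test -- no memory load for
    // the table itself, since it fits in a single machine word.
    static boolean inOldspace(long addr) {
        int slot = (int) ((addr - HEAP_BASE) >>> SLOT_SHIFT) & 63;
        return ((oldspaceMask >>> slot) & 1L) != 0;
    }
}
```

    After a flip, the collector just rewrites `oldspaceMask`; no per-object table walk is needed on the read path.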

  3. Hi Cliff,

    Good to meet you at FOSDEM.

    To answer your cherry beer question – it’s called kriek and is a type of lambic (which is a different family of beer from either ale or lager). A number of different Belgian producers offer a kriek. Liefmans, Kasteel and Lindemans are the easiest to find in the UK – not sure about the US though.

    Do drop me a mail if you’re going to be in London – it’d be good to catch up for another beer.


    • There is definitely some Belgian lambic available here in the US. I’ve had it, and it rocks!

      Also, you should now be getting notification by email when comments are waiting for moderation–like this one…

  4. Hi Cliff,
    sad to hear about your odyssey in Old Europe. But since you liked the same beer I do, I’ll give you a hotel hint for your next trip (we stayed there for the last two FOSDEMs and it was OK: bathroom in the room, even a real breakfast on some days, within (European) walking distance of the city center and the ULB, and all that for only 66 Euro per night for the double room): Hotel Izan Avenue

    Hope to see you next year again,

  5. Hi Cliff – wondering if you have any thoughts about doing JITing on background threads the OS schedules on a CPU, vs. microcode translation, Transmeta (Crusoe) style. Perhaps design issues like this came up during the Vega design?

    Regards – banks

    • JIT’ing has been done on background threads for quite some time. Microcode translation assumes you have access to something lower-level than the typical instructions, although I suspect HotSpot could JIT to microcode easily enough.

      Also, C2 (-server) compilations are always done in the background, while C1 (-client) compilations are typically done in the foreground unless they take longer than some small threshold (20ms?).


  6. >> although I suspect HotSpot could JIT to microcode reasonably easily enough.

    I was thinking the other way around: route Java bytecode to the processor and have the different compilers implemented entirely in microcode. I mention this because in one of your presentations there was talk of native thread priority getting in the way of timely compilation.

  7. Pingback: Yes, but can I rely on that? | Windows Live space

Comments are closed.