Conference Season!

Ugh; I’ve been invited to too many conferences – including several new ones this year.  Here’s the quick rundown (so far! I’ve got a few more pending, including OOPSLA and SPLASH).

Transact 2010
April 13th – http://www-ali.cs.umass.edu/~moss//transact-2010/
On the PC only, so my responsibilities are over for this one!
No time for the trip to Paris.   🙁

Transactional Memory Workshop 2010
April 30th – http://www.cs.purdue.edu/tmw2010/Welcome.html
Slides – coming as soon as I can arrange them.

ISMM
June 5-6 – http://www.cs.purdue.edu/ISMM10/
Basically a really awesome GC conference.  On the PC only, but planning on attending.  Co-located with PLDI.

PLDI
June 6-10 – http://www.cs.stanford.edu/pldi10/
The premier conference on “Programming Language Design and Implementation”, i.e. how to make languages like Java work.  On the PC again, so I’ve seen the submissions – there are some really good papers in there.   🙂

Uber Conf
June 14-17 – http://uberconf.com/conference/denver/2010/06/home
An industry conference instead of an academic one.  I’m giving a slew of talks.

JavaOne 2010
September 19-23 – http://www.oracle.com/us/javaonedevelop/index.html
Ok, I’ve submitted talks but it’s too soon to know if I’m a speaker.  I am curious to see how Oracle handles JavaOne.  Could be good, could be great, could be … not so good.  One thing I don’t miss about the old JavaOne is paying $2000 for a plain ham-sandwich box lunch in the cafe.  Oracle could simply upgrade the food option (and keep all else the same).

JAOO 2010
October 3-8 – http://jaoo.dk/aarhus-2010/

An All-Expense-Paid trip to Denmark!  Exactly which talk I give is in flux, but likely I’ll finally be able to talk about Azul Systems’ newest product!


(You were perhaps looking for something technical in a Cliff Click blog?  Next time, I promise!  Right now I’m swamped working on next-gen product… random Star Wars quote: “stay on target….”)


Cliff


I’ve Been Slashdot’d

I’ve been Slashdot’d.  The slides in question are also here.




I gave the talk at the JVM Language Summit, which itself was a lot of fun.




The talk is a repeat of one of the talks I did at JavaOne.  I also gave two other talks there, but the Sun JavaOne website appears to be unable to deliver the video right now.  I also gave a short interview at JavaOne.




One of the talks I mentioned on the InfoQ video is also available here as a Google Tech Talk: Java on 1000 cores: Tales of Hardware/Software Co-Design.  I also mentioned a talk, Azul’s Experiences with Hardware Transactional Memory, and my blog on that is here.  Alas, I don’t believe the HTM talk has ever been recorded for public consumption.  If you are interested in HTM support, you should also check out this short gem.  The GC talk alluded to has slides all over the web; here’s the original paper, but I could not find a public video presentation.

Biased Locking

Recently I re-did HotSpot’s internal locking mechanism for Azul’s JVM. The old locking mechanism is approaching 15 years old and features a number of design decisions that are now outdated:

  1. Recursion counts are kept as a NULL word on the stack for every recursion depth (i.e., counting in base-1 math) in order to save a few instructions and a few bits of memory. Both are now in vast plentiful supply. On the 1st lock of an object, its header is moved into the stack word instead of a NULL, and this means that GC or other locking threads (or threads installing a hash code) all need to find and update the header word – which can now be “displaced”. This mechanism is complex, racy and error prone.
  2. The existing mechanism requires a strong memory fence after a Compare-And-Swap (CAS) op, but on most machines the CAS also includes a memory fence. I.e., HotSpot ends up fencing *twice* for each lock acquire, once to CAS the header and again moving the displaced header to the stack. Each memory fence costs about a cache-miss on most X86 CPUs.
  3. The existing mechanism uses “Thin Locks” to optimize for the very common case of a locked object never being contended. New in Java7, +UseBiasedLocking is on by default. This optimizes the common case even more by not using any fencing for locks which have never (yet) changed threads. (See this nice IBM paper on how to do it). The downside in the OpenJDK implementation is that when an object DOES have to change thread-ownership, the cost is so high that Sun has chosen to disable biased locking for whole classes of locks to avoid future thread-ownership-change costs.
  4. When a lock does see contention it “inflates” and then the “inflated” lock is much more expensive than a fast-path “thin lock”. So even the smallest bit of contention will cause a lock to be much more expensive than the good case.
  5. JVM internal locks and locked Java objects use 2 utterly different code bases. This adds a lot of complexity to an already complex system. The two classes of locks are used in slightly different ways and do have different requirements, BUT they both fundamentally implement a fast-path locking protocol over the OS-provided locking abstraction. At Azul Systems, we found that these two locking systems have a lot more in common than they have differences.


My new locking implementation has met a number of goals:

  1. No “displaced header”, ever
  2. Only one strong memory fence to lock contended locks instead of 2.
  3. The normal fast-path is as good as the +BiasedLocking case. Revoking a biased-lock is cheap enough to do it on a case-by-case basis.
  4. Inflation happens due to a one-time contention, but then low-contention or no-contention behavior quickly reverts back to a cost which is nearly as good as the normal fast-path. i.e., uncontended inflated locks pay only an extra cache-hitting indirection load.
  5. One code base for all locks


As with the OpenJDK, Azul’s new locks are implemented as some bits in every Object’s header word, plus an ObjectMonitor structure when the lock “inflates”. Internal VM locks use a plain Monitor which ObjectMonitor subclasses (actually most VM locks are a subclass of Monitor called Mutex which supports only plain lock actions; Monitors support wait & notify as well).


Within the 64-bit object header word (Azul uses only 1 header word instead of 2), we reserve 32 bits for locking & hashcodes. If all 32 bits are zero, the object is unlocked & un-hashed; new objects appear this way. If the low bit is set, the remaining 31 bits are a hashCode. If the low 2 bits are ’00’ the object is BiasedLocked to a thread, and the remaining 30 bits are the thread ID of the owning thread (we can compute the thread ID of a thread in 1 or 2 clock cycles). If the low 2 bits are ’10’ the remaining 30 bits are the address of an ObjectMonitor structure; e.g. the low 32 bits of the header contain a C++ pointer to an ObjectMonitor, plus 2.
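As a cheat-sheet, here’s that decoding in plain Java.  This is purely illustrative – the names are mine, not from Azul’s source – but the bit layout matches the description above:

  // Hypothetical decode of the low 32 lock/hash bits of an Azul object header.
  final class HeaderWord {
    static String decode(int bits) {
      if (bits == 0)       return "unlocked, un-hashed";              // new objects
      if ((bits & 1) == 1) return "hashCode = " + (bits >>> 1);       // 31-bit hash
      if ((bits & 3) == 0) return "biased to thread " + (bits >>> 2); // 30-bit thread ID
      // low 2 bits == '10': a C++ ObjectMonitor pointer, minus the tag
      return "inflated, ObjectMonitor @ " + (bits & ~3);
    }
  }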


The low 32-bit patterns of the object header are:

00000000 – The most common case is that an object is never locked and never hashed; its low 32 bits of header remain zero throughout its lifetime.


xxTIDx00 – The next most common case is that the object is locked but not hashed, and never changes owners. Many, many Strings fall into this category. The first lock acquire will CAS the owning thread into the header, and the object is now Biased-Locked. Future lock attempts will simply do a load/compare/branch to confirm that the object remains biased-locked to the owning thread. Literally, the code to do this is:

  // R01 - holds object 
  // R02 - holds self-thread-ID 
  ldu4 R03,[R01+4] // load low 32-bits
  bne R03,R02,slow-path-locking // thread-id bits do not match?
  // bits match, so we own the lock!

xxHASHx01 – The next case is that the object is hashed but not locked, and we are trying to get the hashCode:

  // R01 - holds object
  ldu4 R02,[R01+4]    // load low 32-bits
  beq  R02,0,not_hash // zero header so no hash
  shr4 R02,1    // shift the low bit into the carry flag
  bcc  not_hash // "branch carry clear", if low bit was zero it was not_hash
  // R02 - holds 31 bits of hash


xxMONx10 – The next case is that the object requires an ObjectMonitor, either because it is both locked AND hashed, or because it was biased-locked once and saw contention. The monitor is a 32-bit pointer (so limited to the low 4Gig), but can be directly used. This snippet of code assumes we already failed the fastest path lock-acquire given above (the snippet is actually a called subroutine so the code size does not really matter):

  // R01 - holds object 
  // R02 - holds self-thread-ID 
  // R03 - holds loaded header word which might be a monitor
  slow-path-locking:
  extract R04,R03,2 // test bit#1 for being set 
  bne R04,1,not_monitor // branch if we need to inflate the lock 
  // Here we have code to check for self-biased in the monitor: 
  ldu4 R04,[R03-2+owner_field_offset] // load ObjectMonitor.owner 
  beq R04,R02,lock_is_owned // thread-id bits match?
  // and here we carry on with slow-path locking
  // including testing for recursive locks and 
  // attempting CAS acquire, short spin-waiting, etc


And so on. If all the various fast-path attempts fail we eventually end up calling into the VM and blocking on the OS-provided mutex. Along the way we’ll attempt a bunch of other tests (e.g. recursion & spin-waiting) and if all things fail and we need to block, we’ll also do a fast/short crawl of the call stack for profiling. There are a lot of other interesting issues here, including heuristics on when to make a lock not-biased by default (producer/consumer design patterns make objects which *always* change hands at least once), or when to deflate an inflated lock that has “settled” into a new thread (or unlocked state), or how to achieve a modicum of fairness under extreme contention. Since this blog is already getting long (and with raw assembly code to boot!) I’ll stick with a short discussion of the bias-lock revoke mechanism.


Revoking a Biased Lock

Suppose thread T1 holds the lock on Object O via the very fast-path mechanism (O’s header word contains the bits “xxT1xx00”), and that thread T2 needs to acquire the lock on O. What kind of interactions are possible? For starters, assume T2 has failed the fast-path attempts (it’s got the wrong thread ID!) and already entered the slower-paths in the VM. A simple inspection shows T2 that O is biased-locked to T1. What does T1 know about this situation? Turns out that thread T1 knows almost nothing:

  • T1 may have acquired O’s lock in the misty past and may not now have access to a pointer to O.
  • T1 may be rapidly acquiring and releasing the lock on O, all without fencing or atomic operations. Moment by moment, cycle by cycle, the actual Java state for T1 may have it holding & releasing O’s lock. T1 isn’t really tracking this; he is just periodically observing that he holds the bias lock on O.
  • T1 may be blocked in long-latency I/O operations or on other locks.
  • T1 may be a dead, exited thread.
  • The id for thread T1 may have been recycled to a new thread, T1′, which has never seen a pointer to O cross its JVM state, ever. Yet the lock for O is now biased to T1′!


What T2 can do is to ask T1 to reach a safepoint – a point in the code which matches some Java bytecode – and then decide if T1 really holds the lock on O or not. T2 does this by setting a self-service task on T1; T1 periodically polls for such tasks at safepoints, and when it sees the request it breaks out of its normal execution and handles the task (the periodic polling typically costs 1 or 2 cycles every few thousand instructions). T2 then waits for T1 to “do something” with the lock (but not really: what if thread T1 has already exited?).
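In spirit, the poll is just a cheap load-and-branch on a per-thread task slot. Here’s a toy model in Java (my names, not Azul’s; the real poll is a couple of instructions emitted by the JITs):

  import java.util.concurrent.atomic.AtomicReference;

  // Toy model of the per-thread "self-service task" slot. A remote thread
  // posts a task; the owner polls at safepoints with a load and a branch.
  final class SelfServiceSlot {
    private final AtomicReference<Runnable> task = new AtomicReference<>();

    // Called by another thread, e.g. T2 asking T1 to revoke a biased lock.
    boolean post(Runnable r) { return task.compareAndSet(null, r); }

    // Polled by the owning thread at safepoints: nearly free when empty.
    void poll() {
      Runnable r = task.get();
      if (r != null && task.compareAndSet(r, null))
        r.run();  // break out of normal execution and handle the task
    }
  }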


Suppose T1 is still running and spots the poll request. It then does a self-stack-crawl to count the number of times it’s supposed to hold the lock. JIT’d code includes annotations to describe what locks are held (and how often, if recursive), and the interpreter counts lock acquires in any case (with slow simple base-1 counting, but we don’t care about speed in the interpreter). If this lock count is positive (T1 really holds the lock now!) T1 inflates the lock, slaps the recursion count into the new ObjectMonitor struct, and goes back to running… but now each unlock by T1 will lower the recursion count. When the lock is unlocked for the last time, T1 does the usual wakeup on T2 and T2 then competes to grab the now-free lock. If this lock count is zero (T1 holds the lock biased but not for-real) then it releases the lock and notifies T2 as normal. In any case, the bias on O is revoked and the lock reverts to a state where T2 is allowed to compete for ownership.
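Here’s a sketch of the decision T1 makes when handling the revoke request. All the names are hypothetical – countLocksHeld() stands in for the stack crawl over the JIT’s lock annotations (or the interpreter’s counts), and the other helpers are stubs:

  // Hypothetical sketch of T1 handling a bias-revoke request on Object o.
  final class BiasRevoke {
    static final class ObjectMonitor { Thread owner; int recursionCount; }

    void handleRevokeRequest(Object o, Thread requester) {
      int recursion = countLocksHeld(o);  // crawl my own stack, count acquires
      if (recursion > 0) {                // really held: inflate the lock
        ObjectMonitor m = inflate(o);
        m.owner = Thread.currentThread();
        m.recursionCount = recursion;
        // Each future unlock decrements the count; the last unlock does the
        // usual wakeup on the requester.
      } else {                            // biased but not actually held:
        releaseBias(o);                   // release and notify as normal
        wakeup(requester);
      }
      // Either way the bias on o is revoked and the requester may now
      // compete for ownership.
    }

    int countLocksHeld(Object o)    { return 0; }                   // stub for the stack crawl
    ObjectMonitor inflate(Object o) { return new ObjectMonitor(); } // stub
    void releaseBias(Object o)      { }                             // stub: clear bias bits
    void wakeup(Thread t)           { }                             // stub: notify waiter
  }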


This is all great if T1 is actively running and polling for self-service tasks, but what if T1 is blocked or exited? After T2 prods T1 with the poll request, T2 then attempts to do the very same self-service task on T1’s stack remotely. To do this T2 has to acquire the “lock” on T1’s stack. All Azul threads unlock their stack whenever they block (or otherwise stop polling) – this same stack-lock mechanism is used by GC threads. If T2 can get T1’s stack-lock, then T2 can safely crawl T1’s stack (and T1 is not allowed to execute random Java code until he gets his own stack-lock back). If T2 canNOT get T1’s stack-lock, it must be because T1 is busy executing Java code – and hence T1 is also polling for self-service tasks and will (shortly) get around to crawling his own stack. In any case either T1 or T2 will shortly get around to crawling T1’s stack and discovering the proper lock recursion count for Object O.


And if T1 has exited, T2 can also detect this from T1’s old thread-id (thread-ids map back to some type-stable per-thread data). In this case, T2 can just freely revoke the bias on O and bias O to himself.


Well, that’s enough for now! Hope you enjoyed this (very long overly complex) discussion of our biased-locking implementation.

Cliff


Touching Base

It’s been a while since I blogged, so I thought I’d touch base with people to let them know what’s been going on. Azul Systems has been hard at work improving our JVM. This is a bigger statement than it sounds – there are not many groups that have a large enough ‘quorum’ of JVM engineers to do large-scale changes to the HotSpot JVM. Azul has nearly a dozen engineers doing core HotSpot work (not counting JDK work or QA folks – counting only core JVM engineers)! We’ve been doing large-scale changes to HotSpot for nearly 8 years now. Our HotSpot has been improved over Sun’s standard HotSpot or the OpenJDK in a large number of ways, some more visible and some less so.


Some of the more obvious stuff we’ve got working:


  • A new complete replacement GC: Generational Pauseless GC (and the older PauselessGC paper is here). This is one of our core strengths. GPGC handles heaps from 60Megabytes to 600Gigabytes and allocation rates from 4Megabytes/sec to 40Gigabytes/sec, with MAX pause-times consistently down in the 10-20msec range. GPGC requires read barriers, and this means instrumenting every read from the garbage-collected heap (a toy sketch of what a read barrier does appears after this list). Instrumenting the JIT’d reads is easy: we altered the JITs long ago to emit the needed instructions. Instrumenting the VM itself is a bigger job; every time we integrate a new source drop from Sun we have to find all the new heap-reads Sun has inserted into their new C++ code (HotSpot itself is a large complex C++ program) and add read-barriers to them.
     
  • Real Time Performance Monitoring – RTPM. This is our high-resolution always-on no-overhead integrated profiling tool and is our 2nd major selling point. Because it’s no-overhead (literally less than 1%; it’s very hard to measure the overhead) we leave it always on. This means you can look at a JVM that’s been up in production for a week or a month and introspect it. It’s *common* for a 1hr session with RTPM to answer performance questions that have plagued production systems for years, or to have people walk away with 10-line fixes worth 30% speedups. It’s as if you’ve been blind to what your JVM has been doing and suddenly your eyes are opened. Live stack traces, heap contents, leaks, hot-locks with contending stack traces, profiled JIT’d assembly, I/O bottlenecks, GC issues, etc, etc. See the link for a demo.
     
  • Virtualized JVM – We can take pretty much any old server, install a new JDK, change JAVA_HOME to the new JDK and re-launch the application… and it now runs on Azul’s JVM backed by an Azul appliance. No hardware change and no OS change. This is a great solution for in-place speedups of older gear. More recently of course, we’ve been hard at work porting our JVM to our new hardware platform. This work is going well; look for more discussion here as we have things to announce!
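As promised above, a toy model of a read barrier. This is illustrative only: the real GPGC barrier is a couple of machine instructions emitted by the JITs, and the actual check/fixup conditions are not what’s shown here:

  // Toy model of a GC read barrier: every load of a reference from the heap
  // goes through a check, and "bad" references get repaired before use.
  final class ReadBarrier {
    static <T> T read(T ref) {
      if (needsFixup(ref))   // e.g. points into a page being relocated
        ref = fixup(ref);    // slow path: forward to the new location
      return ref;            // common case: just a test-and-branch
    }
    static boolean needsFixup(Object ref) { return false; } // stub condition
    static <T> T fixup(T ref)             { return ref; }   // stub fixup
  }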


Here’s some of the LESS obvious stuff we have working:


  • Tiered Compilation. Despite the fact that Sun has shipped “-client” and “-server” configurations for years, they never integrated these two JITs into a single system. Most other JVMs have had a tiered compilation configuration for years and Azul Systems did this to HotSpot a few years ago. We consistently see a roughly 15% speed improvement over a plain “-server” configuration. We use the “-client” JIT (also known internally as C1) to do fast high-resolution profiling; this high-quality profile information allows the “-server” JIT (C2) to do a much better job of inlining and compiling.
     
  • A complete replacement for the existing HotSpot CodeCache: the holder of all JIT’d code in the system. While *adding* code has always been easy, *removing* code has always been tricky (well, tricky to do it without blowing all code away at once and without requiring all calls to indirect through a ‘handle’). Most large server apps slowly churn new code, so if you leak code you eventually run out of memory. The new CodeCache uses GC to control code lifetimes and this results in a vastly simpler and less buggy structure all around. We also use GC to manage all the auxiliary data structures surrounding code, e.g. the list of “class dependencies” for a piece of JIT’d code is a standard heap object now. (A “class dependency” lists the set of classes & methods that a piece of JIT’d code assumes are NOT overridden; if a new class and/or method overrides one of these then some inlining decision made by the JIT is now illegal and the JIT’d code needs to be deoptimized, removed & recompiled). Besides being a common management point for all code, the CodeCache is pinned in the low 4-Gig. This means all hardware Program Counters can be limited to 32bits (in our otherwise 64-bit system) and this is a tidy cost savings (shorter instruction sequences for calls; less I-cache space consumed, etc).
     
  • Tons of internal JVM scaling work. We run on systems with 100’s of CPUs and so we’ve found (and fixed!) any number of internal JVM scaling limitations. GPGC can run with hundreds of worker CPUs if needed. The JITs compile in parallel with dozens of CPUs (50 is common during a large application startup). Many internal VM structures have been made lock-free or have had their lock hold-times reduced by 10x or more. Self-tuning auto-sizing JIT/compiler thread pool. Concurrent stub/native-wrapper generation. Concurrent code-dependency insertion (during compilation) and checking (during class loading). Self-tuning finalizer work queues. etc, etc, etc….
     
  • Cooperative Safepointing allows thousands of *running* threads (not just alive-but-blocked-on-IO) to come to a Safepoint in under a millisecond. Merely safepointing 100’s of threads is down in the microseconds. Note that a full-on Safepoint does not happen until the last thread checks-in but the stall time starts when the first thread stops for a Safepoint. The time-to-safepoint pause is measured from when the first running thread stops till when the last thread checks-in.
     
  • The ability to asynchronously stop & signal individual threads, to have them do various self-service tasks cheaper than a remote thread can do it. This includes, e.g. stack crawls for GC or profiling (a thread’s stack is hot in his own L1 cache and can be crawled vastly faster than by a remote thread), or to acknowledge GC phase shifts or to allow code to be deoptimized (jargon word for what happens to code that is no longer valid due to class loading). We can also efficiently do “ragged safepoints” – this is like a full Safepoint except we don’t need to simultaneously stop all threads. Instead we merely need to know when all threads have acknowledged e.g. a GC phase shift. The threads “check in” as they individually acknowledge the Safepoint and keep on running. When the last thread has checked in, the “ragged safepoint” (and GC phase shift) is complete.
     
  • No more “perm-gen” space to run out or require a separate tuning flag. No more old-gen or young-gen either. No GC-thread-count knobs, or space/ratio tuning knobs or GC age or SurvivorXXX flags. GPGC takes no flags (except max total resources allowed), and runs well. There Is Only One Heap Space, and GPGC Rules It All.
     
  • A new thread & stack layout that lets us use the stack-pointer also as a ThreadLocal storage pointer, the HotSpot “JavaThread*”, AND as a small dense integer thread-id (requires 1 or 2 integer ops to flip between these forms). This frees up a CPU register for general use, while still allowing 1-cycle access to performance critical thread-local structures.
     
  • A complete replacement for the existing HotSpot locking mechanisms. Our new locks are ‘biased’ (here’s the original paper idea) similar in theory to Sun’s +BiasedLocking but based on entirely new code. No more “displaced header” madness (this comment is probably only relevant to hard-core HotSpot engineers). Biased locks do not require ANY atomic operation or memory barrier during locking & unlocking, unless the lock needs to “change hands”. Since we can stop individual threads asynchronously, we have a fairly cheap way to hand biased locks off between threads. Once individual locks demonstrate that they need to “change hands”, we inflate that one lock (not the whole class of locks) and it becomes a “thin lock” as long as the contention is low enough, switching over to a “thick lock” only when there are threads waiting to acquire the lock.

    The issues here are fairly complex and subtle and deserve an entire ‘nother blog! That’s enough for this Blog. More later…

Java vs. C Performance… Again.

I just foolishly got caught in a You-Tube discussion on Java vs C performance.  Foolish because You-Tube comments are a lousy way to present anything and because it’s hard to keep the level of discourse scholarly.  And foolish especially for me because I’ve had this discussion so many times and it always comes out the same way… so here’s my attempt at distilling my arguments into something I can point people at the *next* time I get caught in this silly discussion.


Is Java faster than C/C++?  The short answer is: it depends.

Places where C/C++ beats Java for obvious reasons:

  • Very small footprint is required (these days that does not include most phones).  You can get JVMs that run well in a few hundred KB.  Sometimes that’s too much.
  • Very small startup time (as opposed to very low response time on a well-warmed-up JVM).  Things like pacemakers (if mine takes a cosmic-ray-induced reboot, I want it restarted pretty darned quick!), or perhaps military gear (e.g. guided missiles).  Note that this does not include e.g. long running hard-real-time airplane control; I know that at least one UAV uses Java as its primary control mechanism.  Startup of a JVM in microseconds is a very hard problem; startup in milliseconds might be vaguely possible; but a more common time-frame for a small program to get JIT’d is measured in a few seconds.  Once you are past the profiling & JIT’ing stage, micro-second response times are entirely doable.  Flash games beat Java games mostly because it took 30+sec to load the JVM from disk… and so now the web-game developer community has settled on Flash as the standard (and it still takes 10+sec to load the JVM).
    [BJ81: I DO care if my IDE takes 10 seconds to start as opposed to 2. I DO care if my word processor or computer game of choice takes 10 seconds to start as opposed to 2. Startup speed is an important component of the user experience for all end-user software.] 
    [Lots of other people complain about loading time]
    My IDE stays up for days…and all my computer games take more than a minute to load already.   But yes, I like faster loading.  This mostly depends on things like disk speed… and the implementation of Java as a large (on disk) JVM and not a lot on things like the actual language or JIT’ing. 
    [SS: you can try JetBrains IDEA. From my experience it’s faster than Eclipse, less footprint, no lockups on GC. It’s Swing 🙂 The only problem it’s not free. The real problem with perceived Java performance is that none seriously optimized client Java performance before the most recent time.]
    Also my experience with JetBrains.  It’s amazingly fast. 
  • Optimizations beyond “gcc -O2”, such as tiling & blocking for registers or cache.  These optimizations pay off handsomely for dense linear algebra codes but no production JVM that I know of does them (some research JVMs back onto common heavy-weight optimizers which include these optimizations).  Auto-parallelizing compilers also fall into this realm.
    [DB: One article that may be of interest to you regarding Java’s transcendental performance: http://blogs.sun.com/jag/entry/transcendental_meditation The basic message: programs that extensivelly use transcendentals (sin/cos) will experience notably slower performance in Java due to the “correct vs. fast” implementation.]
  • Value Types, such as a ‘Complex’ type, require a full object in Java.  This has both code speed and memory overheads.  Note that there are theoretical optimizations for both (Object Inlining) but implementations available in production JVMs are very weak right now.  See my comment below about rotating arrays-of-small-structs 90 degrees to a small-struct-of-arrays as a workaround.
  • Direct machine access, as is common in embedded systems.  Memory mapped I/O, etc.  Note that JNI goes some of the way to addressing this problem but JNI is clunky to use and definitely adds some overhead crossing the C/Java boundary.
    [FR: Java only supports one floating point rounding mode, so it’s not suitable for scientific applications. This might fall under “direct machine access” but FP rounding modes are really machine-independent because the IEEE standard requires them. “How Java’s Floating-Point Hurts Everyone Everywhere” http://www.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf]
    I’m not so sure how Everyone got hurt: working the simpler subset of FP allowed Java the time to get everything else right.  Had JVMs been forced to implement the whole spec well, we might never have gotten the JVMs we have now.
  • Interpreters.  Interpreters written in pure Java or pure C appear to be mostly equal (I’ve got very few cases where both are written in a pure style), but it’s easier to cross over into the non-pure realm and get 2x speedups from C.  gcc label vars are the obvious use-case here, getting a 2x speedup over pure C in my experience.  Dipping into pure assembly gets me another 2x speedup.  Java’s “Unsafe” classes allow access to e.g. “compare-and-swap” instructions plus unchecked ‘peek’ and ‘poke’ but do not support code-gen or hand-made control flow very well. 
  • On-the-fly code-gen-&-execute.  You can make bytecodes in Java and execute them, but it’s somewhat more difficult than making the same machine instructions on a simple RISC chip… and you need the JIT to do a good job with your bytecodes (there are decent libraries to make it easier but it’s still harder than just doing some bit-twiddly thing and slamming bits into a buffer).  Sometimes what you want is to gen some specific machine instructions that do not resemble bytecodes (see the above comments about direct machine access) or to have fine-grained interleaving between the gen’d code and the static runtime bits (I’ve done sort routines which gen’d the key-compare sequence from a command-line description before sorting and called that from the inner sort loop).
  • OS’s.  These need lots of direct machine access (e.g. hardware interrupt support; page table support), but they also dork with standard execution stacks (like interpreters do) and also load code and execute it (see above comment about making & executing code).  Yes there have been some brave attempts at a pure Java OS, and Microsoft has had a similar research project in this area for a long time which has made interesting inroads into the problem.  So this might be a ‘solved’ problem some day.
  • “Direct Code Gen from C”… carefully hand-crafted C code, such that the author knows (and plans for) which exact machine instructions will be generated from the C compiler.  This is harder to do in Java because the translation layer is much more indirect (I’ve done it successfully, but I know a *lot* about the JIT).  Examples are things like kernels of crypto loops, or audio/video codecs or gzip/compression routines.  Small bits of very hot code with complex and unusual control flow where the author knows a lot more about what’s going on in the code than the compiler does.  This kind of coding obviously does not scale to the 100KLOC program but works very nicely for things that are both very hot and compartmentalize into libraries well.


[MB: Any tool has its own advantages and disadvantages. You simply cannot say Java is superior to C/C++ because Java is written with C/C++ ;-).]

Actually there are several Java-in-Java’s Out There and some are not too far off HotSpot’s mark.

[FG:  I think you’re missing two important issues with Java. One is memory consumption. Java programs use at least twice as much memory as C++ programs because pointers (called references in Java) are used for everything and all strings are UTF-16.]

[AM: Java requires 2.5 times the memory a C++ program requires.]

Yes and no…  Yes in general Java uses more memory than C++ but it’s usually more due to coding styles and data structure choices than pointers-vs-primitives and fat strings.  Whenever I see the 2x data bloat it’s always due to *really bad* data structure choices.  With reasonable choices the bloat is far far lower… and memory is cheap.  I’d really like to see some hard evidence here: really equivalent implementations of larger apps in both C & Java.

[FG: It gets even worse on 64 bit CPUs. For some applications like in memory databases, in memory data analysis, etc, this leads to an orders of magnitude performance difference because the disk has to be touched more frequently.]

I assume you mean 64-bit JVMs and not just 64-bit CPUs…

I guess I’d argue that you just mentioned a strong point of Java: with 64 bit JVMs you can have heaps of 100’s of Gigs of ram (and ram is cheap!) and cache all those DB accesses and touch disk less often.  Azul Systems routinely sells gear to do exactly that.  To be fair, the GCs on X86 JVMs do not handle such large heaps well.  Some Day they will, and then you’ll be delighted to have a 64-bit JVM, pay $49.99 for a few hundred Gigs of ram and skip the disk.

[FG: The second issue is cache locality. Obviously using only pointers instead of values in all collections (other than arrays of primitives) leads to a lot more cache misses.]

64-bit VMs with 64-bit pointers get pointer-size bloat, and that appeared to cost Sparc about 15% in performance. X86-64 picks up double the number of GPRs which offsets the cache footprint lossage somewhat.

[IJ: Hey Cliff, Regarding the comments about memory usage on 64-bit JVMs, compressed references can help for heaps smaller than 32GB on HotSpot.]

[FG: So I would say C++ is a better choice for any application that benefits from keeping a lot of data in memory. By the way, C# is vastly better than Java for such applications due to its value types.]

Yes and no again…  For many applications you’ll pay only 1 more cache-miss per entire array.  For arrays-of-small-structs (e.g. arrays of Complex), you are correct: Java’s lousy there (and I added that to C’s strengths).  When doing performance sensitive arrays-of-small-structs I turn the implementation 90-degrees and implement a small-struct-of-arrays.  It’s clearly a work-around over a clunky language issue… but the performance is solidly there (both in memory footprint and in access speed).
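A minimal sketch of that 90-degree rotation, using a hypothetical Complex example:

  // Array-of-small-structs: one heap object (plus header) per element, and a
  // pointer chase on every access.
  final class Complex { double re, im; }
  // Complex[] vals = new Complex[n];

  // Rotated 90 degrees: a small-struct-of-arrays. Two flat primitive arrays,
  // sequential memory access, no per-element object headers.
  final class ComplexArray {
    final double[] re, im;
    ComplexArray(int n) { re = new double[n]; im = new double[n]; }

    // Example: multiply element i in-place by (x + y*i).
    void mul(int i, double x, double y) {
      double r = re[i]*x - im[i]*y;
      double m = re[i]*y + im[i]*x;
      re[i] = r;  im[i] = m;
    }
  }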

Places where Java beats C/C++ for obvious reasons:

  • For Most Programs, Runtime profiling is able to pay off well.  This is true of most large Java apps; one obvious case is where there’s a generic interface but at runtime only one implementation is ever used: after profiling, the JVM can turn the interface call into a direct call and even inline the call.  It’s well known in the C/C++ world that peak benchmark scores come from using the profiling passes these compilers support, but it’s a pain to use them in practice… so probably 99% of all C/C++ programs compile without the benefit of profiling.  In practice, all Java programs are well profiled, and this profiling data allows for better code generation – sometimes *much* better code generation.
  • Very large programs (1MLOC +).  This is totally anecdotal on my part but my not-very-rigorous survey of the industry tells me that the Java large-program tool-chain (and language features like GC) are more robust and complete than the C equivalents, and this allows teams to write larger programs quicker than they could in C/C++.  Yes large programs are written in C/C++, and yes they get the memory usage “right enough” that the programs run usefully well… but the same program written in Java appears to come together quicker, with fewer bugs and a shorter overall development cycle.  I see a *lot* more 1MLOC Java programs than C ones, and it isn’t because Java programmers write fluffier code (which might also be true…): these large Java programs are really doing a “bigger” job than what can be squeezed into 100KLOC of C code.  In this case “Java beats C/C++” really means: we can’t afford to build these systems in C/C++ but we can in Java… so there isn’t any C/C++ equivalent.  Where’s the C/C++ version of WebSphere or WebLogic?  Maybe somebody Out There can tell me the state of C/C++ Application Servers…
    Got a comment that similar functionality comes from a bunch of separate cooperating C processes.  Not sure I believe that, as I haven’t seen anything close to the level of integration e.g. WebLogic has in the C world.
  • Garbage Collection is just easier to use than malloc/free.  This is well documented in the industry, and yes it’s not entirely “free”.  Yes the heap needs to be managed in production environments, leaks are still an issue, GC pause times are an issue, etc, etc… but overall it’s vastly quicker to write using GC and in the time saved make the program more performant or resilient to GC pause issues and you’ll come out far ahead.  (I’m blithely ignoring all the C/C++ hand-rolled memory management techniques like “arenas” or “resource areas”; these fall into the category of “malloc/free is so hard to use so we rolled our own poor-mans GC but if the language had GC we would probably have never bothered”).
  • GC isn’t just easier to use than malloc/free; it allows for entirely different algorithms.  Many concurrent algorithms are very easy to write with a GC and totally hard (to downright impossible) using explicit free – see the sketch after this list.  Reference counting is commonly used in these situations in C/C++ but it adds a lot more overhead (sometimes a *lot* more overhead) and is much harder to get right: e.g. a common mistake is keeping the count in the structure being guarded.
  • Very Good Multi-Threading Support.  Parallel programming is just easier in Java.  
    [SS: Next dimension of Java performance is the easier access to parallelism. Using j.u.concurrent I can beat C/C++ most of the time just using more processors in my 16 core server. Concurrent memory management by hand is a pain in the ass and using GC kills the point of using C/C++]
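One concrete example of the “GC enables different algorithms” point: the classic Treiber lock-free stack is a few lines in Java because GC makes memory reclamation safe for free, while a C/C++ version must somehow cope with pop() racing a free() (and the ABA problem that recycling nodes creates).  A sketch:

  import java.util.concurrent.atomic.AtomicReference;

  // Classic Treiber stack: trivially safe in a GC'd language, because a popped
  // node cannot be freed and recycled while another thread still holds 'head'.
  final class LockFreeStack<T> {
    private static final class Node<T> {
      final T value; final Node<T> next;
      Node(T v, Node<T> n) { value = v; next = n; }
    }
    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T v) {
      Node<T> head;
      do { head = top.get(); }
      while (!top.compareAndSet(head, new Node<>(v, head)));
    }

    public T pop() {
      Node<T> head;
      do {
        head = top.get();
        if (head == null) return null;                // empty stack
      } while (!top.compareAndSet(head, head.next));  // GC prevents ABA reuse
      return head.value;
    }
  }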


Places where C/C++ proponents claim C beats Java, but it doesn’t appear (to me) to do so:

  • Most ‘plain’ modest-sized programs.  These are programs requiring no more than the “usual” compiler optimizations and are not so tightly constrained by machine size or startup time.  Examples might be things from simple compute-bound loops (string hash, compress) up to IDEs & editors (and most visual tools); DB cache/drivers, etc.
    [SS: For the example where Java beats C/C++ a number of times visit www.caucho.org and their OSS PHP engine Quercus. You can check the numbers yourself using my http://code.google.com/p/wikimark/ For the example of super-fast memory DB in Java visit http://www.h2database.com it beats MySQL both in performance and footprint 🙂 Of course it’s specifically tuned for in-memory use. And Java is just an example of so called Managed Runtime (Microsoft term). If we are talking about Java performance we are mostly considering JITed code performance. But JIT can be effectively applied to C/C++ as well, see http://llvm.org Apple Mac OpenGL implementation is based on LLVM and all OpenCL implementations too. So anyone running their games on Mac or planing to use OpenCL will use the JIT. ]
  • 16-bit chars vs 8-bit chars.  It’s true that ‘char’ in Java is similar to C’s ‘unsigned short’, but ‘byte’ exists and is similar to C’s ‘signed char’.  This appears to confuse some number of new C-to-Java converts, but many of the old C tricks for dealing with different sized data types work great in Java, and the JIT is very good at bit-fiddly code.  There are generally lots of Strings in big Java programs and this leads to code bloat, but if you deal with high-volume String data and it’s all ASCII (no internationalization!) then converting the data to byte arrays makes sense.
    [MI: Internally, bytes are stored as 16 bit integers but cast down to only 8 bits.]
    No, ‘bytes’ are stored in memory as 8 bits always; ‘chars’ and ‘shorts’ as 16 bits.  CPU register contents always depend on the JIT’ing of the moment – for both C and Java.
    [MI: it’s necessary to add a check for the state of the byte and a math operation to correct it prior to every byte transformation, and a check for the state afterwards.]
    Just mask the results (no “testing” or control flow), or use byte-typed variables.
  • Raw Small Benchmark Speed – This is actually mostly a non-issue, as Real Programs rarely look like these things, nor run for <1sec, nor have all their time spent in 3 lines of uber hot code, etc… But Java still looks fairly good here, despite the general static-compilation bias built in to tiny short running programs:
    [SS: First about Java performance. Java is the second fastest mainstream language after C/C++ in the Benchmark game, see http://shootout.alioth.debian.org/ And it’s less than 2 times slower than C/C++ in this mostly numerical benchmark. It could also be noticed that JRuby is faster than native Ruby. Latest versions of Sun JVM efficiently use SSE and are comparable with C in FP performance. http://math.nist.gov/scimark2/index.html]
  • First-Person “Shooter” PC Games [Retracted: I’ve gotten several well-written posts explaining how games have changed over the past twenty years.  :-)] using the same game card & interfaces (e.g. DirectX).  Most of the heavy lifting is done by the game card itself; the game engine is more about managing other resources and running the game state & AI’s… all of which seems to me to work nicely in Java.  I’ve met at least one person working this approach for-real (and it was *working* for real, but I haven’t kept in touch so I don’t know how it came out in the end).   

[AD: Given that most games use ‘Optimizations beyond “gcc -O2″‘, and tend to have an interpreter or scripting language, and often require ‘Direct machine access’, plus plenty of tricky computationally intensive maths, that would put them squarely into the world of C++. Especially any game designed for a console, or handheld device.]

For PCs: I’m still thinking Java is up to the task.  I’ve yet to see games that needed BLAS-style routines; simple ‘saxpy’ style loops should come really close to C performance (not heavily tested!  But I’ve routinely talked people into testing Java FP performance and routinely had them come back with a positive report).  If ‘direct machine access’ is limited to a handful of graphics-card calls per frame (so hundreds to thousands per second), then JNI can handle that no-problem.  The games I worked on long ago didn’t use any scripting; back then we would have used Java instead of scripting, so I don’t have a feel for how crucial scripting is to game development.

For consoles & handhelds perhaps; I did games on console like devices a long time ago (20+ years) and my vague recollection is that if a modern JVM could be squeezed into such a device it would be able to do the job. I weakened my assertion to just PC games.

[TNT: A surprisingly large part of a game is performance sensitive and requires C code. Many games are CPU bound (not GPU bound as you suggest).]

Ahhh, but exactly what I am showing here is you cannot equate ‘performance sensitive’ and ‘requires C code’.  Java is up to speed for most (if not all!) of the performance sensitive parts.

[AJ: AAA games now typically use middleware or custom physics engines that have highly-tune collision detection code (also used by game “AI”) and custom nonlinear solvers. Both are often SIMDed, prefetched and cache blocked, with the CD sometimes doing bit-fiddly decompression, and the solvers sometimes using custom code gen (for derivatives and such).]

JVM: SIMDed: 1/2 yes, prefetched: yes, cache blocked: no.  Perhaps closer than you think but probably not close enough.

[AJ: Some PC games push 5-10K draw calls per frame, with very roughly 4 additional state-setting calls per draw. So @60fps, that’s something like 1.5-3M/sec. Games are often single-thread limited on this alone.]

The obvious fix is to batch the graphics calls per JNI call, but this starts to look like a hybrid C/Java solution and those rarely look pretty.

[AJ: PC games also chew plenty of CPU with real-time decompression (unzip or custom) and sometimes recompression (say JPG->DXTC).]

Azul Systems can’t use the X86 ZIP routines so we went with the Java ones: performance was about as good as ‘gcc -O2’ and it was easy to parallelize.  After parallelization the Java version was as fast as we cared to add CPUs.

[JJJK:  As for Java and games, it actually works better than most people think. No problem for indie games. But there are some issues:

GC pauses ruin everything. Smooth framerates are difficult to achieve with the usual Fully-OO-Java-Programming-Style. Also the resources on the GPU are not garbage collected, so you’ll have some kind of paradigm-clash anyway.

If you want to do some transformations on vertex arrays on the CPU, you’ll have to do them on direct byte buffers, since they are the only arrays that can be sent to the GPU. Or do them on java arrays and copy them into byte buffers; I have no idea what that does to performance though.

3D-Vector math in java is plain ugly. You can either make it readable or fast. And if you don’t pay attention, it will be neither.

On the other hand, with more data and computation going to the GPU (and staying there for the most part), Java is at least becoming moderately attractive for games. I worked in a company which is now starting to release web-based 3D java games.]

“Flaws” in most Java-vs-C performance methodologies:

These are ways in which many many people (wrongly) claim Java/C/Ruby/etc is faster than C/Java/Python/etc.  Sometimes these issues aren’t flaws at all, but instead point out conflicting basic requirements that truly make one language superior to another for a particular task.

  • Warmup.  Sometimes no-warmup is important (see comments above about pacemakers), but more often a short warmup period is irrelevant to the overall application.  If I’m using an IDE, I expect a largish loading period… but then I’m using the IDE all day. I don’t use an IDE for 0.1 sec.  If warmup is NOT important to the application, then allow the JVM a warmup period before comparing performance numbers.  Many of the benchmarks in the language shootout at http://shootout.alioth.debian.org fall into this camp: too short to measure something interesting.  Nearly all complete in 1 minute or less.  A very large set of Java programs (and programmers) write server programs that run for days, and a combination of throughput and quick response under load are the key measures.
  • Not comparing the same overall algorithm.  This is common for larger programs where exact Java & C equivalents do not exist… sometimes one version or the other gets saddled with a really bad implementation.  And sometimes you just can’t do it but people try to “fake it” with a straw-man for the “other” side.  E.g. any-GC’d-language doing a bunch of concurrent algorithms versus non-GC’d language, or direct code-gen especially of unusual machine instructions (e.g. X86 codecs using MMX).  Again the language shootout suffers this problem… and so all results have to be carefully inspected.
  • Nyquist sampling or low-res sampling errors.  e.g. using a milli-second resolution clock when reporting times in the milli-second range.  Both Java & C have common timer APIs reporting times below the millisecond (micro & nanos), but actual real hardware & common OS’s vary widely with what they implement. 
  • Broken statistics.  This is a hard problem in Java and easy to get wrong for subtle reasons, but people get it wrong in C/C++ and other languages also.  Running anything *once* suffers from a huge variety of startup issues.  Re-running in the same program gets you past one issue and into another: the JVM/JIT “compile plan” will vary from run-to-run.  Within a single run you might repeatedly get “X frobs/sec” (say for 10 runs in a row in the same JVM launch) but if you restart the JVM you can easily see “Y frobs/sec” reliably (repeated 10 times in a row in the same JVM) one-in-10 runs.  This kind of variation can only be managed with proper statistics; see “Statistically rigorous Java performance evaluation”.  (A minimal harness sketch appears after this list.)
  • Flags, version numbers, environmental factors all matter: “java” is not the same as “java -client” or “java -server”, and might make a 10x difference one way or another.  Same as using “gcc -O1” vs Intel’s reference compiler set at the max “-O” flag.  Linking your C program as “ld b.o a.o” can give a 30% difference from linking as “ld *.o” (link order affects I-cache layout).  Environment variable sizes (i.e., length of your username) can push the initial stack positions up or down to where the stack collides in the cache or not, etc.  See “Producing wrong data without doing anything obviously wrong!“. Again, well reported numbers and good statistics are your friend here.
  • Varying dataset sizes: I’ve seen test harnesses that re-sized the dataset to keep the runtimes approximately equal to 1 second, but once the dataset no longer fit in cache performance dropped by 10x.
  • Claiming ‘X’ but testing ‘Y’:  examples might be SpecJVM98 209_db claiming to be an in-memory DB test, but really it’s a bad string-sort test, or writing a single-threaded program to test OS locking speed (with JVMs, uncontended locks will never hit the OS) or the Caffeinemark logic test (with a little loop unrolling the code is entirely dead).  See more examples here. Modern larger benchmarks do fairly well here but many microbenchmarks run afoul of this.
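To make the warmup and statistics points concrete, here’s a minimal harness of the sort I’d consider a bare minimum – illustrative only; see the papers cited above for real methodology, and remember to also repeat across JVM launches since the compile plan varies run-to-run:

  // Minimal benchmark-harness sketch: warm up first, then time many runs and
  // report mean and standard deviation instead of a single number.
  public final class Bench {
    public static void main(String[] args) {
      final int WARMUP = 20, RUNS = 30;
      for (int i = 0; i < WARMUP; i++) workload();  // let the JIT settle down
      double[] secs = new double[RUNS];
      for (int i = 0; i < RUNS; i++) {
        long t0 = System.nanoTime();                // sub-millisecond resolution
        workload();
        secs[i] = (System.nanoTime() - t0) / 1e9;
      }
      double mean = 0, var = 0;
      for (double s : secs) mean += s;
      mean /= RUNS;
      for (double s : secs) var += (s - mean) * (s - mean);
      double stddev = Math.sqrt(var / (RUNS - 1));
      System.out.printf("mean %.6f sec, stddev %.6f over %d runs%n",
                        mean, stddev, RUNS);
    }
    static void workload() { /* code under test goes here */ }
  }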

[SG: One thing I might suggest is dropping your local memory caches before performing the comparison. Java gets an obvious speed boost because the JVM code tends to be cached, and the C code also has the same.

sync && echo 3 > /proc/sys/vm/drop_caches

Then run each program twice. First one giving you the time it takes to access and load the JVM and other libraries included, and the second giving you what it takes once the libraries and everything is cached.]

No.  Run at least *30* times to get a decent statistical regression.  See the above papers: the ‘run it twice’ methodology is *seriously* flawed.  However, dropping the local memory caches probably helps.  Here’s a nice writeup on Java perf testing issues: http://www.ibm.com/developerworks/java/library/j-benchmark1.html.


Some Commonly Mentioned Non-Issues:

  • [AM: C++ stack allocation beats Java GC for allocation of small objects.]

    Does not.  You have evidence?  I have evidence that small object allocation in Java is darned near free… but not totally free.  So C++ might win here but not by any interesting amount.  And HotSpot does stack allocation when it can.  You should do some testing here to convince yourself.

  • [AM: Java apps have lots of dynamic casts in them.]

    Yes, and it’s free.  Really 90% of all casts are removed by the JIT and the other 10% take a ‘load/compare/branch’ sequence which is entirely predicted by X86 hardware and runs in 1 clock cycle.

    [EXO: This is pretty off topic, but I’d like to know how you’ve reasoned that a load/compare/branch sequence can be done in 1-cycle on x86. Of course this varies by implementation, and I’m sure that there are x86’s that can handle the compare and branch in parallel despite having a dependency chain on the flags, probably due to careful misalignment of the relevant pipeline stages. But there’s no way you can cmp on a loaded value the same cycle it’s loaded in. The L1 dcache’s 2 or (more likely) 3 cycle latency is going to get in your way. Sure, the pipeline can be busy doing other independent things in the meantime, but that’s always the case and I don’t think it’s what you were getting at.]

    You’re close: indeed the cmp happens many cycles after the load.  Meanwhile the branch predicts – and correctly >>99.99% of the time – and X86 O-O-O dispatch carries on past the branch.  As long as the load isn’t a full miss to memory, the whole ld/cmp/jne sequence will retire long before it causes the X86 pipeline to stall, and consume perhaps 1 clock of decode/exec/retire work.

  • [AM: Interface dispatch is slower than double dispatch through a vtable.]

    Yes and nearly never taken.  I’ve yet to see interface calls show up on any profile of any interface-heavy crazy Java programs.  Inline-caches replace 99+% of all interface calls.

  • Try to write “XXX” in Java with the same speed.  In this case: http://rapidxml.sourceforge.net/manual.html.  In general, these kinds of comments are useless, because who has the time to do it?  In this case, it looks to be about a month’s worth of work (you gonna pay my salary?) … and entirely doable… except I’d come up with an entirely parallel XML parser, so I believe time to parse could be dropped from roughly ‘strlen()’ on a long string to ‘strlen() divided by #CPUs’.  The thing these kinds of comments totally miss is this: plain ‘olde Java-looks-like-C code translates to machine instructions same as C does.
  • [TNT: Where does C# stand compared to the mentioned languages?]

    Not that I track C# all that closely but… I believe the JIT produces substantially slower code than Java; Microsoft leans pretty heavily on static compilers (and has a better statically-compiled-code integration story than Java does).  Also, Java’s Memory Model is well tested & well used; the C# equivalent appears to be not so robust, in part because of the requirement to run all that old code.  A real C# expert should chime in here; I’m not able to give C# a fair treatment.

    [IK: C# would land in the same bucket as Java.  It might be slightly slower because more money and man-years were put in JVM, or it might be slightly faster because of unsafe and structs and stuff.

    “Slightly” means “a few benchmarks would be ruined”. Like, I’ve heard, CLR did NOT do interface-vs-implementation profiling, thus some code gained 10x boost by replacing all occurences of “IList” with “List” (it couldn’t figure out how to dispatch calls to concrete List class really fast when the slot type was IList).]

    [PL: C# performance would fall into roughly the same range as Java. I have a RETE engine written in Java and C# and the Java version is faster. One area where CLR takes a performance hit is autoboxing/unboxing. At least that’s from my experience with my rule engine. Aside from that, I would say the performance difference isn’t significant.]

    Last round of anecdotal evidence I gathered (now 2 yrs old) showed Java JITs well ahead of C# JITs.  Would love to see some hard numbers here.

  • [CC: I still don’t understand why they do not cache the validated and optimized memory image for next time the application is launched. .NET can do this.]

    It’s a really hard problem for multi-threaded programs – which typically do parallel class loading – and lazy compilation – which typically inlines the classes that are loaded *at the moment*. Re-using previously JIT’d code will require, amongst other things, that your code loading order is the same, and the last code-load order you JIT’d from probably depended on stuff like network packet arrival order.  Given that startup time for big User GUI apps is typically NOT the JIT (it’s the DISK), I personally have not been very motivated to try and do cached code optimizations.

    On purpose, I’m being sloppy in the numbers I report here… because I don’t want to spend my entire life beating this tired horse.  But if somebody reports something widely different from what I’m seeing, I’m happy to dig in further – if the reporter is also.  I don’t have access to every kind of compiler & system on the planet so I can’t repro other peoples’ results easily.  Also in my experience, the number one reason for conflicting reports here is because the reporter has something really simple wrong on their end and a short email session will clear it up.

    [WW: Here’s a hint. Next time you write silly code for comparison, make them do something more useful than integer operations and basic maths… those will always be the same (or close) between a compiled language and an interpreted one with JIT.]

    Sorry, but I have a life outside endless blogging… and 100KLOC examples aren’t useful in a blog format anyways.



    String Hash Example:

    Complete C++ code:  http://pastebin.com/d280c1cd4
    Complete Java code: http://pastebin.com/m541c4655
    Here’s a bit of code computing string hashes that looks ideal for a static compiler (ala C/C++), yet Java is tied in performance.  I used fairly recent versions of ‘g++ -O2’ and ‘java -server’ on a new X86 (-server is the default for me, so no cmd-line argument needed).  The inner loop is:


      int h=0;
      for( int i=0; i<len; i++ )
        h = 31*h+str[i];
      return h;

    Here I ran it on a new X86 for 100 million loops:

    > a.out         100000000
    100000000 hashes in 5.636362 secs
    > java str_hash 100000000
    100000000 hashes in 5.745 secs

    Last year I ran the same code on an older version of both gcc & Java, and Java beat C++ by 15%.  Today C++ is winning by 2%… so essentially tied.  Back then the JVM did unrolling and “gcc -O2” did not, and this code pays off well when unrolled.

    [TD: In the String Hash example, is the unrolling done by the JIT or javac?]

    Done by the JIT.



    Sieve of Eratosthenes:

    Complete C++ code: http://pastebin.com/m3784c090
    Complete Java code: http://pastebin.com/m4b414295
    Here’s a simple Sieve of Eratosthenes, again compiled with g++ -O2 and run with java -server.  Again this looks ideal for a static C/C++ compiler and again Java is tied in performance:


      bool *sieve = new bool[max];
      for (int i=0; i<max; i++) sieve[i] = true;
      sieve[0] = false;
      sieve[1] = false;
      int lim = (int)sqrt(max);
      for (int n=2; n<=lim; n++) {  // note <=: must sieve with primes up to and including sqrt(max)
        if (sieve[n]) {
          for (int j=2*n; j<max; j+=n)
            sieve[j] = false;
        }
      }


    I computed the primes up to 100million:

    > a.out      100000000
    100000000 primes in 1.568016 secs
    > java sieve 100000000
    100000000 primes in 1.548 secs

    So again essentially tied.


    [AJ: How do the sieve timings differ if you use an array of bits rather than of bytes?]

    Good question, but I bet they’re tied again.  The JIT does fine with bit-twiddley stuff.  Test it and let me know!

    [AL: Note that your sieve is not a true one: http://lambda-the-ultimate.org/node/3127]

    Cute!  Just swap ‘2*n’ for ‘n*n’… but this doesn’t change the C-vs-Java argument. 


     

    Profiling Enables Big Gains:

Complete C++ code:
    vcall.cpp  http://pastebin.com/m70dbe7d6
    vcall.hpp  http://pastebin.com/m13055a8c
    A.cpp      http://pastebin.com/m5aa1b232
    B.cpp      http://pastebin.com/m2e46ec23

    Complete Java code:
    vcall.java http://pastebin.com/m149bbdf0
    A.java     http://pastebin.com/m2e33d6df
    B.java     http://pastebin.com/m2b1d75bb

    This bit of code makes a virtual-call in the inner loop of a simple vector-sum… and selects the v-call target based on a command-line argument.  The JVM profiles, decides there’s only 1 target and inlines … and unrolls the loop and discovers a bunch of simple math that collapses in the optimizer.  The C/C++ compiler can’t do it because there really are 2 possible targets at static compile time.  Delaying the compilation until after profiling can enable major optimizations downstream.  I compiled the C++ version with ‘g++ *cpp’ and the java version as ‘javac *java’.

     

        int sum=0;
        for (int i = 0; i < max; i++) 
          sum += val();  // virtual call
        return sum;

    Here I run it on the same X86:

    > a.out 1000000000 0
    1000000000 adds in 2.657645 secs
    > java vcall 1000000000 0
    1000000000 adds in 0.0 secs

The Java code is “infinitely” quick: after JIT’ing, the -server compiler essentially deduces a closed-form solution for the answer and can report the correct result with a few bits of math… no looping.  This example is ridiculous of course, but it points out the obvious place where dynamic compilation beats static compilation.  This “make a virtual call into a static call” optimization is a major, common optimization JITs can do that static compilers cannot.
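A sketch of the chain of deductions (my reconstruction, not actual -server output; suppose profiling shows the lone target returns the constant 1):

      static int val() { return 1; }   // the lone target profiling found

      static int loopSum( int max ) {
        int sum = 0;
        for( int i=0; i<max; i++ )
          sum += val();                // step 1: devirtualize - now a direct call
        return sum;
        // step 2: inline 'return 1'           ->  sum += 1
        // step 3: induction-variable analysis ->  return max;  (the "0.0 secs")
      }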

     

 [QB: If your compiler can reason that the virtual function is pure, then the entire loop can be folded into a single vcall and multiply. ]

     

    [ALJ: It’s worth noting that you can achieve the same effect – with the compiler turning multiple additions into a single multiply – by using C++ templates.]

My example is perhaps too trivial; I can get the same performance benefits with a non-pure function (so no chance of using ‘pure’ in a static setting).  In practice, you get dozens to hundreds of Interfaces turned into concrete Classes, and hundreds to thousands of call sites so optimized… using a code-cloning technique like C++ templates is going to blow you up with exponential code growth.

[QB: Although initially expensive and not quite thread safe, self modifying code will do the trick here. Alternatively, you could use existing dynamic recompilation techniques to make it thread safe, which I understand is probably veering off into dynamic compilation land…]

Exactly – inline caches are thread-safe self-modifying code… but they still look like a call (inline caches make vtable calls as cheap as plain calls).  In this case the big gain comes from removing the call altogether, which means knowing there’s only 1 implementation of the virtual.
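For readers who haven’t met inline caches: here’s a rough Java-flavored model of a monomorphic one (my sketch – the real thing is a patched machine-code call site, not an object):

      interface Shape { double area(); }

      final class Circle implements Shape {
        final double r;
        Circle( double r ) { this.r = r; }
        public double area() { return Math.PI*r*r; }
      }

      // Conceptually, one of these lives at each compiled call site.
      final class AreaCallSite {
        private Class<?> expected;        // receiver class cached so far (null = empty)

        double invoke( Shape s ) {
          if( s.getClass() == expected )  // fast path: one compare...
            return ((Circle)s).area();    // ...then a direct, inlinable call
          return miss(s);                 // miss: patch the cache or go megamorphic
        }

        private double miss( Shape s ) {
          if( s instanceof Circle )
            expected = Circle.class;      // the (thread-safe, in real life) self-modify
          return s.area();                // full virtual dispatch
        }
      }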


     

    I hope this clears up where I stand on this issue…  and I’m (sorta) looking forward to the flamefest which is surely coming…

    Cliff

     

     

     

     

     

     

    2009 ECOOP

    Long delayed, but at last I found time to publish my notes.

     

I got a free all-expense-paid trip to Italy, and all I had to do was give a keynote speech (slides) at 2009 ECOOP.  It’s a nice gig if you can get it.  Travel to Genoa from San Francisco takes a while; my 8:30am SF flight got me into Genoa about 2pm on the *next* day – plus 2 hrs lead time at SF, plus a 1hr drive to SF – I started my “day” at 6am on Friday and ended it at 8pm on Saturday.  I managed to stay up till dinner (barely) then crashed pretty hard.  Sunday was my day for sightseeing; I managed to wander pretty far all around Genoa; rode some funiculars; walked all over; took loads of pictures; ate weird food and generally acted like a typical American tourist. 

There have been loads of sidewalk vendors, aggressively selling fake designer purses and sunglasses laid out on sheets on the sidewalk.  Monday afternoon I got to watch a bust go down; lots of yelling, they grabbed their sheet bundles and took off running with plain clothes police in hot pursuit.  Somebody lost his wares and the police quit chasing to grab the loot.  It’s Tuesday evening and the vendors haven’t returned.  I promised my wife I’d get her a cheap Gucci purse and now I’m regretting I didn’t do it earlier.

     

Hotel Bristol is a nice older place (means: lots of paint chips and paint layers, big chandeliers, gold leaf trim, dark wood, creaky old elevators; rooms are a mix of really old and really new; follow the link and ogle the pictures).  My room is vast with 15ft ceilings (with chandelier of course) and a large jacuzzi tub; enough room to put up a basketball hoop.  The conference food has been pretty nice (free good wine with every meal); lunches are served in the Palazzo Ducale – historic residence of the Doges.  I can’t do the setting enough justice; the website is nice but the pictures don’t really tell the story.  50ft ceilings with 20ft chandeliers and a dozen 8ft marble statues; gold leaf trim on every fresco; stunning paintings the size of basketball courts…  I totally laid down on the floor and stared at the ceiling for a good hour.  The local restaurants, on the other hand, are a hit-or-miss affair; food is only served at certain hours; sometimes the food has been very good and sometimes no better than the local deli around the corner from Azul.

     

    The ICOOOLPS workshop was on Monday; I got invited to talk there at the last minute.  I had some misgivings but most of the papers at the workshop were pretty nice; I enjoyed myself.  I’m still fighting jet-lag (today is Tuesday) pretty hard, so I’m writing this to try and stay awake until dinner.  The papers are here.

    1st up – The ICOOOLPS workshop.  ECOOP notes are further down.  As usual I abbreviate ruthlessly; skip some talks; take notes in a stream-of-consciousness style.  Caveat Emptor.  And before I forget:

    The “Nick Mitchell SimpleDateFormat Challenge”: parse a sample common SDF and JIT code to translate Date objects to ASCII efficiently.  Similar to C/C++ compiler optimizations for ‘printf’ strings.

     

    Let me know when you got something working.

     

    Towards an Actor-based Concurrent Machine Model

    “delegation based” machine model; dynamic separation of concerns “MDSOC”

    Machine model: objects, msgs, delegation.  High lvl objects represented as at least 2 low-level objects; a “proxy” and a real object.  Indirection to allow message interception.  All msg-sends get forwarded thru the proxy (“message” here is a java-lvl function call, not an OpenMPI msg – but could be either in a distributed system).

    Aspect-Oriented-Programming – intercept msg-send links (between Class-proxy and methods in the actual Class-body), etc.  How to do CHA with basically dynamic sub-class insertion?  Just re-JIT in the new class hierarchy?

Ok – now new stuff… want concurrency in the delegation function.  Actor is a collection of local objects (and ptrs to remote objects).  Fcn calls between local objects are fast; fcn calls across actors are basically remote-msg sends.  Then receiving a msg invokes a co-routine in the remote actor.

Ugh, using ‘yield’ with the co-routines.  Sounds buggy to me: the semantics of dead-lock will depend on properly calling ‘yield’. 

    (assuming a VM w/ actors not thread support)

     

     

     


    An Efficient Lock-Aware Transactional Memory Implementation
    Justin Gottschlich

    Trying to integrate into the “boost” STM system.

    Locks+TM break things.
    But locks are prevalent – must be able to compose with them.

    Example: Locks outside Transaction
    – T1 runs lock; T2 runs transaction
    Example: swap code.  Works with only-locks or only-TMs but breaks if mixed

    Locks inside Trans
    – T1 runs trans, calls lock; T2 runs either trans or lock

    Prior work: “full lock protection” – if you take a lock, then the XTNs are all stopped/aborted/etc and you only get locking behavior.

    Offers a programmer solution: programmer lists lock/XTN conflicts and the XTN system deals when you take a conflicting lock.  (I believe this is unrealistic).

     

     

     


    Eric Jul? – Whiteboard not slides….

    Blurring the line between compilers & runtimes

    Gives some examples already done by hotspot

Nice discussion, but nothing new.  Mostly trying to get a discussion going about crossing compilers/JITs & runtimes.  He might not have been aware of what the JIT already does.

     

     

     


    Trace-Based Type Specialization in JavaScript
    Andreas Gal

    Same basic talk as given at PLDI.  TraceMonkey? FireFox 3.5
    JavaScript & Flash instead of Java/C/C++

    JS has been very slow (interpreted only).  But is here to stay; very popular & growing.  ActiveX & client-side Java dying out (not sure about client-side Java which was never popular in the 1st place).

    Static typing makes life easy, but dynamic typing is required.  More complex data types and more runtime tests.  Tag bits to be checked; overflow to Double, etc.

    Coming around to wanting a HotSpot like dynamic JIT’ing thing based on types that happen to be true at the moment.  Basically, types in program traces remain stable over time.

      – for( var i=0; i<100; ++i ) { /* nothing */}

    loop:
      if ( int(i) >= 100 ) break;
      i = box(int(i)+1);
      goto loop;

So a trace can record, e.g., that ‘i’ has always been an ‘int’ so far.  Traces have guards on the input types, and are type-specific.  Function calls are inlined in the traces, along with guard statements to check that you are taking the same control-flow. 

    Tracing loops, and can verify trace is type-stable across loop.  So can remove e.g. boxing in the loop.

    But real traces have many exits and the many exits are really taken.  So trace along each exit.  Build trace-trees, rooted at loop headers (exponential growth of trace trees as the loop DAG is split out?).  Can only link back into original tree if types all match up – else need a new loop /tree header.

Suffering for lack of a language spec; JS programs are driving the language semantics (i.e., w/a new interpreter some JS programs fail so we call the interpreter buggy – even if it meets the loose “spec”).  No notion of a memory model or threading; lots of other holes in the language.  Echoes of what Java went through: the popular implementations defined the spec.

     

     

     


    Tracing the Meta-Level: PyPy’s Tracing JIT Compiler
    Carl Bolz

    PyPy is a tracing JIT compiler.  Now apply this tracing JIT to an interpreter. 
    Compiler “Restricted Python” RPython – can target C, Java, .Net
    Various interpreters: python, prolog, smalltalk, scheme, etc

    Basic idea: 
    trace loops; look for type-stable loop execution; look for similar code path loops.

But the dispatch loop in interpreters means you never execute the same loop code twice (because each time you are running a new, different bytecode).  Goal: trace the user-mode program, not the language interpreter.  Effectively the tracing interpreter unrolls the bytecode dispatch loop.  Provide 3 hints to the language interpreter: a hint for the position key; here is the language interpreter’s BCI; here are the backwards edges; here is the PC modification.
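To see why, here’s a toy interpreter dispatch loop in Java (my sketch, nothing to do with PyPy’s actual code).  The one hot ‘while’ runs a different bytecode each trip, so naive traces of it never stabilize; the hints let the tracer key traces off the interpreted program’s PC instead:

      class ToyInterp {
        // Toy stack machine: the single hot loop is never "the same" twice.
        static int run( byte[] code, int[] stack ) {
          int pc = 0, sp = 0;
          while( true ) {
            switch( code[pc++] ) {
            case 0: stack[sp++] = code[pc++];       break; // PUSH_CONST
            case 1: sp--; stack[sp-1] += stack[sp]; break; // ADD
            case 2: pc = code[pc];                  break; // JUMP (backward edge: hint here)
            case 3: return stack[--sp];                    // RETURN
            }
          }
        }
      }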

    Works fairly well to clean out the language interpreter from the traces.
    Then get traces which are fairly clean & can JIT them.

    Bears resemblance to partial evaluation, arrived at by different means.  Future work: better optimization of traces; some escape analysis to remove boxing operations.  Optimize frame objects, apply to larger programs.

     

     

     


    Faster than C#: efficient implementation of dynamic languages on .NET
    Antonio Cuni

    Trying to make e.g. Python faster on .Net.
    Looking at IronPython, Jython, JRuby, Groovy
    (also Self, JavaScript/TraceMonkey/V8)

    Why so slow?  Hard to compile efficiently; lack of type info at runtime; VMs not optimized to run them.  .Net is a multi-language VM, right (sure, as long as the language is C# – his quote, not mine!).  JVM is in better shape, but still heavily optimized for Java.

    JIT compiler?  Wait until you know what you need; interleave compile-time and runtime; exploit runtime info.  JIT layering; fight the VM…

    PyPy; JIT compiler generator; Python semantics “for free”.  JIT frontend not limited to python; JIT backend: x86 or CLI/.NET backends.  Fun games with partial eval: assume e.g. Python bytecodes to be constant & constant-prop them into the python interpreter.

Do even more fun with constants than HotSpot+OnStackReplacement – totally doing speculative constant-value JIT’ing – if this argument is of value ‘3’ then here is the JIT’d code.  Trick is to pick which variable to constant-speculate on (and getting that speculated value as well).

    Not yet doing all of Python, but getting really great speedups.

     

     

     


    Strata Virtual Machine

    Software Dynamic Translation – read a binary, & translate it/jit the translated code.

    Using a code-based hash-table lookup.  Hash; jump to hash-table entry.  Miss: jump to ‘strata’ interpreter
    Hit: jump to bucket; bucket checks target (like HS inline cache, check at target)

    HotSpot could use this to make more efficient v-calls?  Hash; indirect jump to nearby code table; table jumps to target & checks target; expect no misses after warmup.  Nah… still got indirect branch in there.

    Looks like a standard binary dynamic translation type stuff (converting PPC instructions to Java bytecodes?)

     

     

     


    Automatic Vectorization in JIKES RVM

    Using SIMD ops on X86.  Can’t use BURS/BURG to pattern-match vector ops.  Unroll loops to make the patterns more obvious.

    Basically getting some SIMD stuff to work (but talk given by PC not by author, and PC believes this is not the right way to discover SIMD ops).  I also believe this… it’s not a clean fit to BURS.

    Code emit via simple bitmask/shift/and/or.

     

     

     


    JIT Compilation on ARM Processors
    Michele Tartara

ARM – 31 32-bit GPRs, 16 available at a time?  SP, LNK, PC are GPRs.  Fixed-format 32-bit ops.  Dynamic fast compilation (cell-phone targets).  No BURS or tiling, just greedy rules.  This is probably the right way to go always.

     

     

     


    ECOOP 2009 Proper Starts

ECOOP is being held in the Palazzo Ducale (Ducal Palace) – the main conference presentation chamber has perhaps 40ft ceilings, acres of gold leaf on the walls in between the massive medieval paintings.  The speaker dais was clearly meant to hold a throne or the orchestra; it’s a circular marble dais perhaps 20ft in diameter with marble balustrade and marble railings.  The high-tech screen & projector setup in the middle is really anachronistic.

    The adjacent formal ballroom is much larger; 50ft+ ceilings; chandeliers of at least 20ft tall and 30ft in diameter; paintings & gold leaf in plenty plus also perhaps a dozen 8ft marble statues with ancient greek themes.

     

     


    Keynote – Classes, Jim, but Not as we Know Them
    Simon P Jones

    As usual for Simon, he gave a wonderful presentation.

    History – Haskell is 20yrs old.  Lots of fun with new languages & machines & ideas (Functional Programming).  After Backus’s Turing Award lecture opened the gates, a storm of new languages hit the field.  Took 10 yrs before Haskell formed out of the chaos of new languages.

    Lifetime of most programming languages:

    • Invented by 1 person, used by 1 person, dies within a year.
• Slow death version: invented by 1 person, used by 100, dies in 5 yrs
    • Successful: used by 100K+ programmers, never dies, e.g. Fortran still alive
    • Haskell?  hovering around 1000 programmers, but sharp recent uptick?

Haskell might cross over into immortal language (100K+ programmers)?  Hackage: user-uploaded libs, reaching 300 new libs uploaded/month, and approaching 1 million downloads.

    Type Classes- 
    Shows examples of lots of things you want actions on a wide variety of types:

    • equality (for set membership?)  how to do equality of functions?
    • ordering (sorting lists)
    • print/showing
    • serialization
    • computing hash function

     

    (so far sounds like Java interfaces)
For all ‘a’ such that ‘a’ is an instance of class ‘Num’:

      square :: Num a => a -> a
      square x = x * x

    Num classes have the following methods: +,-,*,/, etc.
The implementation of Num classes binds ‘+’ to e.g. ‘plusInt’ or ‘plusFloat’

    Implementation is perhaps more clever than Java invokeinterface bytecode lookup: is passed in a vector of fcns, of the correct type of interface ‘Num’.  The fcn lookup happens on this particular ‘vtable’ – call it an itable, but it’s really a unique v-table per interface.

For Haskell, this is all done with syntactic sugar over the basic Haskell.  Which itself is very cool (e.g. the Haskell compiler (javac equivalent) is doing interface calls w/syntactic sugar).

    CLIFF NOTES: This is a faster way to do Java interface calls!!!  Or maybe this is how they are impl’d?  Find the i-table from a list of implemented interfaces, then pull out the correct fcn from the i-index.  In his world the i-table is pulled out early and kept pulled out, which avoids the point-by-point lookup of interfaces.  Ahhh…. he’s statically computing the i-table ahead of time during compilation, using some combo of type unification and the closed-universe assumption.  Not applicable to Java.
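In Java terms, the dictionary-passing might look like this (my sketch, not GHC’s actual scheme):

      // The type class as an explicit dictionary ("one v-table per interface").
      interface NumDict<A> {
        A plus ( A x, A y );
        A times( A x, A y );
      }

      class Dicts {
        static final NumDict<Integer> intNum = new NumDict<Integer>() {
          public Integer plus ( Integer x, Integer y ) { return x + y; }  // "plusInt"
          public Integer times( Integer x, Integer y ) { return x * y; }  // "timesInt"
        };

        // square :: Num a => a -> a   -- the constraint becomes a dictionary argument:
        static <A> A square( NumDict<A> num, A x ) { return num.times(x, x); }

        public static void main( String[] args ) {
          System.out.println(square(intNum, 7));  // prints 49
        }
      }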

    Able to handle fcns which themselves take arguments of fcns with any signature, as long as the fcn handles the correct interface.

    Doing type-based dispatch instead of value-based dispatch.  Not the same as Java interfaces…
    In Haskell, can bind a value arg to multiple interfaces all at once.
    In Haskell, existing classes can be made instances of new interfaces (duck typing)

    Haskell has NO subtyping, but uses the interface (well: type class) notion instead.

    Haskell: types are inferenced typically, type annotations occasionally needed
    Java: types are declared explicitly; inference occasionally possible

Haskell does type unification, so things like Java’s ‘equals’ call – which is normally a fcn of (Object,Object)=>Bool – in Haskell, we KNOW ahead of time that the 2 args are of the exact same type.  Hence the implementations of various equals calls do not need to ask the “instanceof” question that all Java equals implementations start with.

     

     


    Making Sense of Large Heaps
    (Nick Mitchell, IBM)

    Where does the memory go in large java apps.

    Example from 2008, Cobol to Java.  In Cobol: 600 bytes; in Java 4492 bytes.
    Blow up from delegation, bulky collections, or too many data fields.

    Size examples: 58M objects, 3Gig ram, 8000+ types.

    Yeti Tool – does smart cluster of datatypes (String w/char[], or HashMap w/HashMapEntry[], HashMapEntry, HashMapEntrySet).

    Yeti also does dominance of sharing; notices single-owner root/tree bits, and then breaks out the shared parts at the leaves.  Also detect e.g., similar kinds of sharing, such as array B[] points to a set of B’s; but also a linked list of Entry’s points to the elements of B[] in turn.  Really lumping together things with the same owner; maximal dominance w/same ownership relationship.  Does nice heap shape summaries, followed by size info. 

    Picture is really alternate sandwich effect, alternating between the Collections you selected, and the user-data at that layer of the ‘sandwich’.  Edges hold the various sizes of things.  They fold up collection ‘backbones’. 

    If we get the data structures right, can easily shrink heaps by 2x to 10x!!!  Huge speedups possible.  Shows examples of people using CHM to hold 1 element (but programmer doesn’t want to think about expected size usage, so does the easy thing and grabs CHM).

     

     


    Scaling CFL-Reachability-Based Points-To Analysis Using Context Sensitive Must-Not-Alias Analysis

Basically computes a must-not-alias approximation; where it holds, do not bother to run a more expensive pts-to analysis.

Some experiments are medium programs (SpecJVM – 7 progs, Dacapo – 4 progs, 8 other progs).  Can compute a pts-to graph that is 2x to 5x smaller.  Compute time for the pts-to analysis is 3x faster on average than prior work (but still in the several-minutes range)… but what do you do with the info?  (this is my standard question when presented with better/faster/new&improved points-to work: now that you have the info how much does it *really* help speed up programs?  especially compared to having strong typing in the first place)

    Nice presentation of a pts-to analysis, but hit right at my sleepy jet-lag afternoon siesta; had trouble staying awake.

     

     


    Intel NePalTM – Design and Implementation of Nested Parallelism for Transactional Memory Systems.  (Intel)

Issue is using locks instead of a XTN; the locks are for the nested parallelism, but you want the whole operation to be atomic.  Azul makes no attempt to use the HTM to span across threads.  Can’t work for us, because the threads need to communicate through shared memory – and that shared memory will blow out the HTM. 

    This is Intel’s compiler-gen’d STM system for C++.  Handles either optimistic (read-friendly) or pessimistic (write-friendly) concurrency; roll-back & retry.

    Sigh – shows basically no speedup over plain locking.

    (and perhaps not the best name….)

     

     


    Debugging Method Names

    What’s a naming bug?  A mismatch between the name and what the code does.

Break names apart (CamelCase in Java), do some grammar abstraction.
    getLastName ==> get – last – name  ==> (verb) (adj) (noun), etc

Then also do semantic analysis of code.  Look for things like: has a loop, has an incoming parm, returns a value (from multiple points), includes some run-time checks in the loop, etc.  Decide it’s a search loop, returning a result or NULL.

    So come up with lots of data points for names & methods.
    Then mine the rules from a large corpus of code.

E.g. a method named “contains-XXX” should NOT return void, because a “contains-XXX” name implies a question with an answer.  In fact, it probably ought to return a boolean.

    Then with rules, apply rules to a program and report violations.
Turns out it’s pretty easy to suggest candidate replacement names.

    Example:  
  public void isCaching(boolean b) { this.caching=b; }
Method isn’t asking a question, it’s setting a value.
Tool suggests the name should be “setCaching”.

    Example:
  public boolean equals( Object o) {
        if( this==o ) return true;
        if( o instanceof Value )
          return equals((Value)o);
        return equals(new Value(o.toString()));
      }
    So not really an ‘equals’ method, because it makes a new ‘Value’ object

    More examples, ends up finding interesting bugs as well.  Hard part is to decide whether or not to change the name or change the code.  Sometimes the name is correct, but the code is ‘wrong’ here – and also wrong elsewhere.  Basically finding a poor factoring of code.

     

     


    Mapping & Recommending API usage Patterns

    API, libraries – often complex, lots of classes & methods; complex; hard to use.

People use: books & docs, forums & newsgroups; code search engines.

Example: we wish to add an item to an Eclipse menu.  Browse docs; find potential call: “appendToGroup”.  Google search finds 151 code snippets (at time of paper submission) and 287 code snippets (at time of talk).  The returned code snippets show at least 2 different API usages. 

‘MAPO’ Tool does search; parses code; clusters snippets based on usage; mines patterns; recommends final patterns.  Tool needs to inline non-3rd-party methods (scattered implementation).

    Examples hard to follow…  I’ve been bit here before repeatedly (complex API & no way to learn how to use it easily).  So very sad that it’s so hard to follow his talk. 

     

     


Supporting Framework Use via Automatically Extracted Concept Implementation Templates

    Another talk about finding & figuring out reuse of existing code.

    Choice between Templates vs Documentation had much less impact on dev time than the task’s concept complexity.

     

     


    I skipped all the refactoring talks.  Exhaustion is setting in, plus I’ve seen one or two of these talks before.

Went to a couple of talks attempting to deal better with constructors.  Mostly it’s issues like publishing objects before construction completes, or in Java calling an overridden v-call from a constructor (which means the sub-class v-call executes before the sub-class constructor can work on the object).  Final fields, read-only objects still being constructed; not-null field properties before you set the value; et al; there’s a bunch of problems that stem from constructors not being instantly-quick – and if they take time, what does the object look like in that in-between state.
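The v-call-from-constructor trap in a dozen lines (my example, not from any of the talks):

      class Base {
        Base() { report(); }          // virtual call from a constructor
        void report() { }
      }
      class Sub extends Base {
        final int x;
        Sub() { x = 42; }
        @Override void report() {
          System.out.println(x);      // prints 0: runs before Sub's constructor body
        }
      }
      // 'new Sub()' prints 0, not 42 - the object is observed before it is "complete".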

    This is a general language issue with constructors.  They are a nice notion, but unless they are instantly quick, you have to live with having objects which are not ‘complete’ or ‘constructed’ yet and hence do not meet the expected invariants for the object.

     

     


    Inlining Security Policies

Want to inline code into the original program; the new program either runs or is truncated (if a security violation happens).  Must show no *new* behaviors from the inlining, also all old behaviors are allowed (except the security ones), etc.  Single-threaded versions of these inlinings all exist and are well documented.

    Multi-threaded is harder: general case isn’t solvable (probably because the monitors themselves become non-deterministic).  But if the monitor is race-free, (and some other easy properties), then you can do it.  Monitor is normally a state-machine.

     

     


    Cliff Click Talks About Azul

My keynote went well enough; lots of Q&A from the senior people in the audience afterwards.  We had to stop the Q&A after running quite a bit over.  I also got a lot of positive comments afterwards, so I think people really enjoyed the talk.  I’ll add that I enjoyed both Simon Jones’s and Dave Ungar’s keynotes also.  This ECOOP was unusual in having 3 “gentleman hackers” give keynotes; all three of us are both very scholarly and very prolific programmers.  Doug Lea added in a later email:

    “Cliff: definitely the best talk I’ve seen you give (which is quite a few). Someday  you’ll have to figure out how you packed in so much technical content yet still fully captured attention of people supposedly without the background to understand most of it.” 

    High praise from Doug Lea, so I musta done something right.   🙂

Trip home is uneventful, except for Paris’s Charles de Gaulle airport being the usual zoo.  I get up at 4:30am and am in a Genova taxi by 5am for the 1/2hr drive to the airport.  Except that there’s no traffic at this hour and the driver thinks he’s Mario Andretti.  We make the trip in about 10 minutes.

So I’m way early into Genova’s airport for a 7:10am flight.  No power connections for my laptop, for-pay wi-fi (not worth it), no shops are open – not even coffee.  No air-partnership; I cannot check in for my USAirways flight from the Air France desk in Genova.  I blog for the next hour and a half.  The flight leaves on-time and the two hours flying are very smooth.  Kudos to Air France.  Then we land at Charles de Gaulle and the madness begins.  We have to walk down the plane stairs and across the tarmac and about 1/4 mile more to the main terminal – but actually it’s the small-plane terminal.  After figuring out I’m not in Terminals 1, 2 OR 3, I get instructions to take bus Number 3 to the automatic train and take it to Terminal 1.  Of course the buses & bus stop are labeled – with LOTS and LOTS of numbers and plenty of French – takes a question of the drivers to figure out which is the #3 bus.

The bus ride is longish (for an airport bus) and uneventful and then I find the train.  Thankfully the train is clearly marked and again a longer ride to Terminal 1 than I expect: Charles de Gaulle is a really spread out airport.  Terminal 1 is a total zoo; I see lines of people snaking out all over filling the large hall, hundreds of people long.  Three times I ask people about which line they are in; none of the lines are marked and all snake back and forth throughout the hall; there’s a steady stream of people traffic cutting through all the lines and the lines branch and rejoin helter-skelter.  Finally I find the ‘entrance’ to the line for flight #771 to Charlotte – just next to the line for Philly which clearly orbits the entire hall.  The Charlotte line is actually fairly short, maybe 20 people long, and after 10 minutes I’m talking to the agent.  No problem checking in and then I head off for gate 55 (cutting through the Philly line again).  Then it’s customs (another 10 minute line), and then a small shopping zone – I figure I’ll come back to it for a snack (no breakfast yet; nothing was open in Genova).  But then I discover there’s yet another line for security.  This one is about 20mins long and no way I’m going back for my snack and then back through security.  I figure it’s like US airports – once past security there will be food and such… WRONG!  It’s just the end of the lines; I’m in a seating area with about 10 gates and absolutely no way to get a drink or snack – or bathroom near as I can tell.  It’s now 10:10am and it’s taken me a solid hour to navigate Charles de Gaulle to find my gate.  As I type, I’m standing by a laptop charging station (no chairs nearby), in the hopes I can do a little something on the laptop during the next 8 hrs of flight.

     

    Later: flight is fairly smooth. They serve some meals; I have another flight through Charlotte, NC to San Francisco.  Home at last!  For about 24 hrs that is, then I’m on vacation in Texas with my entire family.  More plane flights  for me!  Yumm, I can already taste the small salty snacks they are going to serve…

     

     

     

     

     

     

     

    JavaOne Slides

    Slides for my 2009 JavaOne talks:

     

    Alternative Languages on the JVM

     

    This Is Not Your Father’s Von Neumann Machine; How Modern Architecture Impacts Your Java Apps

     

    The Art of (Java) Benchmarking

     

    I’d like to say more on JavaOne, but I’m (re)discovering that if you’re a speaker at JavaOne it’s really hard to get into the other talks & technology being presented.  I had my head full with my 3 talks, making sure they looked good, were relevant, etc.  To be sure, there were plenty of talks I wanted to attend, but I never seemed to find the time.   🙁 

     

    And of course shortly after JavaOne, I went to 2009 ECOOP in Italy for a week, and now I’m on a long deserved vacation.

     

    More blogging soon, on ECOOP at least,

     

     

    Cliff

     


    Comments

     

     

    Hi Cliff

    I saw the slides for your benchmarking talk a few weeks back (I think from the JavaOne presentations site)–one thing that I missed was what you, as longtime JVM engineer, use to benchmark, e.g. what benchmarks you trust and which lead you to believe you have improved the code, are running as fast as/faster than another language, etc. Mostly you covered problems with existing, public benchmarks, but I’m sure you must have some way to verify your own expectations. Maybe you could blog about this at some point.

    Thanks for posting the presentations.

    Regards
    Patrick

    Posted by: Patrick | Jul 21, 2009 9:36:56 AM

     


     

    STM is not 15 years old — the first STM systems appeared in 2005 — off by a factor of 3 :-).

    Posted by: Maurice Herlihy | Jul 21, 2009 6:03:30 PM

     

     

     

    What about this one from 1995?

     

    Nir Shavit and Dan Touitou. Software Transactional Memory. Proceedings of the 14th ACM Symposium on Principles of Distributed Computing, pp.204–213. August 1995.

    Cliff

    Posted by: Cliff Click | Jul 21, 2009 6:33:30 PM

     


     

    Ah, but the 1995 Shavit & Touitou STM was a design only; there was no implementation. It was a PODC (theory) paper, intended to show it was *possible* to implement TM entirely in software (which seems obvious now but wasn’t then). The actual algorithm required predeclaring write sets, and had other issues that did not make it a promising candidate for implementation. AFAIK, the first actual STM implementation was DSTM in 2005.

    Even HTM received little or no attention until after 2000. I recently prepared a graph of citation counts for a talk, and turns out 2000 was the first year the 1993 TM paper broke through the 10 citation per year barrier. (It didn’t break 100 per year until 2006.)

    So TM, both hardware and software, is much less mature than you would think from the standard citations. If we had written the 1993 paper in 2001, not much would have changed.

    Anyway, I am enjoying your slides and reports: carry on!

    Maurice

    Posted by: Maurice Herlihy | Jul 21, 2009 9:34:54 PM


    Good read. Could you elaborate on the following quotes?

    * Fixnums: “Setting final fields cost a memory fence”
    * Lessons: “Language/bytecode mismatch – can’t assume, e.g. Uber GC or Uber Escape Analysis, or subtle final-field semantics”

    Thanks.

    Posted by: Nils | Jul 23, 2009 6:36:02 AM



    * Fixnums: “Setting final fields cost a memory fence” – 
    – Fun corner case of the JMM. *During* construction, final fields are not really final, as they move from zero to some other value. If you publish a half-constructed object globally other threads are allowed to see either the zero or the final value (actually, I think all bets are off). Immediately after construction, remote threads are only allowed to see the final value. This requires st/st ordering on the writer side and ld/ld ordering on the reader side. On the uber-strong X86 memory model, the st/st ordering can be a nop (I think – but Caveat Emptor, I’ve not looked at the X86 JMM issues in a while, might need an SFENCE). And on the reader side, the loads are pointer-dependent and perhaps only on the Alpha would you need a fence there.
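To make that concrete, here's a minimal sketch (class and field names are mine, and it assumes the constructor does not leak “this”): the reader is guaranteed to see 42 *only* because “x” is final.

    class Box {
      final int x;
      Box() { x = 42; }            // during construction, x transitions 0 -> 42
    }
    class Publisher {
      static Box global;           // note: NOT volatile, so publication is racy
      static void writer() { global = new Box(); }
      static void reader() {
        Box b = global;
        if( b != null )
          assert b.x == 42;        // holds only because x is final; a non-final
      }                            // x could legally read 0 here
    }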

     

    * Lessons: “Language/bytecode mismatch – can’t assume, e.g. Uber GC or Uber Escape Analysis, or subtle final-field semantics”
– Mostly pointing out that the endless use of Fixnums instead of primitives, e.g. “ints”, has a serious cost that only goes away with some kind of Escape Analysis. This becomes a language design issue: what's the value of the illusion of infinite-precision integers if all programs slow down by 2x (or more!)? Since math beyond 64 bits is rarely needed, does it make more sense to require programmers to specify when they need more than 64 bits (and speed up all programs by 2x)?
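To see the cost in its simplest form, compare a boxed loop against a primitive one – my example, and the actual gap varies by JVM and by whether Escape Analysis kicks in:

    // Boxed: each trip allocates a fresh Integer (past the small autobox
    // cache), so the loop churns the heap and the GC.
    Integer boxed = 0;
    for( int i = 0; i < 1000000; i++ )
      boxed = boxed + i;           // unbox, add, re-box (allocate)

    // Primitive: plain register arithmetic, no allocation at all.
    int sum = 0;
    for( int i = 0; i < 1000000; i++ )
      sum += i;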

It's a chicken-and-egg type problem. No important programs churn Integers, so nobody tries to optimize them. Do you dive in and try to make an ‘egg’ and hope a JVM ‘chicken’ will appear that Escape-Analysis's away the 90% cost?

    Cliff

    Posted by: Cliff Click | Jul 23, 2009 7:21:15 AM

     


Just a few cents… I am less aware of specific pitfalls that relate to Java, but tend to be more concerned with the more generic ones. Maybe some of this may help…

* First, how many tasks are you running, and is that number greater than or close to the number of cores in the system?
* How many threads execute periodically, and do they re-run on the same core?
* Are you dirtying a large number of pages at once (bunched up), delaying near-term allocations due to needed cleaning of the pages?
* Do your sub-page allocations grow the slab (or other allocator) to the point that you are stealing from the back-end?
* If you have a large number of threads doing sys calls, are they hitting a reader/writer lock and failing to group the readers together to get rid of the default FIFO nature of this lock?
* If you are doing a large number of I/O ops, are they async in nature?
* Are there bottleneck threads that the main threads are waiting on?
* Do your periodic threads change execution times if the same workload is presented to them? Why?
* If you find that one set of structs needs periodic freeing, what happens if you don't free – would your thread be faster? Then maybe a local memory pool would shave time off.
* What is taking the longest…

There was once a saying that if a process was EQUALLY split into 2 threads and one thread went infinitely faster, you would only halve your execution time…

    Posted by: Mitchell Erblich | Aug 4, 2009 2:28:43 AM


I'm not sure which slides you are referring to! 🙂

* First, how many tasks are you running, and is that number greater than or close to the number of cores in the system?
— We're finding that bottlenecks other than CPU tend to dominate on most jobs; for Azul this would require running >800 threads.

* How many threads execute periodically, and do they re-run on the same core?
    — This kind of optimization is very sensitive to the cache size and thread/job size and re-run duty cycle. For smaller caches & larger jobs, there’s nothing to be gained here. For well understood and tightly controlled jobs there’s a fair bit to be gained.

* Are you dirtying a large number of pages at once (bunched up), delaying near-term allocations due to needed cleaning of the pages?
* Do your sub-page allocations grow the slab (or other allocator) to the point that you are stealing from the back-end?
* If you find that one set of structs needs periodic freeing, what happens if you don't free – would your thread be faster? Then maybe a local memory pool would shave time off.
    — GC removes all these concerns.

* If you have a large number of threads doing sys calls, are they hitting a reader/writer lock and failing to group the readers together to get rid of the default FIFO nature of this lock?
    — No: Java’s default r/w lock bunches readers just fine.

* If you are doing a large number of I/O ops, are they async in nature?
    — You can get nearly the same effect as async i/o by running lots of I/O ops on lots of different threads. This is a common programming idiom in Java.
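The idiom, as a quick sketch (the Callables stand in for any blocking call – socket read, JDBC, whatever):

    import java.util.*;
    import java.util.concurrent.*;

    class BlockingFanOut {
      static final ExecutorService POOL = Executors.newFixedThreadPool(64);
      // Each blocking op gets its own pooled thread; the OS overlaps the waits,
      // which is most of what async i/o would buy you.
      static List<Future<byte[]>> fanOut( List<Callable<byte[]>> ops ) {
        List<Future<byte[]>> futures = new ArrayList<Future<byte[]>>();
        for( Callable<byte[]> op : ops )
          futures.add(POOL.submit(op));
        return futures;            // callers block in get(), not in the i/o itself
      }
    }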

* Are there bottleneck threads that the main threads are waiting on?
* Do your periodic threads change execution times if the same workload is presented to them? Why?
* What is taking the longest…
* There was once a saying that if a process was EQUALLY split into 2 threads and one thread went infinitely faster, you would only halve your execution time…
    — I think you are referring to Amdahl’s Law.
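For reference, Amdahl's Law with that 50/50 split plugged in (my notation):

    S = \frac{1}{(1-p) + p/s}
    \qquad p = \tfrac{1}{2},\; s \to \infty \;\Rightarrow\; S = \frac{1}{1/2} = 2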

    Cliff

    Posted by: Cliff Click | Aug 4, 2009 8:48:09 AM


    Cliff-

    I just watched an online video of the “This Is Not Your Father’s Von Neumann Machine” talk you gave at the Sun 2009 JVM Language Summit (http://www.infoq.com/presentations/click-crash-course-modern-hardware ). Excellent talk. Thanks for sharing it. The slides appear to be the same as the JavaOne talk of the same name linked above, which is why I’m commenting here, even though I didn’t see the JavaOne talk.

    I have a question about the “real chips reorder stuff” example you gave. The slides don’t list which architecture(s) the example applies to, but in the video I think you referred to x86. I’m trying to reconcile this with my understanding of the x86 memory model, which I thought would prevent the kind of reordering you describe, at least under normal circumstances (i.e. using write-back memory and no fancy instructions).

    I went back to the “Intel 64 and IA-32 Architectures Software Developer’s Manual Volume 3A” ( http://www.intel.com/Assets/PDF/manual/253668.pdf ) and found section 8.2.3.2, titled “Neither Loads Nor Stores Are Reordered with Like Operations” (page 323). This example, at least as I read it, is a simplified version of the one presented in your talk, but it reaches the opposite conclusion: that you are guaranteed to never get into a state where the second write done by CPU #0 is seen by CPU #1 but not the first.

    Am I misreading the Intel document, or does your example only apply to other chips with weaker consistency guarantees, like the ia64?

    Thanks!

    Posted by: Dave Clausen | Jan 15, 2010 4:34:06 PM


    X86 has a very strong hardware memory model, but it still allows younger loads to bypass older stores. You get other racey behaviors than what I demo’d; I stuck with an easy-to-understand example. That particular re-ordering in the slides is possible on IA64, Alpha, & Azul gear, plus a bunch of higher-end DSP-like things.
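The classic litmus test for that store-load reordering, sketched in Java (field names mine; both fields deliberately not volatile):

    class StoreLoad {
      static int x, y;             // NOT volatile
      static int r1, r2;
      static void thread1() { x = 1; r1 = y; }
      static void thread2() { y = 1; r2 = x; }
      // Run both concurrently from x==y==0: the surprising-but-legal X86
      // outcome is r1==0 && r2==0, because each thread's load can complete
      // before its own earlier store becomes visible to the other thread.
    }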

    Keeping the strong X86 memory model costs Intel something; if they dropped the strong ordering they totally could make a faster X86 – but people would then have to deal with the O-O-O’ness of it.

     

    Cliff


    IFIP WG2.4 Trip Report

    IFIP WG2.4 remains one of my favorite mini-conferences to attend.  The group is eclectic & varied; the discussion vigorous. Everybody is willing to interrupt the speaker & ask questions.  The talks are usually lively and pointed, and this time was no exception.


    The conference was held in Fort Worden, a historic Big-Gun (planes rapidly obsoleted the guns) fort of the Pacific Northwest, defending Admiralty Inlet and our shores from the marauding Russians … or Japanese… or somebody, surely.  Nobody ever attacked that way, so clearly the fort was a big success.

Meanwhile, the fort has been turned into a park and it's beautiful.  The whitewashed buildings with green trim remind me of some mythical 50's-era America that never was, the kind that shows up in the backdrop of various movies or maybe “Leave It To Beaver“.  We stayed in the officers quarters and held our meeting in the old mess hall.  The officers lived quite nicely; the quarters are actually old duplexes in great shape, with high ceilings and crown molding and carved brass fittings everywhere – each clearly larger than my house, and with a vast green lawn to boot.  In San Jose such a place would be upwards of $2m…

Fort Worden overlooks Admiralty Inlet from some high bluffs (classic fort-like location) and the surroundings are all gorgeous.  The weather varied from overcast & chilly to clear & crisp (alas, overcast and chilly was the mode on our all-day “whale” watching expedition.  We did lots of watching but saw no whales of any kind.  And we bounced over heavy seas and got sprayed with frigid water most of the day.)

    Since the Hood Canal bridge was out I got treated to a 3 hr drive the long way around the peninsula, but somebody else drove and the scenery was worth looking at.  The airplane portion of the trip was easy.

    As usual, my reports are (very) brief, stream-of-consciousness writing.  I often skip around, or brutally summarize (or brutally criticize – but this group is above the usual conference par, so less of that is needed).  I skip talks; I sleep through ones I’m not interested in, etc, etc.  Caveat Emptor.

    Synchronization Talks
    Atomic Sets – another nice concurrency alternative
    Fork/Join for .Net
    occam / CSP – Uber lite-weight user-thread scheduling

    Java Analysis Talks
    SnuggleBug – Better bug reporting for Java
    SATIrE – A complete deep analysis toolkit for Java
    PointsTo – A comparison of various points-to algorithms for Java
    analyzing JavaScript – Has a type-lattice as complex as C2’s internal lattice
    Performance variations in JVMs – Noticed the wild variations in run-to-run performance of JVMs

    Misc
    Cheap Cores discussion
    Compiler-directed FPGA variations
    Clone Wars – huge study on cut-n-paste code in large projects
    Business Process Modeling

    Security
    PDF Attack – “Adobe Reader is the new Internet Explorer”
    The Britney Spears Problem
    Squeezing your Trusted Base
    Who you gonna call?

    Evolving Systems
    Swarms in Space
    Finding emergent behavior


    Frank Tip, Type-Based Data-Centric Synchronization

    Locks are hard.  Auto-inference of correct sync in O-O programs.
    Even with no data-races still have atomicity races.

    Instead of code-centric locking paradigms, do data-centric locking.
    Tied to classes with fields, leverage O-O structures

Group a subset of fields in a class into an *atomic_set*.
    Then find units_of_work for that atomic_set.

    Add language features to Java:
      atomicset account;
      atomic(account) int checking;
      …
      atomicset logging;
      atomic(logging) Log log;
      atomic(logging) int logCount;

    So add ‘atomicset’ keyword, ‘atomic’ keyword and strongly type variables with atomic_sets.  Units_of_work expand to reach the syntactic borders anytime an atomic variable is mentioned.  Can expand unit_of_work by hand also.

    Can at runtime cause atomic_sets to alias – to union together.  Used on ownership types (e.g. HashMap uses HashMap$Entry, so units_of_work on HashMap or HashMap$Entry become units_of_work for both).

    Atomic_sets can be unioned; e.g. for LinkList the entire backbone of the list is one atomicset.  Lots of discussion – points out the issues with hacking all the Code Out There.

    Has proven strong serializable properties on atomicsets.  Proven various kinds of soundness, including *internal* objects are only accessed by their owner, etc.

    All concurrency annotations appear in the libraries but not in the client code – shows a nice concurrent example with 2 threads manipulating the same linked list.   

Has a working version using Eclipse & taking annotations as Java comments.  Using j.u.c.Locks; can do lock aliasing by simply sharing j.u.c Locks.  Applied this to a bunch of Java Collections; 63 types & 10KLOC of code.  Needs about 1 line of annotation per 21 lines of code for the libraries (and none in clients).

About 15-30% slower than using plain sync wrappers on a single-threaded testcase; probably because the “synchronized” keyword is very optimized.  Performance is similar or slightly faster when running with 10 threads & high contention.


    Daan Leijen – The Task Parallel Library for .Net

    – A concurrent parallel library for .Net (think dl’s Fork/Join).

    Infrastructure: locks/threads/STM, async agents (responsiveness), task parallelism (performance)

    Gives a demo of a parallel matrix-multiply w/C#.  Very ugly code.

    So instead, write a nice combinator…  

    Standard task-stealing work-queue approach.  (private set of tasks that don’t need CAS, public set of steal-able tasks that require CAS).

    Points out that want fine-grained parallelism, so can do work-stealing and do auto-load-balancing.  

    — Effectful state.  Statically typed, with typed side-effects.  Type signature includes all side-effects (yes!)

      int fib(int n) { return n<=1 ? 1 : fib(n-1)+fib(n-2); }

    But this program

  int fib(int n) { return n<=1 ? (print "hi", 1) : fib(n-1)+fib(n-2); }

Does not type-check in Haskell – you need the IO monad.  And now the function returns 2 results: the IO “hi” and the int.

    So want some syntax to allow “auto-lifting” into a side-effect world – same function syntax, but auto-includes the IO monad as a result.

    Then can prove that some stateful computation does not leave the function – that the state is entirely encapsulated & unobservable outside.  So can do type’ing which allows some state-ful work internally, but it doesn’t escape the function “box”.

  int fib(int n) { 
    ref f1 = 1; // side effects, but entirely internal to the function
    ref f2 = 1;
    int sum = 1;
    while( n-- > 0 ) { sum = *f1 + *f2; *f1 = *f2; *f2 = sum; }
    return sum;
  }


    Peter Welch, Multicore Scheduling for Lightweight Communicating Processes

    Google: kroc install

    Carl Ritson did all the work…

    Cliff Notes: Insanely cheap process scheduling, plus “free” load balancing, plus “free” communicating-process affinity.  Very similar to work-stealing in GC marking, but with “threads”.  OS guys take note!

    Scheduler for OCCAM/PI.  Process-oriented computation (millions), very fine grained processors.  Message passing over channels, plus barriers.  Uses CSP and pi-calculus for reasoning.  Dynamic creation & deletion of processes & channels.

    Goal: automatic exploitation of shared-memory multis.  “Nothing” for the programmer to do: the program exposes tons of parallelism.  “Nothing” for the compiler to do – it’s all runtime scheduling.  

    Goals: scalable performance from 10’s of processes to millions; co-locate communicating processes – maximize cache, minimize atomics (lock-free & wait-free where possible).  Heuristics to balance spreading processes around on spare CPUs and co-locating the communicating ones.  

    Usual sales job for CSP… which still looks good.  

A blocked process requires only *8* words of state (no stack for you!). Scheduling is cooperative, not preemptive.  No complex processor state, no register dumps.  Up to 100M process contexts per second.  Typical context switch is 10ns to 100ns.  Performance heavily depends on getting good cache behavior, so processes are batched.  Stacks are spaghetti-stacks (ah – that's why no stack saving is needed…).  These 8 words include the process run-queues.

    Batching: sets of e.g. 48 processes are run on a single core; that core is the only core touching this queue & round-robins them until the entire batch times out.  So no cache-misses.  A Batch is a Process, plus a run-queue.

    A Core is a Batch, plus a queue of Batches.  Again, no other cores touch this list either.  

    A Core with no Batches does work-stealing.  There’s some proposed batches available for work-stealing from each Core.  Cores need some atomic ops to manipulate the queue of Batches for these steal-able Batches.  Careful design to ensure all the interesting structures fit in a single cache line.

    So until you need to enqueue, dequeue or steal a Batch – it’s all done without any contention for process scheduling.  These Batch-switching operations require atomics but are otherwise fairly rare.  

    Key bit missing: scaling up beyond 8 or 16 cores with some kind of log structure.  But otherwise looks really good for very low cost context switch & work steal.  Lots of questions about fairness.

Batches are formed & split by the runtime (no compiler or programmer involvement).  The aim is to keep processes that are communicating on the same core (so it's all cache-hit communication), but spread the work across cores.  Initial batches are kept small to encourage spreading.  Channel ops require atomics, but typically only 1 (sometimes 2).  Choosing amongst many channels might require up to 4 atomic ops.  Each time communication happens on a channel, the sleeping process is pulled into the run-queue Batch of the awake process – thus naturally grouping communicating processes.  But as batches get large, need to split.  

Periodically can observe that if all the processes in a Batch are communicating with each other in some complex cycle – then the spare run queue empties from time to time (with one runnable process).  So as long as the run queue empties now & then, the Batch should stay together.  But if the run queue never empties, then split-the-Batch.  Split off a single process into a new Batch – which will drag its connected component together into the new Batch.  But eventually, if the original Batch had 2 separate strongly connected graphs, they will now be split – and can be stolen onto other CPUs.  

    So this kernel is about 100x less overhead than the same design done on a JVM.
    🙂


    Satish Chandra, Symbolic Analysis for Java

    “Snugglebug”

    Communication between the programmer & the tool tends to stop early: “Dear Programmer: you have a null-ptr bug here”…

Going to the next step: “Dear Mr. Programmer: try running your method with *these* values and you'll see the bug…”.  Programmer: now that I think about it, these values will never arise.

We just forced the programmer to define the legitimate range of values.

More: a bug report is more believable if a concrete input is given.  
    API hardening: gives the preconditions under which this library will throw an exception
    Unit-test generation: inputs needed to make a specific assert fail.

Works by weakest-precondition.  Given an assert in the program, work backwards computing the weakest precondition at each point until you reach the top of the method.  If this chain works, then hand the WP to a decision procedure; sometimes the decision is “I don't know”, but many times you get an explicit counter-example.
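The assignment rule doing all the work, for reference (my notation, not the talk's):

    wp(x := e,\; P) \;=\; P[x \mapsto e]
    % e.g., pushing "x > 0" backwards through "x := y + 1":
    wp(x := y+1,\; x > 0) \;=\; (y + 1 > 0)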

    Good match for the counter-example problem.
    – Yes loops are a problem, but don’t really need the *weakest* precondition.

Most Java features can be modeled:
– heap r/w, arrays, allocation, subtyping, virtual dispatch
– exceptions: many bugs are here, so we modeled exception handling (try/catch) very closely.

Has a path-explosion problem.  For heap updates, even a single path effectively includes branches, depending on aliasing.

Has to do strong interprocedural analysis.  Lazily grow the call-graph consistent with known symbolic constraints.  First look for a conflict on a call-free path, then on call-paths but without needing to look into the call.  Finally, peek into the call if I must – but use the symbolic info to reduce the set of possible call targets.

    Generally, no fixed-point.  Cannot expect programs to give loop invariants.
    Work-list management: avoid various common pathologies.
    Goal is to find *a* contradiction, not to explore all options.


    SATIrE

    http://www.complang.tuwien.ac.at/satire/

    Shape analysis, points-to, flow-sensitive, context-sensitive, annotations, can read them in & out.  Applications to e.g., slicing.

    Read C/C++ – large complex tool chain.  Reads in program x-forms in either a functional programming language or in Prolog.  Can combine the tools in any order at this point (very well integrated).  Output is C/C++ w/annotations at each step (or internal “ROSE” ASTrees, etc).  

Has interval analysis for integers, shape analysis, loop (bounds) analysis, points-to, etc.  Gives an example of reaching-defs in the functional programming language.  It's a domain-specific language for writing compiler analyses.  Can specify the strength of the analyses in many cases: e.g. the depth of the allowed call-chain for context-sensitive (set to zero for context-insensitive).

    Shape analysis gets may/must alias pairs, etc – and all results put back into the C program as comments/annotations.

    Basically describes a large complex tool chain for doing pointer analysis on C & C++ programs.  The tool chain looks very complete & robust.


    Welf Lowe – Comparing, Combining & Improving Points-to Analysis

    …skips explaining what points-to is.

    Clients: compilers, JIT compilers, refactoring tools, software comprehension.
    Different clients have different needs: speed, precision, security.
Granularity & conservatism also matter.  Clients might want, e.g., dead code elimination, or removing polymorphic calls, or renaming methods & calls.  Other people want object granularity – escape info & side-effects (i.e., compilers and JITs).  Also static-vs-dynamic analysis.  Dynamic analyses typically run the program and produce optimistic results.

Doing careful experiments of 2 analyses w.r.t. accuracy.  Hard part: can't get a perfect “gold standard” for real programs.  Hard even for a human to inspect a proposed “gold standard” and declare it “gold” or not.  Some special cases are easier: conservative analysis (no false positives) & optimistic analysis (no false negatives).  Can under- and over-estimate the gold standard, but this still messes with the results (messes with detecting which of the 2 analyses is better w.r.t. accuracy).

E.g., ask the question: is it worth increasing k > 1 (i.e., becoming context-sensitive vs staying insensitive)?  Checking size of the computed call-graph – a smaller graph is more precise.  Very small increase in accuracy.  Sometimes made things worse – because the original analysis was optimistic sometimes.

    Open question: what is the use of an Uber Points-To – it’s definitely beyond the point of diminishing returns for JIT compilers.


    Anders Moller – Type Analysis for JavaScript

    Not much tool support for catching errors during programming.
    No nice IDE support.  Many bugs are browser specific, but we focus on language bugs.

Most JS programs are quite small, so throw a heavy-weight analysis at them.

    Object based, dynamic typed, proto-type inheritance, 1st-class functions, coercions, dynamic code construction, exceptions, plus 161 built-in functions.  Types include null & undefined, primitive types, some properties “ReadOnly” and “DontDelete”.

    Broken things: tracking down an “undefined” and where it came from, reading an absent variable or absent property, etc.

    So do a dataflow analysis over the program, including a lattice, transfer functions, etc.  Handle interprocedural stuff.  Tightly interleaved control flow & data – so need a complex lattice.  

    Lattice is roughly as complex as C2’s lattice, with more stuff on the allocation sites and less on the class hierarchy.  Also then model all the abstract machine state (C2 does this as well).  Full context info (total inlining at analysis time except for recursion).

Cliff Notes: Nice idea: for each allocation site, model both the *most recent object* from that allocation site, and also model *all the rest* from that site.  You can do strong-update on the singleton most-recent object, but weak update on all the rest.


    Rhodes Brown – Measuring the Performance of Adaptive Virtual Machines

    Noticing the badness of the “compile plan” – 1 run in 10 had a >10% performance reduction.

    JIKES – always compiled, no interpreter.  On demand at 1st invoke.  Focused recompilation, lots of GC variants.

    Points out the places where you get variations – stack-profiles for estimating the call-graph, basic-block counters, etc.

    Causes of variation?
    So far: GC & inlining
    Concerned that i-cache placement is a major issue of instability.

    *No* correlation between total amount of compile-time spent vs score.  You must compile the hot stuff, but apparently tons of not-so-hot code is getting compiled and it doesn’t help the score any.


    Kurt William – lots of cores for cheap

    General discussion round, not really a “talk”.

    Expect >32 core/die in 5 yr, >128 core/die in 10 yrs.
    What’s “New” and what can we expect in 5 years?

    Theory: contention is that there’s nothing new here since the 70’s & 80’s (concerning concurrency).  (general agreement on incremental progress but not breakthrough).

    Systems: Lots of libraries (TBB, pthreads), lots of platforms (webservers), all OS’s support multi-core.  JVMs are New in this space.  Might see OS-support for user-level thread scheduling.

    Languages: Support been there, but it’s locks & threads – no good.  Support for TM is coming, but will it help?  Then there’s CSP…

Applications: Not much new here… more games, more interactivity, more user interface.  Old stuff: lots of HPC, speech, image.  Matching: easy parallelism; lots of interesting apps here.


    Uwe Kastens, Compiler-Driven Dynamic Reconfiguration of Arch Variants

What is “reconfigurable”?  “Usual” approach – HW guys compile to a netlist, do some mapping, place & route, make an FPGA/silicon, while the SW guys write some assembler for the FPGA – then at the end they try to run the SW on the HW.  Goal is to run some complex function faster on an FPGA than in normal ops.

    Our approach: read source code, look at program structure, find hot code & hot loops; compiler knows how to switch the hardware between variants; then it compiles the code for a particular FPGA variant for each loop, and reloads the FPGA with a new function between loops.  But need to limit the FPGA to certain variants so don’t actually need to reload the whole FPGA.  

Fixed small set of architecture variants.  Keeps the cost to reconfigure the FPGA very low.  Compiler can choose the variants.  Example: FPGA runs either a small CPU or a small SIMD CPU or a MIMD CPU.  Or reconfigure the pipeline structure, or the default register bank access patterns.  In the different configurations can use more or less power for problems that are more or less parallel, e.g., in SIMD mode have all ALUs active but turn off all but 1 decoder.

    Pick variants & design space to those known to be efficiently solved by a compiler – gets a fast compile time & good results there (use a human elsewhere).

Can map any arch register to any physical register – so can e.g. have CPUs share registers, or do unbalanced register allocation – if some CPU needs more registers in its portion of code than others.  Plays some register-allocation affinity games, to avoid having to reconfigure register layouts between blocks of code.  

    Nicely flexible overall design.  Instructions name arch registers & ops, but also can re-map real registers & ops periodically.  So use targeted registers & ops for a particular chunk of code, then reconfigure when the next chunk of code is different enough from the last chunk- and the cost to reconfigure is lower than running with a less-well-targeted registers & instructions.

Ugh, can't read the data charts – has some 4 embedded programs.  Seeing 30% reduction in execution time AND power cost by reconfiguring.  Code size is a little larger, because of either poor parallelization(?) or the reconfiguration overhead.


    Jim – Clone Detection

    Exact clone’d code, white-space only changes, near-miss clones.
    Bug in 1 deserves inspection in all the rest.

Lots of cloning studies out there – especially in-depth ones of the Linux file system.  But no wide-range-of-systems studies.  We did a bunch of open-source systems, C, Java, C#.  OS, Web apps, DBs, etc.  Used their own clone-detection, a combo of AST-detection and text-based detection.

Text-based comparisons are sensitive to formatting changes.  But AST parsing is expensive (requires tree compares) and does not find “near miss” clones.  Ended up doing “standard pretty-printing” – which uses the AST to normalize the text.  Then do text-based comparisons.  Also handle small diffs in the middle of clones.

    Did 24 systems, 10 C, 7 Java, 7 C#.  Linux 2.6 kernel, all of j2sdk-swing, db4.  Varied from 4KLOC to 6MLOC.

Java methods have more cloning than C methods – 15%-25% (depending on the diff threshold) of Java methods are clones; C varies from 3% to 13%.  C# is more like Java, until the methods allow more diffs – and then there is a lot more cloning.  Swing is hugely clone-ish, >70%.

C systems – Linux has the same cloning behavior as postgres & bison & wget & httpd.  Java gains no more clones as you allow more diffs, as compared to C – I suspect good Java refactoring tools detect & remove clones earlier.

    C# shows a LOT more clones as more diff’s are allowed.  Maybe cut-n-paste is a coding style?  C#’s clones are larger methods as well.

In something like 10-20% of all Java files, 50% of the functions are clones.  In C# it's more like 20-35% of all files.

    Files with 100% cloned code: 15% of C# files are totally cloned.  For Java whole file cloning is fairly low.

    Localization of cloned pairs: are they in the same file?  same directory?  different directories?


    Thomas Gschwind – Business Process Modeling Accelerators

    Business Process Modeling, patterns, x-forms, refactorings
    Process Structure Tree – 

– A process is a large unstructured graph.  Visio-style graphical editing is error-prone for large process models.

    But can take the structure graph and build a tree from it.
    (The graph is all fork/join parallelism).
    A small part of the tree requires state-space exploration.

    After the tree is correct, can apply patterns & Xforms to the tree.

    Essentially Petri Net edit’ing on a structured graph/tree using Eclipse.

    100’s of users inside of IBM; generated business models are much higher quality, make them 2x faster.


    Philip Levy, The New Age of Software Attacks

    Why? 

    • Not enuf to do?
    • Crime is Big Business
    • Usable exploit: can be sold for $75K to $100K
    • So called “security researchers”.  Some are respectable, most are going after the $100K payoff.


“Visual Virus Studio” – actually exists.  PDF is a common vector for *targeted* attacks.  “Adobe Reader is the new Internet Explorer”.

    Looked at all the fixed vulnerabilities in Adobe Reader

    • Need to do a better job teaching students on how to write secure software
    • Stop writing in assembly language
  • ANY unmanaged code, including C/C++, should be banned

    • Languages need to be upgraded to be more secure
      • New constructs & type systems

    • More sophisticated testing tools & methodologies
    • Better analysis tools for legacy systems


    How big is Adobe Reader?

    • 100K FILES
    • 42MLOC, no comments or blank lines: 25MLOC
• Estimated 75 man-years of development per year over its lifetime


    Problems are clustered – e.g. one module has had 25% of all security problems.
    Top 6 modules have 80% of security problems.

    Break down problems another way:

• array index OOB – 32%
• unknown – 27%
• integer overflow – 14%
• range check missing – 8%
• capability check missing – 4%
• out of memory – 3%


    How found:

    • Fuzzing – 42%  (creating illegal inputs and throwing them at the program)
    • Code Review – 35%
    • External report – 12%


    Some examples:

• Asserting against null – but asserts don't run in the release build, and null is possible in release builds.
• Failed to check for error returns, sometimes not even capturing the return value
• Casting “attacker controlled values” (values from the PDF file) to a valid enum
• Or using an attacker-controlled value for loop & array bounds
• Sometimes a complex pre-condition needs to be checked first
• Some of the fixes are still incorrect


    A Quick List:

    • Badly designed lib funcs (strcat, sprintf)
      • Microsoft’s banned API list is 4 pages

    • AIOOB
    • Calling thru func ptrs
    • Integer over/underflow.  Many protocols send N followed by N bytes.  Send a really big number that overflows (after scaling for malloc) into a small number – then malloc succeeds, but the data overwrites the buffer.
    • Pointer math
    • Uninitialized vars
    • Dangling ref pointer

    Basically: it’s BASIC STUFF that goes bad, not fancy stuff.

    All memory integrity or soundness type stuff.
    But it goes beyond AIOOB – 
e.g. a name string like “Robert'); DROP TABLE Students;--”
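The standard Java-side fix, as a quick JDBC sketch (“conn” and “name” assumed in scope): keep the attacker's string out of the statement structure entirely.

    // import java.sql.*;  "conn" is an open Connection.
    // Concatenation lets the data be parsed as SQL:
    //   stmt.execute("INSERT INTO Students (name) VALUES ('" + name + "')");
    // A parameterized query sends the name strictly as data:
    PreparedStatement ps =
        conn.prepareStatement("INSERT INTO Students (name) VALUES (?)");
    ps.setString(1, name);
    ps.executeUpdate();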


    Jeff Fischer, Solving the “Britney Spears Problem”

Users are assigned roles; roles are assigned privileges. 
    Benefit: users can come & go without changing policies.

    “Patients” can view medical records, but “doctors” can change them.

    Can add annotations to Java that specify roles.

    Problem: lack of expressiveness; solution: logging.  Violation happened (people saw Britney Spears’ data), but the logging allowed catching the violators.  Another solution: manual checks.  But no compiler support, so missed checks.

    Problem: Lack of explicit access policies.
    Really: need a way to “type” data (and hence methods using the data) with some kind of security/access policy.  More problems: often checks are hoisted to entry point for efficiency, and then violate policy later.  Relying on manually-inserted runtime checking.

    Our stuff: parameterize classes by some kind of index value defining Role. 
    Each method includes an explicit policy (statically checked).
    Annotation processor can either statically type-check the policy, or insert runtime checks.


    Jens Knoop – Squeeze your Trusted Annotation Base

    “Interesting program properties are undecidable!”

    Compiler optimization – not so worried: just lose some optimality.  Results still usable; but if we have better results we can produce better code.

Worst-case execution time analysis for hard real-time: Bang.  You're dead.
    WCET bounds must be safe (soundness), and wish to be tight (optimal or useful).
    E.g., a bound of 1hr for a mpeg3 audio-player interrupt is safe, but useless.
    Need, e.g., all loop bounds.  
    Need user annotations.
    User annotations are untrustworthy, but must be trusted.

    Squeeze the trusted annotation base – reduce user chance of screwing up.

    Try to replace trust by proof – as a side-effect generally improving optimality.  Then take advantage of trade-off between power & performance of analysis  algorithms.

Use model-checking with care.  If something can't be bounded automatically, ask the user for some bound – then prove it.  But also assume the user's bound isn't tight.  Try to prove a tighter bound with the model-checker using binary search.  Indeed – can often do away with user assistance entirely; just guess and binary-search to tighten it.
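A sketch of that guess-and-tighten loop (my code; “provable” stands in for the model-checker oracle “can you prove the loop runs at most k times?”, and the starting “hi” is some proven-safe bound such as the user's annotation):

    interface BoundOracle { boolean provable( int k ); }

    static int tightestBound( int hi, BoundOracle oracle ) {
      int lo = 0;                  // invariant: 'hi' is always a proven bound
      while( lo < hi ) {
        int mid = (lo + hi) / 2;
        if( oracle.provable(mid) ) hi = mid;  // tighter bound also proven: keep it
        else                       lo = mid+1;
      }
      return hi;
    }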

    Summary: can often completely empty the trusted annotation base, usually shrink it; often improve WCET bounds.

    Start with the cheapest techniques (constant prop), move up to interval analysis, then symbolic bounds & model checkers…


    Joe Newcomer – Who do you trust to run code on your computer?

    (Joe is a Windows attack expert)

    Attack Vectors – autoexec.bat, .cshrc & friends, autoplay, device-driver installs (Sony), mac resource attacks

    Engineered attacks: phishing attacks, downloads, activeX, client-side scripting

    Lots of “security” designs & codes are done by junior programmers w/out adequate oversight.  Pervasive attitude: “software is secure – as designed, as implemented, as maintained”.  Small+fast is all that matters…  not…

    Rant against all modern OS’s, talking about #of security patches in Mac, Linux, unix, Windows…

    Client-side scripting is vulnerable. Flash, Office, Adobe Reader, etc.

    What are we teaching people about security?  How about non-CS types? 
    Graphics designers, website designers…


    Stefan Jahnichen, Swarm Systems in Space

    Build very small satellites, swarms of them to watch the Earth.  They will spread out after launch to watch e.g. weather, bush fires, etc.  Takes about 2 days after launching to get a communication link.

    Satellites have good earth coverage; orbit at 500 to 600km, orbit earth every 3 hrs or so.  Wide range of sensors amongst different satellites.

    Have a Real Swarm of cameras; map a Virtual Swarm onto that; map mission goals on the V.S.  

    Optimized synthesis of consensus algorithms.

    Special OS – distributed OS basically.  Used to maintain swarm integrity (keep satellites together).  


    Peter Welch, Engineering Emergence…

Thesis: in the future, systems will be too complex to design *explicitly*.

    Instead engineer systems to have the desired properties “implicitly”.

    “mob programming”, ant mounds, etc…. simple rules of behavior leading to complex behaviors that emerge.  Emergent Behavior

Mechanism Design (game theory)

    • Rational actors have local private info
    • Emergent: optimal allocation of scarce resources
    • Optimal decisions rely on truth revelation


    Swarming behavior (flocks, etc)

    • local actors, local behaviors
    • Emergent “swarm” behavior
    • UAV swarms & autonomous robots


    Social communication (gossip, epidemic algorithms)

    • large, ad hoc networks
    • emergent: min power to achieve eventual consistency
    • low power, low reliability sensors & data propagation


    Design for emergent behavior

• occam process per “location”; a 1024×1024 grid has 1 mil processes
  • each location has a comm channel to its 4 (or 8) neighbors

    • occam process per “mobile agent”
      • agents register with local process (or up to 4 if it’s straddling a local region) & local process tells the agent what’s nearby.

    No idea how to design high-level behavior from low-level rules, but busy experimenting and looking for some cases.

    Cliff


DaCapo Trip Report

    http://www.cs.tufts.edu/research/dacapo

    DaCapo is a small (but growing) research group focused on GC & Memory Management issues.  They meet about every 9 months, and the meeting bounces around depending on who is hosting it.  People mostly present work-in-progress, and the audience is up front with their critiques… often interrupting the speakers or getting sidetracked into loud off-topic conversations, i.e., it’s a fun boisterous crowd.

    DaCapo is in the Boston area this year, held at Tufts University’s Medford campus (everyone’s heard of Harvard and MIT, but how about the Berklee College of Architecture, Fisher College, Boston University, Suffolk University and dozens and dozens more? – Boston has more colleges and universities per square mile than any other place I know).  As usual for me traveling to Boston, I skipped the car and just used public transportation – the T works marvelously well (as compared to e.g. VTA; crossing the entire town from Logan Airport stuck way out in the water to Tufts some 8 miles away took 1 subway transfer and 1/2hr, while crossing San Jose on the VTA takes something like 2 hrs). 

    Anyways the weather was supposed to be highs of 65 and rainy with lows in the 40’s – so I packed my parka.  Turned out the weather report was a little bit off, with highs in the upper 80’s and sunny.  Man did I roast.  But I got out and enjoyed the sun – we (the entire DaCapo group of 40 or 50 people) walked about a mile to lunch each day, plus I walked to and from my hotel & the T stop (also about a mile).  Enough ruminating, on to the talks!

    As usual, I pay attention to some talks and chat w/neighbors through others.  I brutally summarize.  I’m opinionated & narrow focused… so Caveat Emptor.

    Change Tracking

    What changed?  Who did it?  When?  Points out the flaws of using ‘diff’.

    Logical-Structure diff.  Auto-magically spotted systematic change patterns.
    It’s a better code-review diff.

    Can report diff’s as reports like “for all XXX, replace XXX with YYY except for ZZZ_XXX_ZZZ”, etc.  Vastly better error reporting!!!  Vastly better diff reporting!!!

    Cliff Notes: We must get this gizmo and check it out!!!!

    Paper report: http://users.ece.utexas.edu/~miryung/Publications/icse09-LSdiff.KimNotkin.pdf
    Sample output: http://users.ece.utexas.edu/~miryung/LSDiff/carol429-430.htm

    Inferred Call Path Profiling

    Very low cost path profiling.  Hardware counters have very little cost, but provide little context.  Software is reversed: high context & high cost.
    Cliff Notes: Compare this to call/ret RPC hashing.
    Cliff Notes: Bond & McKinely does something similar.

    Instead, look at number of bytes from main() to leaf call on stack.  Basically PC & SP (offset from stack base) provides all the context we need.  Dynamically detect unique PC & SP (they did it in a training run) pairs, and reverse them to get a call-stack.

    Lots of duplicated stack heights, how to tell them apart?  On SpecInt unique about 2/3rds of time (2/3rds of PC/SP pairs map to a single call stack).  But huge spread; really sucks bad on some benchmarks.

    If have a list of entire call-graph, then can remove ambiguity by growing some frames.  But requires some global analysis to know which frames to grow in order to be unique.  “Solved” problem with a simple greedy search.  Compute all call-stacks over whole program run, pick a conflict, grow some frame, re-compute & keep if more PC/SP pairs are distinct.  Repeat until it’s rare enough that can live with dups.

    Cliff Notes: The whole-program ahead-of-time thing isn’t feasible for us.  How to do this on-the-fly during tick-profiling?  Hash PC/SP, find a tick record, 1-time-in-1024 check for dups (crawl stack the hard way and confirm call-stack), otherwise assume unique & increment tick record.  If MISS in hash table, then crawl stack the hard way.  If record is labeled as having dups, then crawl the call-stack & find true record?)  Actually, probably expect the call stacks to be common, but the PC’s to be fairly unique.  So really asking for unique RPC+SP (not PC+SP).

    His experiments: collect PC/SP pairs about 100 times/sec.  0.2% slowdown (must have had careful measurements to detect that!)

     

    Breadcrumbs – Recovering Efficiently Computed Dynamic Calling Contexts

    Also builds on probabilistic calling contexts. 

Given a (nearly) unique ID (say, computed at each fcn entry via a simple hash on the return PC).  In the past, reversing these unique ids: use a dictionary (very large & high cost), use a calling-context-tree (smaller, but requires you to track where you are in the CC-tree at runtime).  Forward search, but undecidable and limited options for pruning.

Attempt to reverse the hash fcn (but only when we need to take the semi-unique id) and convert it to a call-stack.  Straight-up reversal is no good, but it is easy to guide the search.  Callers of the leaf call themselves have structure – e.g. expect to see a call-op at the call-site, etc.  Still need a list of all call-sites (generally available anyways because of inline-cache structures).  His function is “p' = (3p+c)”, where “c” is the RPC?  Requires a handful of instructions on function entry.
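In code, the per-call bookkeeping is just this (my sketch; “callSiteId” would really be the return PC):

    // Computed at every function entry; cheap enough to leave on all the time.
    static int enterFunction( int ctx, int callSiteId ) {
      return 3*ctx + callSiteId;   // the "p' = 3p + c" hash
    }
    // Reconstruction runs it backwards: guess a caller's call-site c, check
    // that (ctx - c) is divisible by 3 (ignoring wrap-around), and recurse
    // on (ctx - c)/3.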

    Cliff Notes: If this reverse search is successful most of the time, it says that during a profile tick we can record just the PC/SP (or RPC/SP/PC?) and then reverse them to the entire call-stack later – on demand when RTPM requests.  So a ‘tick’ is no more than record a few bits (and handle buffer overflow), but no stack crawl.

Cliff Notes: combined nicely with ‘null-assignment-stack’ info, i.e. collect the entire stack and ‘color’ all NULLs with a unique stack id.  Then if we get an NPE we can report both the NPE at the crash site and also the call-stack that assigned the NULL in the first place.

     

    Transparent Clustering – Phil McGachey, Eliot Moss, Tony Hosking

    Attempt to auto-distribute multi-threaded programs.  Pure Java, no JVM hacks.  No programmer involvement.  Dynamic class re-writing.  JVMs can vary.  Hardware can vary.

All machines on a network run RuggedJ nodes.  RuggedJ includes a rewriting class loader, some runtime libs, and some application-specific partitioning strategy.

    Rewriter: generate lots of classes & interfaces; especially hook methods & fields.  Hook into the runtime.  Abstract over object location; remote & local objects.

Ex: Standard class “X”.  Rewrite into X_Local and X_Stub classes.  X objects on Node A are of type X_Local.  On Node B, X objects are really X_Stub objects where all the calls are remote calls.  Both X_Local and X_Stub implement the “X” interface.  Some other Y_Local can refer to X_Local on Node A, but refer to X_Stub on Node B.  If we want to migrate from Node A to Node C, we want to indirect the X_Locals; so for mobile objects there's an indirection layer called “X_Proxy”, which can then be switched from pointing to an X_Local vs an X_Stub (if the object migrates).  Non-mobile objects skip the X_Proxy indirection layer. 
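The shape of that rewrite, as I understand it (a sketch; the remoting plumbing is elided):

    interface X { int f(); }       // the original class becomes an interface

    class X_Local implements X {   // the real object, on its home node
      int field;
      public int f() { return field; }
    }

    class X_Stub implements X {    // remote reference, on every other node
      public int f() { return remoteCall("f"); }
      int remoteCall( String method ) { /* RPC to the home node */ return 0; }
    }

    class X_Proxy implements X {   // extra hop, only for mobile objects
      X target;                    // re-pointed between X_Local & X_Stub on migration
      public int f() { return target.f(); }
    }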

    Inheritance & interfaces are maintained in the rewritten classes.  Static classes conform to the RuggedJ model (I assume replication is allowed?).  Arrays are also wrapped to support indirection. 

    Constraints: Native code breaks everything.  Try to not rewrite stuff referred to by native code (maybe can intercept all native calls and rewrite args?).  System code is also an issue: loaded by the system loader, is not rewritten.  (Can they use -Xbootclasspath?).  But most code run is actually system code!  (eg. String libs are very common!)

    So must deal with system code without changing the JVM. 
    Can wrap system class objects.  Must unwrap when passing to system or native code; must re-box on return.  Re-wrapping is expensive: need the exact same wrapper, so need some kind of hash-lookup to do re-wrapping.  High overhead, so avoided where possible.
Extend: some classes can be extended and preserve the original class & signatures.  The extended class follows the RuggedJ model but also the original system model. No need to unwrap/re-wrap.  Does not work e.g. when system code generates the object (since sys code makes instances of the original and not of the extended subclass). 
    Some system classes are only ever referenced from user code; can just dup the code out of rt.jar and make a user-class (which now gets rewritten by the normal RuggedJ model). 
    Immutable classes.  Just replicate.

    Apparently able to cluster some complex programs.
    No idea about performance as compared to explicitly clustered apps.

     

    Jikes RVM Native Threads

    Was Green threads, now Native threads
    (everybody else is dropping green threads too….)
    Motherhood & Apple Pie.

     

    Computer Science as an Experimental Science

    Reproducibility?  Experimental bias?
    Show how these problems are solved in very-high-energy astrophysics.

    (1) No publish without multiple colleague confirmation.
    (2) Publish
(3) MUST also now publish reproductions or non-reproductions of other people's experiments (or lose credibility as a research entity).  But the bar to publication is lower, and expectations are lower, i.e., people expect to publish in-progress work

    Biggest comp-sci problem now: lack of infrastructures.
    Examples of infrastructure bias: green-vs-native threads, some barrier styles are hard to use (.Net embedded objects), stack-scan (OVM makes fast stack scanning at the cost of everything else being slower).  HotSpot (high-value compiler, good GC, good most everything) vs JIKES.

    Cliff Notes: This was actually a really fun talk (and lots of discussion) despite my paucity of notes. 

     

    Web 2.0 Profiling

Do web 2.0 workloads differ from legacy ones, e.g. SpecJVM98, the DaCapo suite?
    Relies on browser to run Flash, JavaScript, REST, AJAX.
    Benchmarks: IBM social networking software; Java PetStore 2.0, AJAX-based RIA using JEE5, uses Lucene search.
    Apply workload: viewBlog (from 9 blogs), createBookmark (total 14), selectTag (PetStore 9), etc. 
Workload mix can be varied, but is based on commonly observed usage patterns from internal deployments of Lotus Connections.

    Wanted to remove DB from the workload, put DB on ramdisk.  Same for LDAP.  Used Grinder to generate load.  Main server running J9/WebSphere/Lotus and also Sun Glassfish/Petstore both on 2-way power5 1.66GHz.  Fairly reasonable setup for a small web/app load.

    See lots more chattiness; very short fast requests, high interrupt rates, smaller requests but many more of them.  Looked at low-level profiling.  See fewer I-cache misses as compared to a large legacy apps.  Better I-cache behavior.  But data-cache miss rate is still very significant.

    Also different transactions have different scaling limitations.  For PetStore the Java persistence layer started falling off after 4 cpus.

     

    Stream Processing

“Spade” – Cluster stream processing at IBM.  Continuous high-level streams.  A stream arrives for weeks & months; want to process and then emit stream data continuously.  Want to express the problem with a language.

    Priorities – fast streaming on a cluster; then Generality; finally Usability.  Beyond StreamIt.  Like StreamIt, works with streams that can be split & joined.  Language is typed; stream contents are strongly typed. 

     

    Demystifying Magic – High-Level Low-Level Programming

    Seen this talk before, but this time given by Steve Blackburn.  Nice historical perspective on writing system software in C vs asm – and the parallels to writing sys software in Java vs asm.

    Java-In-Java
    Key Principle: Containment.  Write as much as possible (99%!) in pure Java, and dive into the low level “magic” bits as little as possible.
    Extensibility: Requires change quickly, languages change slowly.
    Encapsulation: contain low-level semantics
Fine-grained lowering of semantics: minimize impedance, separation of concerns

    Framework:
    Extend Java semantics; intrinsics
    Scoped semantics changes
    Extend types; un/boxing, ref-vs-value, architecture sizes
    Pragmatic: retain syntax where possible to keep front-end tools working

    Introduce raw pointer type: “Address” looks like a Java type.
    E.g.: Address a = …;   … = a.loadByte(offset);
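Sketched a little further – a toy version only; the real thing (org.vmmagic.unboxed.Address in Jikes RVM) is intrinsified by the compiler, so the calls below become bare load/store instructions:

    final class Address {
      private final long addr;                // illustration only: at runtime the
      private Address( long a ) { addr = a; } // real type has no object identity
      Address plus( int offset ) { return new Address(addr + offset); }
      byte loadByte() { /* intrinsic: compiles to a load */ return 0; }
      void storeByte( byte b ) { /* intrinsic: compiles to a store */ }
    }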

    Semantic Regimes:
    Additive: “its ok here to have unchecked casts”
    Subtractive: “no allocation via new allowed”, similar to HotSpot VerifyNoGC.

    Allow “unboxed” types – no v-table, so it’s a bare primitive type.  But this isn’t the default (but can be asked for).

    Abstraction results are good: can implement a JVM on bare hardware and with a simple change essentially virtualize the same JVM inside a GC debugging harness running inside another JVM (where the virtualized JVM’s heap is just a big array in the outer implementing JVM), etc.

     

    Grace, Safe Multithreaded Programming for C/C++ – Emery Berger

     

    Seen the paper before, but this is the talk by Emery.

    Fake sequential semantics on stock multi-threaded hardware using fork/join and process protection. 

    Grace is a pthreads replacement: “g++ -lpthreads” becomes “g++ -lgrace”.

Speedups: in the absence of common benign races, Grace-run programs run about 10% slower than raw pthreads – but have full sequential semantics.  It's “as-if” all the parallel programs are run sequentially immediately at the thread-spawn point. 

    CAN’T run applications where threads run forever, i.e. reactive or server apps.
    Works well with fork/join, Cilk, TBB, etc.

So thread spawn becomes a unix fork with COW, using mmap to allow memory sharing.  At join points, smash in the joined threads' memory updates via mmap.  Also need a scalable thread-local malloc heap, plus aligned globals (to avoid false sharing at page granularity), plus some improved I/O.

    Some simple measures remove nearly all false sharing.  Big one: everybody mallocs into their own space.  2nd big one: spread the globals out 1 per page.

Thread-spawn is as cheap as a fork on Linux (has experiments to show it).  Due to thread-CPU affinity, if you spawn a bunch of threads they share a single CPU for a while, whereas the scheduler immediately spreads forked processes across CPUs.  So at the low end of thread-grain-length, fork is actually faster than thread-spawn.

    Real performance limiter for Grace is conflicts & rollbacks, and not thread-spawn overheads.

    Performance is much better than e.g. STM, because after the 1st touch on a new page (and the SEGV & accounting), all accesses on that page run at full speed.

     

    GC Assertions: Using the GC to check heap properties

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.110.3949&rep=rep1&type=pdf

    Global knowledge is required to understand seemingly local properties.
    JBB example:

        Order order;
        for( ... )
          order = ...;
        delete orderTable;
        // ? are all 'orders' dead here?

But actually ‘order’ is held by the last-customer-transaction (as an optimization, or convenience for the customer?).  Leads to a leak of ‘orders’.  Really: the programmer does not understand the program; it's too complex.

    Add assertions to the code, and use the GC to check the property.

Cliff Notes: Azul is already gathering global heap properties during GC – but not asserts.  Gathering liveness & points-to info.  If points-to included a handful of sample real instances (all the samples linked when possible), it would be a very powerful way to instantly check some heap properties.

    Sample asserts:
    ‘assert_dead(p)’ – expect to reclaim ‘p’ at the next GC cycle
    Or region-collection: start_region(); … assert_all_dead();
    Or shape property: assert_unshared(p);
    Or ownership properties (members of a collection are ‘owned’ by the collection)

    If an assert fails, then do a slow crawl and provide the full path through the heap showing the failure.   Asserts only checked at GC points.

     

Proving Correctness of Abstract Concurrency Control and Recovery – Trek Palmer & Eliot Moss

     

    Transactions – closed nesting has issues
    Open nesting & Boosting need from programmer: conflict info & roll-back info.
    These are hard to provide correctly.

    This work: a description language of abstract data types or structures.
    Can describe the conflict predicates & inverses.
    Can prove correctness of the conflict & inverse expressions.

Working with the abstract description of the data structure, and NOT e.g. with the real Java implementation.  E.g., no loops, no mutations, no recursion.  So the language isn't a ‘real’ language, but can describe many kinds of ADTs in code that looks sorta like Java.

    The output isn’t a functional program; instead it is the result of using a SAT solver to prove correctness. 

    XTN-semantics allows conflict detection to be proven correct pairwise, instead of having to do full interleaving.  Formally, a conflict predicate is correct iff it is true when the operations do not commute.

    The conflict predicate tells when 2 XTNs conflict, and the inverse allows an optimistic XTN to rollback.
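
    To make that concrete, here’s my own Java-flavored sketch of a conflict predicate & inverse for an abstract Set (the talk’s description language is NOT Java; this just shows the flavor of what the tool reasons about):

        import java.util.Set;

        final class SetOps {
            // Two Set ops commute unless they touch the same element and at
            // least one of them writes; this is the predicate the tool proves.
            static boolean conflict(String op1, Object x1, String op2, Object x2) {
                boolean sameKey   = x1.equals(x2);
                boolean bothReads = op1.equals("contains") && op2.equals("contains");
                return sameKey && !bothReads;
            }

            // The inverse lets an optimistic XTN roll back: undo a successful
            // add(x) with remove(x), and a successful remove(x) with add(x).
            static Runnable inverse(Set<Object> s, String op, Object x, boolean succeeded) {
                if (!succeeded) return () -> {};   // a failed op changed nothing
                return op.equals("add") ? () -> s.remove(x) : () -> s.add(x);
            }
        }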

    Obvious use-case: use this tool to write a transactional version of NBHM or other JDK concurrency utilities.

     

    Hard Real-Time GC Scheduling

    – Periodic Scheduling
     – GC runs at highest priority, but periodically yields to the mutator
     – Metronome
    – Slack-based Scheduling
     – GC runs at lowest priority
     – Can be preempted at any time
     – Mackinac (Sun RTS)
    – Work-based Scheduling
     – GC runs at allocation time
     – Problem with allocation-rate jitter

    HRT – deadlines must never be missed, AND it must be analytically verifiable that no deadlines are missed.  Systems therefore tend to be very simple, so that they can be proven.

    OVM – like Metronome.  Dynamic defrag; arraylets; Brooks-style barrier; replicating barrier; incremental, concurrent, supporting slack-based scheduling.

     

    Understanding Performance of Interactive Applications

    Typical profilers give the wrong numbers: they report total time spent, not the time between the mouse-click and the display update.

    AWT/Swing & SWT are similar: single-threaded, with a GUI thread in an “event loop”, relying heavily on the Listener model.  So profile using the GUI call-back Listeners, between user events and the eventual display change.

    Really simple idea: a profiling & event-visualizer tool for the click-to-view times.
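
    A minimal sketch of that idea in Swing (my code, not the tool from the talk): wrap the real listener and timestamp from event arrival until the Event Dispatch Thread drains the work the handler queued – a rough proxy for “display updated”.

        import java.awt.EventQueue;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;

        final class TimedListener implements ActionListener {
            private final ActionListener inner;
            TimedListener(ActionListener inner) { this.inner = inner; }

            @Override public void actionPerformed(ActionEvent e) {
                final long start = System.nanoTime();
                inner.actionPerformed(e);          // run the real handler
                EventQueue.invokeLater(() ->       // fires after the work the handler queued
                    System.out.printf("click-to-view: %.1f ms%n",
                                      (System.nanoTime() - start) / 1e6));
            }
        }

    Install it with button.addActionListener(new TimedListener(realListener)).  Repaints scheduled even later still escape this measurement, so it under-counts; a real tool would need to watch the actual paint events.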

    Program Metamorphosis

    Trouble w/refactorings: each step must preserve program behavior.  But if we want multiple refactorings, then at each step along the way fixup code (needed to preserve program behavior) pollutes the code.  Want to do multiple refactoring steps at once – while leaving the program possibly broken in the in-between steps.

    So compute a program-semantics (names & def-use chains), and let the user make partial refactoring steps and then declare “I think I’m done” – and the system compares the new program-semantics with the old, and reports if they are not equal. 

    So he actually comes up with primitive not-quite-refactoring steps, which he composes to build larger “real” refactorings, or composes many of them to build a multi-stage refactoring. 

    Instruction Selectors

    HotSpot C2 uses a BURS framework.  Very hard to optimize & debug.  So these guys have a semantics which can span both the ideal IR and real machine instructions.  Describe both using “CISL” and the tool does the mapping.

    Once the tool has a mapping, the user provides an adapter that converts the mapping into the code needed by the compiler back-end.  This would replace e.g. the ADLC.  Gives examples of machine encodings for PPC and Java bytecodes.

    CISL is given an IR tree and produces a target tree with equivalent semantics.

    Not sure it’s entirely better than BURS (maybe no worse), but I’m still sold on hand-written greedy match rules in e.g. C++/Java.  These code-gen-gen tools are a pain for long-term maintenance.

     

    Do Design & Performance Relate?

    Is fast code required to be ugly?  Is beautiful code required to be slow?

    Pick 200 code metrics.  Also pick some performance metrics (cycles, single-threaded, objects created, etc).  Could not follow the talk… or at least he wasn’t very clear about the objectives & results (if any).  Interesting idea though.

     

    Comments


    Hi Cliff,

    Do you have any pointers to the paper — if any — that went with the Rhodes Brown talk? Googling did not yield anything useful 😉

    Thanks,
    Andy

    Posted by: Andy Georges | Jun 4, 2009 12:43:54 AM


    Try: “Statistically rigorous Java performance evaluation” and “Wake up and smell the coffee: evaluation methodology for the 21st century” as a start.

    Also this paper: “Relative factors in performance analysis” – late in section 5 shows that a minor offset in the VM’s text segment can vary performance by 6%; compiling in Intel HPM code (and not using it) can vary performance even more.

    But I can’t find a reference to the result that “changing your environment variables changes your stack/text placement, which changes performance by 10%” – but I’ve seen the effect & worked on fixing it before (this & a related effect triggered Motorola to do an i-cache code-placement optimization in ~1995).

    Cliff


    Odds & Ends

    Various pending goodies…

    Load-Linked/Store-Conditional vs Compare-And-Swap

    Pretty much all modern CPUs include some form of instruction for doing an atomic update – required for shared-memory multi-CPU machines (X86 has lots and lots!).  There was a long period of debate in the CompSci community on what constituted the “minimal” instruction needed to do useful multi-CPU work.  Eventually the community decided that the Compare-And-Swap (CAS) instruction and the Load-Linked/Store-Conditional (LL/SC) instruction combo are both (1) sufficient to do useful work (“infinite consensus number”) and (2) relatively easy to implement in real hardware.  X86’s, Sparc’s and Azul’s CPUs use CAS.  IBM’s Power, Alpha, MIPS & ARMv6 all use LL/SC.  ARMv5 only has an atomic-swap.

     

    LL/SC and CAS are slightly different in how they work, leading to subtly different requirements on algorithms.  With LL/SC, you first Load-Linked a word.  The hardware marks the cache line as “linked”.  You then manipulate the word (e.g. add 1 to implement a simple shared atomic counter).  Finally you issue a Store-Conditional to the same address.  If the cache line is still “linked”, the store happens.  If not, then not.  The line remains linked as long as it does not leave this CPU’s cache; e.g. no other CPU requests the line.  Any attempt to take the line away causes SC failure (retry is up to the algorithm being implemented).  “Weak” LL/SC implementations can easily lead to live-lock – if Load-Linked requests the cache line in exclusive mode (required to do the Store-Conditional), then each LL causes all other CPUs to lose their “link” – and their SC’s will fail.  I suspect most modern implementations of LL do not request the line in exclusive mode – avoiding the obvious live-lock failure.  The downside is that a simple uncontended LL/SC on a word not in cache requires 2 cache-miss-costs: the original miss on the load, and a second miss to upgrade the line to exclusive for the SC.
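
    Java exposes neither LL/SC nor raw CAS directly, but the “simple shared atomic counter” above reads naturally as the CAS retry loop the JDK’s atomics are built on (a sketch of the idiom, not of any particular hardware):

        import java.util.concurrent.atomic.AtomicInteger;

        final class CasCounter {
            private final AtomicInteger v = new AtomicInteger();

            int increment() {
                int old;
                do {
                    old = v.get();                          // plain load of the current value
                } while (!v.compareAndSet(old, old + 1));   // CAS fails if someone raced us; retry
                return old + 1;
            }
        }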

     

    With CAS, you typically first load a word with a normal load, then manipulate it.  Finally you issue the CAS, which compares the memory contents with the original loaded value: only if they match does the swap happen, updating memory.  CAS can succeed if the original cache line leaves the CPU and returns holding the same value – this allows the classic ABA bug.  In some cases, Garbage Collection can nicely side-step the ABA bug; you can never find an aliased copy of an “A” pointer unless all copies of A die first – including the one loaded before the CAS.  Similar to LL/SC, there can be 2 cache-miss costs: one for the original load and again to upgrade the line for the CAS.  Azul has a load-exclusive instruction to avoid this – a plain load, but the line is loaded exclusively.  With CAS you can issue any other instructions between the original load and the CAS; typically with LL/SC there’s a small finite number of operations that can happen between the LL and the SC without losing the “link”.  E.g., guarding a one-shot init function by atomically moving a pointer from NULL to not-NULL: with LL/SC you must load the line before the SC; with CAS no separate load is needed (e.g. an “infinite distance” between the original load and the CAS). 
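
    Here’s that one-shot-init idiom in Java via AtomicReference (a sketch; the class and method names are mine):

        import java.util.concurrent.atomic.AtomicReference;
        import java.util.function.Supplier;

        final class OneShotInit<T> {
            private final AtomicReference<T> slot = new AtomicReference<>(null);

            // Atomically move the slot from NULL to not-NULL exactly once,
            // no matter how many threads race here.
            T get(Supplier<T> factory) {
                T v = slot.get();
                if (v != null) return v;        // already initialized
                T fresh = factory.get();        // several racers may each build one...
                return slot.compareAndSet(null, fresh)
                    ? fresh                     // ...but only one CAS wins
                    : slot.get();               // losers adopt the winner's value
            }
        }

    Note the design point from the paragraph above: the CAS needs no prior load of the line at all – losers simply discard their instance.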

     

    These atomic instructions only need to guarantee atomicity of update, not ordering w.r.t. other memory operations.  Nothing in the academic literature requires any sort of global ordering on these atomic instructions.  Instead the usual academic assumption is that all memory operations are strongly ordered – which is obviously not true on all modern hardware.  Practitioners are required to insert memory fences as needed to achieve the desired ordering.  Nonetheless most implementations of CAS include a strong ordering: X86 and Sparc certainly do.  Azul’s CAS does not, and this allows Azul to e.g. implement simple performance counters that do not drop updates and also do not force ordering w.r.t. unrelated memory operations.  (As an experiment, try writing a multi-threaded program to increment a simple shared counter in a tight loop without locking.  Report back the percentage of dropped counts and the throughput rate.  Then try it with an Atomic counter.  My simple tests show that with a handful of CPUs it’s easy to achieve a 99%+ drop-rate – which basically makes the counter utterly useless.)  I am less familiar with the fence properties of common LL/SC implementations.  Any of my gentle readers wish to report back on the situation for Power, MIPS, ARM, etc.?
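
    If you want to try the experiment, here’s a self-contained version of what I mean (thread and iteration counts are arbitrary; tune to taste – the drop rate varies wildly by machine):

        import java.util.concurrent.atomic.AtomicLong;

        public class DroppedCounts {
            static long racy;                                  // plain field: updates get lost
            static final AtomicLong atomic = new AtomicLong(); // CAS-based: never loses one

            public static void main(String[] args) throws InterruptedException {
                final int THREADS = 4, ITERS = 10000000;
                Thread[] ts = new Thread[THREADS];
                for (int i = 0; i < THREADS; i++) {
                    (ts[i] = new Thread(() -> {
                        for (int j = 0; j < ITERS; j++) {
                            racy++;                    // racy read-modify-write: drops counts
                            atomic.getAndIncrement();  // atomic read-modify-write: drops none
                        }
                    })).start();
                }
                for (Thread t : ts) t.join();
                long expect = (long) THREADS * ITERS;
                System.out.printf("racy:   %d/%d (%.1f%% dropped)%n",
                                  racy, expect, 100.0 * (expect - racy) / expect);
                System.out.printf("atomic: %d/%d%n", atomic.get(), expect);
            }
        }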


    build.java

    Some time ago I reported on a build tool I’ve been using.  Currently ‘build’ is being used to build HotSpot internally at Azul.  We’ve got it building 400+ files, plus build rules for building portions of the JDK, compiling w/gcc & javac and also ‘javah’, tar, jar, strip, binary signing, etc.  In addition to the typical parallel-make functionality, it includes auto-discovery of C++ header files, intelligent load-balancing of big builds, etc.  It’s blazingly fast, and it’s easy to hack (being written in plain-olde Java).  I hunted around a while and couldn’t find a good place to dump a medium-large blob of code, so here’s a sample build.java in 4 parts:

    Part 1
    Part 2
    Part 3
    Part 4


    A “HotSpot -server” aka C2 compiler IR visualizer

    No, I didn’t do it!  This deserving grad student did it.

    Here is the reference to the visualizer for the ideal graph of the HotSpot server compiler: 

    http://ssw.jku.at/igv/master.jnlp 

    It is a JNLP application hosted on a university server (http://ssw.jku.at/General/Staff/TW/ also has a test file to download), but it’s the easiest way to get a first impression.  The source code of the tool is hosted on http://kenai.com/projects/igv  The server compiler instrumentation is part of OpenJDK and is also included in Sun’s weekly builds of Java 7. 


    More C2 Goodies

    John Cavazos wrote: 
    … One of the things I am currently looking at is determining the right phase-ordering of optimizations applied to a particular program being optimized.  I have some nice (un-published) results for JikesRVM, 
    but it would be nice to replicate the research for HotSpot… 
     

    I have strong arguments for nearly all orderings of our passes, so I’m curious to know about your phase-ordering results. 
    The only obvious transform we might be missing is PRE, but our peephole opts pretty nearly subsume PRE (they are not mathematically equivalent – I can write programs where either PRE or the peephole stuff makes progress against the other).  In practice, PRE will find essentially nothing once the peephole opts are done.  You’ll notice that we do the peephole pass a lot; it’s totally incremental and provably linear bounded.  In other words, if there’s nothing to be gained then there’s no cost in trying.  The peephole pass includes, amongst other things, all pessimistic forms of copy propagation, value equivalence, constant propagation (especially the null/not-null property), constant test folding, repeated test folding, dead code elimination, load-after-store opts (aliasing analysis is included for free during building of SSA form), algebraic properties, and lots more. 

    For HotSpot, the optimization ordering is: 

    • Build an inline tree, including class hierarchy analysis.  This is the one pass I’d be willing to move, as it happens too early.
    • (a single unified pass:) parse bytecodes, inline, build SSA form (the IR remains in SSA form always), do peephole opts over SSA form.  This pass typically costs 40% of compile-time budget.
    • Fixed-point the peephole opts over SSA form, once all backedges are known after parsing.
    • Loop opts round 1: “beautify loops” (force polite nesting of ill-structured loops), build a loop-tree & dominator tree, split-if (zipper-peel CFG diamonds with common tests, plus some local cloning where I can prove progress), peel loops (required to remove loop-invariant null checks; see the sketch after this list)
    • Fixed-point the peephole opts over SSA form
    • Loop opts round 2: “beautify loops” (force polite nesting of ill-structured loops), build a loop-tree & dominator tree, lock coarsening, split-if & peel – but if these don’t trigger because there’s nothing to gain, then do iteration-splitting for range-check-elimination & a 1st loop unrolling.
    • Fixed-point the peephole opts over SSA form
    • Conditional Constant Propagation (the optimistic kind, instead of the pessimistic kind done by the peephole pass)
    • Iterate loop (split-if, peel, lock coarsen – but these typically never trigger again and take very little time to check), unrolling & peephole passes, until loops are unrolled “enough”.  On last pass, insert prefetches.  Typically this iterates once or twice, unless this is a microbenchmark and then unrolling might happen 8 or 16 times.
    • Remove tail-duplication, and a bunch of other minor code-shaping optimizations e.g. absorb constant inputs into deoptimization-info in calls, or commuting Add ops so that 2-address machines can do update-in-place.
    • Convert “ideal” IR into machine code IR.
    • Build a real CFG for the 1st time, including a dominator tree, loop tree.  Populate with frequencies from earlier profiling.
    • Global latency-aware (loop-structure-aware) scheduling.
    • Replace null-checks with memory references where appropriate.
    • Register allocate.  Internally this has many passes.  This pass typically costs 40% of compile-time budget.
    • Sort basic blocks to get good control flow ordering (forward branches predicted not-taken, backwards predicted taken, etc)
    • Some last-minute peephole opts.
    • Emit code into a buffer, including OOP-maps & deoptimization info.
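
    The peeling-for-null-checks trick mentioned in loop opts round 1, sketched at the Java source level (C2 does this on its IR, not on source; the example is mine):

        class Peel {
            static class Box { int val; }

            static int sum(Box b, int n) {
                int s = 0;
                // Original loop: 'b.val' implicitly null-checks 'b' on every iteration.
                //   for (int i = 0; i < n; i++)  s += b.val;
                // After peeling iteration 0, the peeled copy's check dominates the loop:
                if (n > 0) {
                    s += b.val;                 // the one remaining null check of 'b'
                    for (int i = 1; i < n; i++)
                        s += b.val;             // dominated by the check above: check-free
                }
                return s;
            }
        }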


    Travel Plans

    I’ve got too much travel coming up!

    Apr 30, May 1st – DaCapo in Boston.  Favorite small group; GC & Java focused.  Website: http://www.cs.tufts.edu/research/dacapo/.  I had to turn down the invite to an SSA seminar in France because the dates conflicted.  Very sad.  I hope one of the attendees will post a trip report.

    May 11-May 15.  IFIP WG2.4  (International Federation for Information Processing, Working Group 2.4 – a *really* old European group with a random mix of industry & academia.)  It’s a nice group to preview JavaOne talks, and the meetings are always a week-long rambling discussion in some quaint resort.

    June 2-June 5.  JavaOne.  I owe slides for 3 talks by end of this week.  ‘enuf said.

    July 6-10th – ECOOP, as a paid-for invited speaker.  A free vacation to Italy!   🙂