Build Systems and Make

A weekly podcast with Cliff Click talking about all things to do with programming, programmers and computer performance.

This is a short talk on build systems and the olde-school “make”.



6 thoughts on “Build Systems and Make”

  1. Great podcast. I also like make; I recall a distributed make build process I created a few years back that really sped up our builds.
    It was a smallish team (approx. 10 developers) writing in C with approx. 300 to 400 source files.
    I renamed make.exe and created a new make.exe wrapper that dup’d stdout and stdin to 1-byte blocking pipes. Because make always echoes the command before executing it, I could define which commands were non-serial and send them to another machine for execution. (I also created my own cc.exe wrapper to do the lying/distributing.)
    E.g., when a cc.exe command was echoed, I would “lie” to the real make (return 0, “success”) and send the actual cc command to another system; when any non-cc.exe command was echoed, I would stop reading the stdout pipe, which would “pause” make’s execution until all of the distributed cc.exe jobs finished. I also had some code that reassembled the various cc.exe stdout/stderr outputs to their makefile order. The coolest thing was that this required absolutely zero changes to the developers’ existing makefiles. I scaled it up to 10 “slave” PCs and kept getting faster, just compiling over a network connection, leaving the cc.exe .obj files in the make system’s original directory. Official build times went from 6 hours down to 15 minutes.
    It was the most fun I’ve ever had working on a build system.
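The interception trick described above can be sketched in Python (rather than the original renamed make.exe/cc.exe wrappers): compile commands are “lied about” and dispatched to a worker pool, while any non-compile command forces a drain of all outstanding compiles first, and outputs are reassembled in makefile order. The `schedule` function and its runner callbacks are hypothetical stand-ins for the real wrapper machinery.

```python
from concurrent.futures import ThreadPoolExecutor

def schedule(commands, run_remote, run_local, workers=10):
    """Dispatch cc.exe commands to remote workers; serialize everything else.

    `commands` is the ordered list of shell commands make echoes.
    `run_remote`/`run_local` are caller-supplied runners; results come
    back reassembled in the original makefile order.
    """
    results = [None] * len(commands)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = []  # (index, future) pairs for in-flight compiles
        for i, cmd in enumerate(commands):
            if cmd.split()[0].endswith("cc.exe"):
                # "Lie" to make: report success now, compile remotely.
                pending.append((i, pool.submit(run_remote, cmd)))
            else:
                # Non-compile step: wait for all outstanding compiles first,
                # mimicking the "stop reading the pipe" pause.
                for j, fut in pending:
                    results[j] = fut.result()
                pending.clear()
                results[i] = run_local(cmd)
        for j, fut in pending:
            results[j] = fut.result()
    return results
```

With fake runners, two compiles run in parallel and the link step only executes after both have completed, matching the pause-and-drain behavior described above.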

    • Wow, that’s an amazing war story! I’ve made some crazy build systems (distributed Perl scripts running on 3 different CPUs / 8 different OSes), but that one takes the cake.

  2. I can’t disagree strongly enough with checking binaries into git. Git downloads its entire history onto every client, so you get EVERY binary EVER stored in GitHub, and when you clone, they ALL come down to your laptop, no matter how long ago they stopped being relevant. Your build system should be smart enough to not check the internet if it already has an artifact. You shouldn’t abuse source control to work around a bad build system.

    • Been back and forth on this myself a few different times. In the long run, for building a complex project with a large team, I found it was *always* more reliable to *never* download bits that went into the product… unless they came from the source-code control system. Disk is cheap, and bandwidth is not insane for the download… and there’s also Perforce, for which this runs blindingly fast. I’d ditch Git before I’d ditch the “all binaries are source-code-controlled” operating theory. That said, I never had a speed problem with Git; it’s just that the very first download is a little slower.

      • I’m not sure I really got the idea. Cliff, do you suggest to store all my artifacts in Git?
        Let’s say my company has three independent artifacts (for example JARs): A, B, C.
        Artifact A depends on B and C. Should I store B.jar and C.jar in A git repo?
        What about 3rd-party dependencies? Should I store all of them in my A git repo too?

        • Obviously you have to decide where to draw the line.
          If it’s trouble to get it, it goes into “Git”.
          Otherwise, I usually have a “make init” that checks for, and downloads as needed, specific library versions.
          If you have a lot of 3rd party dependencies then that’s trouble already. Sometimes unavoidable, but I try really hard not to go there.
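A “make init” step like the one mentioned above might look roughly like this sketch: fetch a pinned dependency version only if it isn’t already on disk, so repeat builds never touch the network. The function name, artifact names, and download URL are all hypothetical.

```python
from pathlib import Path
from urllib.request import urlretrieve

def init_dep(name, version, dest_dir="deps", fetch=urlretrieve):
    """Ensure deps/<name>-<version>.jar exists, downloading it only once."""
    dest = Path(dest_dir) / f"{name}-{version}.jar"
    if dest.exists():
        return dest  # already have the pinned version; skip the network
    dest.parent.mkdir(parents=True, exist_ok=True)
    # Hypothetical artifact server; a real "make init" would point at
    # whatever internal mirror holds the blessed library versions.
    url = f"https://example.com/artifacts/{name}-{version}.jar"
    fetch(url, dest)
    return dest
```

Because the check is by exact pinned version, bumping a dependency just means changing the version string; stale copies under a different version never mask the new one.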
