The conference will be over tomorrow and I’m quite satisfied with the outcome. Now for some comments about the announcements and (NDA permitting) about what I learned.
I had some vague idea of going to a few iPhone sessions and letting presenters (or friends developing for the iPhone) convince me that I should start developing for it. No such thing happened; session overlap was so severe, and there were so many labs to go to and people to talk to, that I skipped every non-Mac session and discussion. What little I heard in the corridors confirmed my notion that the current state of iPhone development and the App Store deviates too far from my preferred position as a utility developer, a niche Apple still keeps closed on the iPhone. Maybe when the tablet comes out…
Speaking of tablets, while everybody agrees that one is in the works, it seems to be a year or so away from announcement. (Ditto for the new CPUs I hinted at in my last post.)
Snow Leopard is the small new thing. Small for the user, in a sense: it’s just refinements and greater speed. For developers, though, it’s the BIG new thing. And, as variously described as early as a year ago (can’t find URLs right now), much of the new stuff is powered behind the scenes by open-source projects Apple is driving: the Clang compiler, the LLVM back end, and the technologies made possible by Grand Central Dispatch, blocks and OpenCL. So most of the sessions either expanded on this directly or offhandedly mentioned “there’s an API for that now – and it’s fully XYZ-enabled” (insert one of the technologies above).
These things have become possible because CPU chips have run into a clock-frequency “sound barrier”; 3GHz is about the maximum current silicon can do without extensive and expensive cooling or exotic technology. So multicore has become the solution du jour: all Apple computers now have at least 2 cores, and the top machine has 8 (16 virtual). Expect that number to double every 2 years, at least.
But for years multi-processor machines were hard to program. About 14 years ago, at another WWDC, I bought a Genesis MP 528: this Mac clone had 4 PowerPC 604 processors running at a blistering 132MHz. Those chips had very little cache, and only Photoshop and a few other specialized apps could see more than one CPU, and even then only for image filters. Within two years, the first PowerPC G3 CPU card, with a single processor but a proper cache, running at 300MHz, delivered about the same Photoshop performance, and that performance was then available to all apps. So why didn’t more apps take advantage of the 4 CPUs? The classic Mac System 7 (through 9) had no easy way to allow for this; there was a very primitive multiprocessing API, but the system itself was pretty much locked out of it.
As said in the keynote, Snow Leopard will support only Intel Macs; PowerPC Macs are, therefore, stuck in the Leopard era, and only a few bug fixes will appear for 10.5, then it’s over. I couldn’t find hard figures comparing the installed base; I’ve seen quotes of anywhere between 10% and 35% of Macs still in use being PowerPC. I personally didn’t think this share would dip below 25% before 2011; then again, as a stockholder, I’m glad Apple sold so many new Macs recently…
Some people question why PowerPC users will be left out of the Snow Leopard advantages, and I think I know why. While the top 4-CPU PowerPC machines can still hold their own against more modern machines under certain circumstances, the vast majority of PowerPC Macs have only 1 CPU; only a few big desktops have 2, and even fewer have 4. Most advantages of Snow Leopard come into play when you have at least 2 CPU cores, and there’s serious testing and bug fixing to be done to support an entire extra architecture. Apple probably just weighed those factors (with better numbers than I have available) and decided it wouldn’t work out.
Positives of the new Clang/LLVM combo: better compiler speed and better code optimization (both just starting out, with plenty of power in reserve, while the current gcc compiler and back ends are pretty much maxed out); way better error messages; and the Clang static analyzer, which is just awesome (a word I usually hesitate to use, but this really is!). There are also lots of goodies to come from tighter integration with Xcode. Negatives: it may still generate wrong or inefficient code in some circumstances, and there’s no C++ support yet (I don’t care about that one myself).
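To make the analyzer point concrete, here’s a minimal sketch of the kind of path-sensitive bug it’s built to catch; the function and its comments are mine, invented for illustration, not taken from any session.

```c
/* Hypothetical example: the analyzer walks every path through the function,
   so it can flag problems a normal compile pass never sees. */
int sum_first_two(const int *values, int count) {
    int total;                 /* only some paths initialize this */
    if (count > 0)
        total = values[0];
    if (count > 1)
        total += values[1];
    return total;              /* on the count <= 0 path, the analyzer reports a
                                  garbage value being returned to the caller */
}
```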
A sleeper advantage is, also, that the intermediate (LLVM) bytecode generated by Clang could conceivably be stored as such inside executables and just-in-time compiled for execution on any target CPU. In other words, the same executable could run on a new machine Apple puts out, even one with a new CPU chip or architecture, as long as the JIT compiler is in place for it; application developers wouldn’t have to know (or care).
Regarding blocks (or “closures”, as they’re known in other places), they’re a syntactic convenience for programmers to pass executable code around as data. As such, they make programs more readable. What makes them inordinately powerful in Snow Leopard is that they’re also the basic executable units for all of the cool new multiprocessing stuff in Grand Central Dispatch. Therefore, with a little discipline, it becomes easy for developers to chop up tasks into little slices that can be executed in parallel by however many CPU cores (or, with OpenCL, GPU units) are available to do them; and for the first time anywhere I know of, this facility is available throughout the system, even at quite a low level.
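As a taste of what this looks like, here’s a minimal sketch in plain C (compiled by Clang, which understands blocks), using GCD’s dispatch_apply to spread per-element work over however many cores are around; the data and the toy computation are mine, purely for illustration.

```c
/* Minimal sketch: hand each array index to a block and let GCD run the
   slices in parallel on a global concurrent queue. */
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t count = 1000000;
    float *data = malloc(count * sizeof *data);
    for (size_t i = 0; i < count; i++)
        data[i] = (float)i;

    /* dispatch_apply returns only after every slice has finished. */
    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_apply(count, q, ^(size_t i) {
        data[i] = data[i] * 0.5f + 1.0f;   /* stand-in for real per-element work */
    });

    printf("last element: %f\n", data[count - 1]);
    free(data);
    return 0;
}
```

In real code you’d hand each block a bigger slice than a single element, but the shape is the same: describe the work as a block and let the system decide how many cores to throw at it.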
So, am I running off to convert all of my code to the new technologies? Well, yes and no. Many things still have to be done serially, and the system will do others in parallel behind my back. Also, it seems that writing a generic application that runs on both 10.5 and 10.6 (and uses the new stuff on the latter) is tricky; I’m still investigating how best to do it, and the sketch below shows the direction I’m currently leaning toward. Stay tuned for developments…
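A minimal sketch, with plenty of assumptions: it presumes building against the 10.6 SDK with a 10.5 deployment target (so the GCD symbols come in weak-linked), and it sidesteps blocks entirely by using the function-pointer variant dispatch_apply_f, since as far as I can tell the blocks runtime isn’t there on 10.5. The data and the per-item work are invented for illustration.

```c
/* Hypothetical compatibility shim, assuming weak-linked GCD symbols
   (10.6 SDK, 10.5 deployment target). */
#include <dispatch/dispatch.h>
#include <stddef.h>

static void half_plus_one(void *context, size_t i) {
    float *data = context;
    data[i] = data[i] * 0.5f + 1.0f;          /* stand-in for real per-item work */
}

void process_items(float *data, size_t count) {
    if (dispatch_apply_f != NULL) {           /* weak symbol: resolves only on 10.6+ */
        dispatch_apply_f(count,
                         dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                         data, half_plus_one);
    } else {
        for (size_t i = 0; i < count; i++)    /* Leopard fallback: plain serial loop */
            half_plus_one(data, i);
    }
}
```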