Solipsism Gradient

Rainer Brockerhoff’s blog

Browsing Posts tagged Clang

Apple’s (pre-)announcement of the Apple Watch left the tech world in the usual disarray. Is it an expensive knock-off of Android watches (people tell me there is such a thing!)? Is it an attack on the high-end Swiss watch market? Is it an attack on the low-end Japanese watch market? Is it an even more transparent lock-in attempt on soi-disant “Apple fanbois”? I’d answer “no” to all those questions, but right now I’m more interested in the hardware and software technology of the watch.

Notice that the above link doesn’t mention iOS anywhere, but this other link has the magic word: WatchKit. Quote: “WatchKit Apps. Soon your favorite apps will feature controls and interactions unique to Apple Watch, enabling you to enjoy them in dynamic new ways.”

Speculation about WatchKit since then has usually rested on one or both of these assumptions:

  1. WatchKit will be written in/accessible only from Swift;
  2. WatchKit apps will run under iOS on the Apple Watch.

The first is, of course, wishful thinking from developers investing in the new Swift language. The second is, in my opinion, completely unwarranted and I’ll try to explain why.

This post is the most plausible so far: “WatchKit apps will ship as embedded binaries in iPhone apps, using the same basic principals [sic] as iOS 8 extensions. There will be some mechanism for the watch paired to an iPhone to detect and automatically install these ‘apps’ based on what is available on the paired iPhone. Delete the container app from the iPhone, it disappears from the watch. Xcode will have a template to add a WatchKit app to an iPhone app project.”

Let’s back off WatchKit for a second and look at what we’ve seen of the hardware. The entire main board is shrunk down to a single unit: the S1. If you stop the middle introduction film at 4:46, you’ll see that it’s really a collection of chips and SMT components on an encapsulated multilayer board — not really a “single chip” as the narration says, but many large CPU “chips” nowadays are like that, too. Other than the S1, there’s of course the “Taptic Engine” assembly which does the wrist tapping, the crown sensor assembly, antennas and display, and the most important part: the battery.

Battery life is the make-or-break feature of the Apple Watch. iFixit’s disassembly of the Moto 360 watch shows why: there’s a square-peg battery inside a round casing, rated at 320 mAh. Even though Motorola apparently builds their own batteries, they don’t have enough volume to do a round one. Apple doesn’t have a volume problem and their casing is square, so they’re free to use all remaining volume for a longer-lasting battery.

The 320 mAh rating and the Moto 360’s typical battery life of 12 hours mean that the watch consumes, on average, just under 27 mA. But they run Android on the watch, using an off-the-shelf TI ARM processor with attached RAM, flash memory, and so forth, so that figure is not surprising. In other words, it’s a stripped-down cellphone/MP3 player.

Suppose that Apple did its usual optimization of battery size, usage, etc., in a stripped-down iPod nano. It’s half the size of the nano, which has a 30-hour life, so we can assume half the battery, meaning 15 hours. OK, that would be marginally acceptable, perhaps.
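
Spelling out the arithmetic behind those two estimates (the “half the battery” figure is, as noted, just an assumption on my part):

```latex
I_{\text{Moto 360}} \approx \frac{320\,\text{mAh}}{12\,\text{h}} \approx 26.7\,\text{mA}
\qquad\qquad
t_{\text{Watch}} \approx \tfrac{1}{2} \times 30\,\text{h} = 15\,\text{h}
```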

But remember, the Apple Watch needs an iPhone nearby. In fact, many of the published functions, such as Siri, cellphone call response, GPS and so forth certainly use the iPhone’s hardware and software for that. Remember that one of the culprits of excessive battery usage is generic apps and processes running on the device. Remember that Apple, since the first iPod in 2001, has been very aggressive in optimizing their embedded systems. Remember that the first iPods and iPhones didn’t have any generic apps running on them, either. Remember that Apple already has technologies like Clang, OpenCL and Metal…

All that said, why run iOS and generic applications on the Watch at all? So here’s what I think is likely about the real implementation.

  • Watch OS (or whatever it’s called — did they explicitly call it anything?) will not be a stripped-down iOS; maybe even not a Darwin derivative. It will be a highly optimized embedded system that has a few apps running in as few processes as possible. It will be very robust because it will be able to do only a fixed set of functions.
  • In other words, it will run only those things that may run while the paired iPhone is not available; we don’t know yet, but that might be just the timekeeping and pulse measuring apps. If the iPhone is there, the Watch will also work as a specialized I/O and display device for the apps installed there.
  • WatchKit will run on the paired iPhone inside a special server process; a matching iOS app will show installed Watch apps — probably those apps will be from the normal App Store, since they usually will have an iOS counterpart.
  • So, an installed Watch app will have at least some sort of preference app or pane on the iPhone; no use typing in passwords and such on the Watch, right? The part written in/for WatchKit will contain a server plugin that does the heavy lifting, data collating and communicating with the outside world, but it will also contain the application logic itself, commanding the Watch to do or display certain things.
  • I don’t mean to imply that the Watch will run a full WebKit client and the iPhone a web server; that might be overkill. Perhaps a useful subset of that, perhaps some variation of Display PostScript, some interpreted command language, or just a sequence of drawing orders? The important part is that there’ll be a single process on the Watch doing the UI, and all the application-specific parts can be offloaded to the iPhone (see the sketch just after this list).
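
To make that last point concrete, here is a minimal sketch in Swift of what a “drawing orders” scheme could look like; all the names are made up, since Apple has published nothing about any such protocol. The iPhone side builds a list of orders; a single, fixed UI process on the Watch merely interprets them:

```swift
// Hypothetical sketch only: none of these types exist in WatchKit.
enum DrawingOrder {
    case clear
    case text(String, x: Double, y: Double)
    case tapWrist(times: Int)
}

// iPhone side: the app’s WatchKit plugin decides what the Watch should show.
func ordersForCurrentScreen() -> [DrawingOrder] {
    return [.clear, .text("3 new messages", x: 10, y: 30), .tapWrist(times: 1)]
}

// Watch side: one fixed UI process interprets the orders; no app code runs here.
func render(_ orders: [DrawingOrder]) {
    for order in orders {
        switch order {
        case .clear:
            print("clear display")
        case .text(let string, let x, let y):
            print("draw \"\(string)\" at (\(x), \(y))")
        case .tapWrist(let times):
            print("tap wrist \(times) time(s)")
        }
    }
}

render(ordersForCurrentScreen())
```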

One consequence is that you can forget the idea of “jailbreaking” the Watch to connect to a non-iPhone, of course. Another one is that battery life might be at least a day, maybe even two or more. Nothing on Apple’s site so far contradicts any of my reasoning.

So, will WatchKit be accessible from Swift apps? Certainly. Will it itself be written in Swift? I doubt it for now. Maybe in iOS 9 some of the frameworks in iOS (and OS X) will have been rewritten, assuming that by then the Swift optimizer will be good enough. But that won’t be the case in a few months.

Possible but unlikely: WatchKit may have an API to download actual application code to the S1, which may (or may not) have an ARM-like architecture. Only in such a case — and since there will be no Cocoa/iOS frameworks on the Watch — would I expect the downloaded code to be in Swift (without optionals!), for extra safety; can’t have the Watch crashing and rebooting, right?

Update: Marcel Weiher kindly reminded me of CarPlay, which apparently works like that; nobody would say that cars are running iOS. On the other hand, in that case, the device is connected over USB (that is, reasonable bandwidth) and the car doesn’t have any battery life problems.

Comments welcome.

Over two weeks ago, Apple at WWDC announced something entirely unexpected: thousands of new APIs and a brand-new programming language, Swift. No hardware, of course; it’s a developers conference, remember?

Reactions varied all over the spectrum. Non-developers (especially “industry analysts”) mostly had no idea what it meant: they said Apple had announced “nothing”. Almost all developers, however, were ecstatic — “the most significant event Apple ever staged”. Regarding Swift, this initial enthusiasm diverged as soon as people read the (relatively sparse) documentation and actually began to play around with the language — a very early beta version was available for download soon after the announcement. Hilarity, chaos and pandemonium ensued; tension, apprehension and dissension had begun.

As usual, almost everybody tried to project their grievances, expectations and experiences onto the new language. The open-source advocates griped that no source was available. The cross-platform advocates complained that there was no version running/compiling for Android (as if Apple would have any interest in promoting that!). The Objective-C programmers unsuccessfully tried to translate their code into Swift and complained that there was only limited dynamic dispatching and introspection. The C programmers complained that there were no preprocessor macros and that Swift seemed to be “Objective-C without the C”. The Haskell/Erlang/Scala programmers complained that many functional programming facilities were missing, and that the language was “too mutable”. The Java programmers complained that the language was “too C++-like”, but resented the lack of exceptions. The C++ programmers also resented the lack of exceptions and wanted std::somethings. The Type Theorists complained that generics were “not generic enough”. JavaScript programmers… well, you get the idea. Almost everybody complained about Array mutability semantics, about missing semicolons and the parsing of whitespace, and (of course) said that the syntax “looked weird”. Serious fights erupted on Twitter, disagreeing on whether Swift was a “modern” language and what Apple’s intentions were.

And, as always happens, many people said, in effect, “OMG Apple you’re soo stooopid WTF fix this now!”. This is the usual symptom of looking at the surface and not understanding what might be happening underneath.

Voluminous disclaimer and sidenote with historical digressions:

Many of the complaints in the paragraphs above are condensations of what I understood people to be saying and none are meant to be actual live quotes — which is why I didn’t link to any specific instance. I’m not interested in discussing most of these personally right now, thank you.

I’ve been programming since 1969, in C since 1984, and in Objective-C since 2000. I wrote only one application in C++ back in the Classic days — it was pretty much mandatory in the CodeWarrior/PowerPlant days. I did my CS degree in the early 1970s, when a “modern” language still meant ALGOL 68 – see the mind-boggling official reference (large PDF).

When BYTE Magazine’s special Smalltalk issue came out in 1981, I was very interested, but couldn’t come to grips with the weird syntax. I bought Adele Goldberg’s classic books about Smalltalk — the blue book (large PDF), the orange book (large PDF) and the green book (large PDF) — and periodically tried to understand them; very difficult without access to a working compiler! In the late 80’s I put these aside (and, unfortunately, lost them in a move). After Apple acquired NeXT in 1996, I became aware of Objective-C’s roots in Smalltalk, but didn’t give it much thought.

Around 2000, restarting my work as an indie developer, I started programming in Objective-C and Cocoa. As an experienced C programmer I had little difficulty with Objective-C, and quickly got used to the nested [[ ]]s. I never wrote a full Carbon app as such. I also never managed to acquire a working Smalltalk compiler, even after a few became available on the Mac. However, a couple of years ago, I found the Smalltalk books in PDF format (as linked above) and was astounded: the formerly opaque things about methods, messages, dynamic dispatching, objects and so forth — suddenly all was clear and obvious! That’s the advantage of using-while-learning, at least for me.

Unlike many colleagues I never hesitated to go beyond Cocoa, always using CoreFoundation, BSD/Darwin and a variety of interfaces according to necessity, and once manual memory management became ingrained, tossing objects and buffers back and forth between the various APIs. Except for short utilities for my own use, I haven’t adopted ARC yet — I found too many edge cases for my established programming habits.

So, back to Swift. It really appears to be a very pragmatic language. If you look at the generated library header (in Xcode, command-double-click on any Swift type to see it), nearly all operators and types are defined there, in often surprising detail. In other words, few language features are hard-wired into the parser/compiler – the Swift library/runtime and the pre-LLVM optimizer are, instead, responsible for the language and its implementation details, and are therefore more easily twiddled if necessary.
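
As a small illustration of that point, here’s a toy example (mine, not anything from the Swift library, and written in current Swift syntax rather than the 2014 beta’s): an operator can be declared and implemented entirely as library code, with nothing added to the compiler.

```swift
import Foundation

// A made-up exponentiation operator: both the declaration and the
// implementing function are ordinary library code; nothing is
// hard-wired into the parser.
infix operator ** : MultiplicationPrecedence

func ** (base: Double, exponent: Double) -> Double {
    return pow(base, exponent)
}

print(2.0 ** 10.0)   // 1024.0
```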

This is, of course, very convenient for Apple: a small team could tinker around with all aspects of Swift while leveraging most of the existing LLVM infrastructure and keeping up with the latest changes in iOS and OS X. Indeed, in retrospect, it appears that Swift was even driving many of those changes!

Let’s look at a brief timeline to explain what I mean:

  • 2000-2002: Chris Lattner’s master’s thesis on LLVM;
  • 2005: Lattner hired by Apple; Apple uses LLVM for the OpenGL shading language in Mac OS X 10.5;
  • 2006-2008: Apple introduces experimental llvm-gcc in Xcode 3.1; “blocks” and GCD appear;
  • 2009: Apple introduces Clang as an alternative for gcc; OpenCL and Clang static analyzer appear;
  • 2010: Lattner begins working on Swift; Clang fully supports C++ and llvm-gcc is the default compiler;
  • 2011: gcc/gdb are discarded, Clang/lldb are defaults, ARC introduced in Xcode 4.2;
  • 2012-2013: iOS/OS X are fully built with the new infrastructure, Objective-C literals in Xcode 4.4;
  • 2013: Lattner becomes head of the developer tools department;
  • 2014: Swift comes out in Xcode 6.0.

The LLVM team (Lattner, Evan Cheng, who is also at Apple, and Vikram Adve of UIUC) also received the 2012 ACM Software System Award; and of course, LLVM, Clang and LLDB are open-source projects being driven forward by many people who also deserve lots of credit.

Nevertheless, it’s tempting to see all this as Chris Lattner’s plan for world domination… just picture him stroking a white cat and going “mwahaha!” 🙂 [Update: Thanks to @darth for the illustration!]

But really, all this points to progress in Apple’s platforms being driven by a consistent plan to modernize and implement new technologies everywhere; even hardware was affected, as the Apple A6 CPU (and no doubt its successors) were designed in parallel with the corresponding LLVM code generator. Similarly, from 2009 forward, software advances like ARC, blocks, GCD, runtime modernizations etc. are now seen as preparing the ground for Swift at all levels.

Sidenote:

A few years ago I posted about Apple’s hardware options being enabled by LLVM, and with the A6 that has indeed begun to happen. Apple is now in a position to design its own CPUs and just has to write a new optimizer backend for each one — and it can switch architectures in new hardware without users, or even developers, noticing any significant change.

When I began studying programming languages and compilers, UNCOL was the holy grail of programming: a universal intermediate language to adapt any high-level language/compiler to any machine architecture. LLVM is arguably the first widely successful implementation of that idea.

What does all this mean for Swift? Contrary to what you may hear from some quarters, it’s not an amateurish, ham-fisted attempt at locking developers in to Apple’s “walled garden”. As Apple has said publicly, it’s a systems programming language that ties in to key Apple technologies. I don’t doubt that it’s already being deployed internally, and we can expect to see key low-level frameworks — Security, dyld and IOKit are candidates that come to mind — rewritten in Swift as soon as feasible. In the long run, the kernel itself, Core Foundation and others may follow suit; picture “SwiftKit” unifying much of AppKit and UIKit. Making Swift available to developers at this beta stage is good policy but probably not Apple’s primary focus.

But, you may ask, why not use C++ or the hybrid Objective-C++? Why not use a “modern” cross-platform language? What was wrong with Objective-C anyway?

Well, there’s a reason so many low-level frameworks are written in C++ or pure C: runtime speed. Objective-C’s dynamic dispatching has vastly improved over the years but is still a bottleneck, and in 95% of cases is not really necessary — we rarely use id, and strong typing is encouraged everywhere. As for pure C code, when you look at it, there’s always tons of crufty #defines, tricks to avoid C’s legacy problems, spinlocks and stack arrays and overflow checks and… so it’s no wonder Apple decided to start anew with a new language that avoids all of those problems and still interoperates with Cocoa etc. — all while the infrastructure’s being changed underneath.
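
A rough Swift sketch of the dispatch difference (my own toy example; actual costs depend on the optimizer): the `dynamic` method below goes through Objective-C message sending on every call, while the struct method can be dispatched statically and even inlined.

```swift
import Foundation

class MessageCounter: NSObject {
    var value = 0
    // Forced onto the Objective-C runtime: every call is an objc_msgSend.
    @objc dynamic func increment() { value += 1 }
}

struct DirectCounter {
    var value = 0
    // Statically dispatched; the optimizer is free to inline this.
    mutating func increment() { value += 1 }
}

let slow = MessageCounter()
var fast = DirectCounter()
for _ in 0..<1_000 {
    slow.increment()
    fast.increment()
}
print(slow.value, fast.value)   // 1000 1000
```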

So, why not C++? Lattner is a C++ wizard, right? All of Clang/LLVM is coded in C++. So is WebKit, Apple’s other major open-source success. I can’t see that changing, and their effort to fully support all of C++’s experimental future features argues that it won’t change. But C++ doesn’t look like a good match for internal Apple technologies like GCD and ARC, and the C++ Standards Committee is certainly not interested in adopting those. On the other hand, judiciously adopting certain things like generics, operator overloading and optimized dispatching is certainly a good thing. And last but not least, Apple now owns/controls the entire toolchain and the systems programming language!

More later; I’ve started to write an entire application in Swift and after that may feel qualified to comment on language details. For now, I’m quite happy with the prospects.

Re: iPad time


Interesting piece by Sean Heber aka BigZaphod.  Remember my post about the significance of Clang/LLVM on the iPad?

I think there’s a chance that Apple is slowly building Objective-C into a managed environment similar to Java/.NET. At some point in the future they could define an Objective-C HD (or whatever :P) that no longer maintains total compatibility with C. Since they use LLVM a lot now, they can even use that to analyze your code to make sure that pointer accesses are safe and controlled. Anything that isn’t safely confined to your own app would be an error. Access to the Objective-C runtime functions could possibly even be revoked. After which point, Objective-C HD no longer compiles to machine code but instead to an intermediate representation.

By doing something like this, they can abstract the actual underlying CPU hardware and architecture out of the applications themselves as well as maintain a truly safe sandbox where private and undocumented APIs simply will not be allowed to work. Apps on the App Store would be submitted in this intermediate format which they can translate into the machine code that’s native to whatever CPU happens to be in the device you’re downloading the app for or they could simply put a JIT in iPhoneOS (although there’s no reason to waste the CPU cycles on the device if they can translate them once on the backend – at least for mobile stuff).

Pretty much complementary to my reasoning.

Re: iPad time


richardl wrote:

Rainer, it is interesting to note that Adobe built their ActionScript/Flash compiler for iPhone on top of LLVM.

http://cs.illinois.edu/news/2009/Oct8-2

http://www.adobe.com/devnet/logged_in/abansod_iphone.html

Thanks, I wasn’t aware of that. However, I don’t think that this invalidates my original argument, which is more based on Clang than on LLVM. Christopher Lloyd also remarked on this:

…Yet Adobe uses llvm for the compiled Flash, so they don’t like Adobe’s parser? silly. http://bit.ly/MQtRZ

Also thanks to Daveed Vandevoorde, who remarked that the LLVM bytecode isn’t as platform-independent as I thought – things like sizes of variables and ABIs (Application Binary Interfaces) stand in the way. Still, it’s not impossible to see Apple defining a generic target platform, in terms of sizes of pointers, integers and so forth, and a specific endianness and runtime ABI, and then staying within the bytecode for that.
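
Swift didn’t exist when this exchange took place, but the point is easy to show in any compiled language; here’s a tiny illustration (my example) of the kind of target-specific detail that gets frozen into supposedly portable bytecode:

```swift
// Each of these is fixed at compile time for one particular target
// (integer width, pointer width, byte order), so the generated
// intermediate code already assumes a specific answer.
print(MemoryLayout<Int>.size)               // 8 on a 64-bit target, 4 on a 32-bit one
print(MemoryLayout<UnsafeRawPointer>.size)  // pointer width, likewise target-specific
print(0x0102.littleEndian == 0x0102)        // true only on little-endian targets
```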

Re: iPad time


Via John Gruber, I just saw an interesting post by Steve Cheney:

It’s pretty evident that Apple isn’t wed to individual suppliers. Not only are they back to creating their own chips, but they are also one of the only ‘compute’ companies to have used each of the top 3 processor architectures over time – ARM, x86, and Power PC.

…This week Apple confined developers to a specific set of tools (XCode [sic]).

…By telling developers to move to XCode tools, Apple is setting the stage to potentially switch architectures.

…In 2003, Apple advised developers to switch to XCode tools. …2 years later Apple moved to Intel across its entire Mac line.

…perhaps the A4 is NOT an ARM architecture. In fact, it’s highly possible that the A4 is a dual core Power Architecture…

The last sentence is of course false, as Gruber says; the A4 does run ARM code.

While I don’t think that forcing developers to switch to Xcode (which is the correct spelling) means that Apple will soon be switching architectures on their iPad/iPhone line, or on their desktop/laptop line for that matter, Xcode does offer developers a future-proof environment that hasn’t been commented on by other observers: the Clang/LLVM project. Briefly, Clang is a compiler frontend for C-based languages, backed by LLVM – which stands for “Low Level Virtual Machine”. Both projects are heavily supported and staffed by Apple.

Clang has been increasingly supported by Xcode as a substitute for the gcc compiler toolchain. The details are quite esoteric, but one interesting capability is that C-based languages – C, Objective-C and C++ – are compiled to the LLVM bytecode, which is then translated into native machine language by a back-end. The last phase could even happen in a just-in-time fashion, allowing apps to be distributed in LLVM code (therefore running on all current and future Apple machines). Some groups are even working on chips that execute LLVM bytecode directly.

In other words, it’s no coincidence that Apple is now instructing developers to switch to Clang-supported languages and their Clang-wrapping IDE (Xcode). There may not be an architecture switch coming soon, but Apple will have much more freedom in doing their own CPUs for iPad/iPhone, and more ammunition in negotiations with Intel and other top-end chip companies.

Re: WWDC 2009


The conference will be over tomorrow and I’m quite satisfied with the outcome. Now for some comments about the announcements and (NDA permitting) about what I learned.

I had some vague idea of going to some iPhone sessions and letting presenters (or friends developing for the iPhone) convince me that I should start developing for it. No such thing happened; session overlap was so severe, and there were so many labs to go to, and people to talk to, that I skipped any non-Mac session or discussion. What little I heard in the corridors confirmed my notion that the current state of iPhone development and the App Store deviates too far from my preferred position as a utility developer – a niche Apple still keeps closed on the iPhone. Maybe when the tablet comes out… 😐

Speaking of tablets, while everybody agrees that one is in the works, it seems to be a year or so away from announcement. (Ditto for the new CPUs I hinted at, in my last post.)

Snow Leopard is the small new thing. Small for the user in a sense; it’s just refinements and greater speed. For developers, though, it’s the BIG new thing. And, as variously described as early as a year ago (can’t find URLs right now), much of the new stuff is driven behind the scenes by open-source projects Apple is driving: the Clang compiler, the LLVM back end, and the technologies made possible by Grand Central Dispatch, blocks and OpenCL. So, most of the sessions either expanded on this directly or offhandedly mentioned “there’s an API for that now – and it’s fully XYZ-enabled” (insert one of the technologies above).

These things have become possible because CPU chips have run into a clock frequency “sound barrier”; 3 GHz is about the maximum current silicon can do without extensive and expensive cooling or exotic technology. So multicore has become the solution du jour: all Apple computers now have at least 2 cores, and the top machine has 8 (16 virtual). Expect that number to double every 2 years, at least.

But for years multi-processor machines were hard to program. About 14 years ago, at another WWDC, I bought a Genesis MP 528: this Mac clone had 4 PowerPC 604 processors running at a blistering 132 MHz. It didn’t have much caching on those chips, and only Photoshop and a few other specialized apps could see more than one CPU, and that only for image filters. Within two years the first PowerPC G3 CPU card, with a single processor but with caching, running at 300 MHz, had about the same Photoshop performance – and that performance was then available to all apps. So why didn’t more apps take advantage of the 4 CPUs? The classic Mac System 7 (through 9) had no easy way to allow for this; there was a very primitive multiprocessing API, but the system was pretty much locked out of it.

As said in the keynote, Snow Leopard will support only Intel Macs; PowerPC Macs are, therefore, stuck in the Leopard era, and only a few bug fixes will appear on 10.5; then it’s over. I couldn’t find hard figures comparing the installed base; I’ve seen percentages quoted of between 10 and 35% of Macs still in use being PowerPCs. I personally didn’t think this would dip below 25% before 2011; then again, as a stockholder, I’m glad Apple sold so many new Macs recently… 😉

Some people question why PowerPC users will be left out of the Snow Leopard advantages, and I think I know why. While the top 4-CPU PowerPC machines still can hold their own with more modern machines under certain circumstances, the vast majority of PowerPC Macs have only 1 CPU; only a few big desktops have 2, and even fewer have 4. Most advantages of Snow Leopard come into play when you have at least 2 CPU cores, and there’s serious testing and bug fixing to be done for supporting an entire architecture. Apple probably just weighed those factors (with better numbers than I have available) and decided it wouldn’t work out.

Positives of the new Clang/LLVM combo: better compiler speed and better code optimization – both still starting out, but they have more power in reserve, while the current gcc compiler and backends are pretty much maxed out; way better error messages; the Clang static analyzer is just awesome (a word I usually hesitate to use, but this really is!); lots of goodies to come from tighter integration with Xcode. Negatives: it may still generate wrong/inefficient code in some circumstances; no C++ support yet (I don’t care about this one myself).

A sleeper advantage is, also, that the intermediate (LLVM) bytecode generated by Clang could possibly be stored as such inside executables, and be just-in-time compiled for execution on any target CPU. In other words, the same executable could run on a new machine Apple puts out, even if it has a new CPU chip/architecture, as long as the JIT compiler is in place for that; application developers wouldn’t have to know (or care).

Regarding blocks (or “closures”, as they’re known in other places), they’re a syntactic convenience for programmers to pass executable code as data. As such, they make programs more readable. What makes them inordinately powerful in Snow Leopard is that they’re also the basic executable units for all of the cool new multiprocessing stuff in Grand Central Dispatch. Therefore, with a little discipline, it becomes easy for developers to chop up tasks into little slices that can be executed in parallel by however many CPU cores (or, with OpenCL, GPU units) are available to do them; and for the first time anywhere I know of, this facility is available throughout the system, even at a quite low level.
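
Here’s a small sketch of that idea, written with Swift closures and the current Dispatch API rather than the C blocks syntax this post is about; the structure is the same: each closure is an independent unit of work handed to GCD, which spreads the units over whatever cores are free.

```swift
import Dispatch

let queue = DispatchQueue.global(qos: .userInitiated)
let group = DispatchGroup()

for slice in 0..<4 {
    queue.async(group: group) {
        // Each closure is a self-contained slice of the task; GCD decides
        // which core runs it, and when.
        let partial = (slice * 250 ..< (slice + 1) * 250).reduce(0, +)
        print("slice \(slice): partial sum = \(partial)")
    }
}
group.wait()   // block until every slice has finished
```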

So, am I running off to convert all of my code to the new technologies? Well, yes and no. Many things still have to be done in a serial manner, and the system will do others in parallel behind my back. Also, it seems that writing a generic application that runs on both 10.5 and 10.6 (using the new stuff) is tricky; I’m still investigating how to best do it. Stay tuned for developments…
