Solipsism Gradient

Rainer Brockerhoff’s blog


State of the iPhone


So, half a dozen software packages are now out there that unlock the iPhone in various ways. In the simpler case, they allow the installation of various third-party applications and/or twiddling with details. In the more complex case, they mess around with the various phone/SIM settings to allow the iPhone to be used with other providers’ SIM cards.

As I wrote before, Apple has apparently allowed this to happen by not implementing strict security measures. Now that the various unlocking techniques have stabilized, Apple has announced that an upcoming software update might cause “modified” iPhones to become “permanently inoperable”. Just a few days later, the iPhone update to version 1.1.1 came out; it featured the same warning in bold on its installation screen; and it did, indeed, cause some modified iPhones to lock up – the new vernacular is “bricked”, which I think is somewhat of an exaggeration. Furthermore, the new software seems as tamper-resistant as the iPod touch software, indicating that Apple has checked out current unlocking techniques and implemented harder locks.

So far, all that was to be expected. What was, to me, unexpected was the reaction of some sectors of the press and of some users – mostly the same people who opposed the iPhone price cut, it seems.

Legally, it seems Apple is in the clear. The warranty and license agreement clearly say that any such tampering is at the phone owner’s risk. Surprisingly, some people seem to feel “entitled” to get warranty support even if they completely disobeyed the license! (Just as they felt “entitled” to have the price kept constant for a long period after they bought it, I suppose.)

The core of the argument seems to be “I paid for the machine, therefore I have the right to do whatever I want with it…” (I completely agree so far) “…and Apple has the obligation to give me full support, warranty and updates no matter how I mess with it!” Now here is where we part company. Sure, I suppose current consumer protection legislation may sometimes be interpreted that way (note I’m not a lawyer and less familiar with US legislation than with the Brazilian one); but you surely can’t pretend that Apple is a public utility or a non-profit charity.

Even from the technical standpoint, these expectations are unreasonable; allow me to explain this in more detail. The problem is one of “state”, in this case defined as “a unique configuration of information in a program or machine”.

In the first computers, the state of the computer was completely predictable when it was turned on: if it had Core memory, it was in essentially the same state it had when it was last turned off, and if the computer had reasonable power supply sequencing, you could just press the start button and continue. For more complex machines this was too hard to do, and the manufacturers declared that the machine was in an undefined/unreliable state after power-on, and that therefore you had to reload the software. For newer machines with semiconductor memory, everything was of course lost during power-off, and software reloading became equally necessary. To do so, you had to enter a short program in machine language using the front-panel toggle switches; this program, in turn, would read the actual software you wanted to run from a peripheral. This was the direct consequence of the machine coming up in a “null state”.

It wasn’t long before people thought of several ingenious ways to make this process more convenient. On the IBM1130, for instance, the hardware was set to read a special card from the punched-card reader, interpret the 80 columns (12 holes each) as 80 16-bit instructions, and execute them. The most commonly used of these cards simply repeated the process with the built-in cartridge disk drive, reading the first sector on the cartridge and executing it. Later on, the falling cost of ROM led to the boot software simply being built into the machine – the Apple II had a complete BASIC interpreter built-in, for instance. The apex of this evolution was the original Mac 128, where most of the system software was in the boot ROM – the system disk simply contained additions and patches. (The QI900 microcomputer I helped design in the ’80s had all system software, with windowing, multitasking and debugging, built into its ROM.) Here we had a well-defined “state” when the machine came up – it would execute a well-known program, and do predictable things, until external software came into play.
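That IBM 1130 boot card is a nice illustration of the two-stage pattern: a tiny hardwired loader that only knows how to fetch and run the next stage, which in turn loads the real software. Here is a toy sketch of the idea (all names invented, of course – the real thing was 80 columns of 16-bit machine code):

```python
# Toy sketch of a two-stage bootstrap. Stage 0 is "hardwired" and only
# knows how to read one block from a device and hand control to it;
# that block (stage 1) knows the real layout and loads the actual system.

def stage0_boot(device):
    """Hardwired loader: read the first block and 'execute' it."""
    first_block = device.read_block(0)   # e.g. the boot card / first sector
    return first_block(device)           # hand control to stage 1

class Disk:
    """Toy device: block 0 holds the stage-1 loader, block 1 the OS."""
    def __init__(self):
        self.blocks = {0: self._stage1, 1: "operating system image"}

    def read_block(self, n):
        return self.blocks[n]

    def _stage1(self, device):
        # Stage 1 knows where the real software lives and loads it.
        return device.read_block(1)

print(stage0_boot(Disk()))   # -> operating system image
```

The point of the pattern is that stage 0 can stay tiny and fixed forever, while everything it loads can change freely.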

In the ’90s the limitations of this became apparent. OSes grew to a size beyond what could be stored in ROM, and no single Boot ROM could do justice to all models and peripherals (*cough* BIOS). Flash memory came up, the built-in software was renamed to “firmware”, and updates to that became commonplace. It was easy to “brick” a system if power went out or if you otherwise interrupted a firmware update before it was complete. In that event, a motherboard swap was usually the only solution, because the interruption left the firmware in a partial, nonworking “state”.

Consider now the iPhone. Its entire system (OS X 1.x) is built into firmware, mostly in a compressed state. This is expanded and run by the main ARM processor, obeying a built-in boot ROM. Supposedly, there are at least two more processors, taking care of network communications and of the cellular radio; each of these has its own boot ROM, and the radio processor has separate flash memory to hold state information regarding the SIM card, cellular system activation and so forth. One of these processors no doubt controls the USB interface to allow the main processor’s flash memory to be reloaded externally. Furthermore, every SIM card also has flash memory on it, containing the IMSI number, network identity, encryption keys and so forth, bringing one more source of complexity to the process.

In other words, you have a complex system of at least 3 processors interacting, each one with a boot ROM, two with flash memory containing state information. Powering up such a beast is a complex dance of each one waking up, testing its peripherals, checking its own state, then trying to talk to the others, then communicating to bring the entire system into a working state. Furthermore, the necessities of the cellphone system and of testing out such a complex piece of hardware mean that the iPhone must decide, on each power-up, which of several states it’s in: factory testing, just out of the box, activated, reloading the main firmware, working, “plane” mode, and so forth. This is usually done by writing special values to reserved sections of the various flash memories, and by making sure they are always consistent with each other by checksumming and other technical arcana. Should they be found inconsistent, the system will probably try to regress to a simpler state and start over there, in the extreme throwing up its metaphorical hands and pleading to be returned to the factory. Ideally, firmware writers strive to make it impossible to “brick”, unless an actual hardware defect occurs, of course; in practice, it’s rarely possible to envision all possible combinations of what could happen, and too few designers assume a malicious agency is trying to trip them up at all times.
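To make the “reserved flash values plus checksums” idea concrete, here’s a toy model of that power-up decision: a state name guarded by a checksum, with a regression to the simplest state when things don’t check out. Everything here – the names, the deliberately naive checksum – is invented for illustration, not Apple’s actual scheme:

```python
# Toy model of power-up state selection: a reserved flash region holds
# a state name plus a checksum; if they're inconsistent, the device
# regresses to the simplest, safest state.

STATES = ["factory_test", "out_of_box", "activated", "working"]  # simple -> rich

def checksum(data: bytes) -> int:
    return sum(data) & 0xFF        # deliberately naive

def write_state(flash: dict, state: str):
    payload = state.encode()
    flash["state"] = payload
    flash["sum"] = checksum(payload)

def power_up(flash: dict) -> str:
    payload = flash.get("state", b"")
    if flash.get("sum") == checksum(payload) and payload.decode() in STATES:
        return payload.decode()    # consistent: resume where we were
    return STATES[0]               # inconsistent: back to factory mode

flash = {}
write_state(flash, "activated")
print(power_up(flash))             # -> activated

flash["state"] = b"hacked"         # someone flipped bytes behind our back...
print(power_up(flash))             # checksum mismatch -> factory_test
```

A real implementation would use stronger integrity checks and many more reserved regions, but the regress-on-inconsistency logic is the same.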

So, what do these various hacks do to unlock the iPhone? They rely upon bugs in the communications software, firstly, to make the system fall back into a state where it pleads for an external agency to reload its main firmware; cleverly substituted instructions then make it do new things. After several, progressively more complex, phases of this, new applications can be installed. Up to this point, only the main flash memory has been affected and installing a new software update will just bring the system back to the standard state. Now, one of the new applications may try to mess with the radio firmware; it will clear or set regions of it to bring the radio processor’s state out of step with reality, or even write bogus activation data into it.

Now, of course, the system’s state has been moved completely out of the state space envisioned by its designers. When it powers up, the state is sufficiently consistent – the various checksums check out OK, for instance – for the various processors to confidently start working. However, a few actual values are different from the intended ones – enough to let a different SIM card work, say. Now, if the hackers had the actual source code and documentation available, all this could be done in a reliable way. But this not being the case, they had to work by testing changes in various places and observing what happened, clearly not an optimal process.

Consider, now, the software update process. It assumes that the iPhone’s various processors and firmware(s) are in one of the known states – indeed, this is required for the complex cooperation needed for uploading new software. If this cooperation is disrupted, the update may not begin – leading to an error message – or, worse, it may begin but not conclude properly. At this point, one or more of the iPhone’s processors may try to enter a recovery routine, either wiping the flash memories or reinitializing them to a known state. No doubt this will be successful in most cases, and the new update will then be installable on a second attempt. However, the recovery may fail – since the exact circumstances couldn’t be foreseen – or it may assume false preconditions (like a valid AT&T SIM card being present). The system will probably try to recover at successively lower states until falling back to the “can’t think of anything more, take me back to the factory” mode; or it may even lock up and “brick”.
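That staged recovery can be pictured as a loop over successively simpler states, “bricking” only when even the lowest rung fails – again a hypothetical sketch of the general logic, not Apple’s actual code:

```python
# Sketch of staged recovery: try to (re)initialize at each state, from
# richest to simplest; if even the simplest state's preconditions fail,
# the device is effectively "bricked".

RECOVERY_ORDER = ["working", "activated", "out_of_box", "restore_mode"]

def attempt_recovery(can_recover_at) -> str:
    """can_recover_at: predicate saying whether a given state's
    preconditions (valid SIM, intact flash, ...) actually hold."""
    for state in RECOVERY_ORDER:
        if can_recover_at(state):
            return state
    return "bricked"

# An unlocked phone with bogus activation data can't come up as
# "working" or "activated", but restore mode may still succeed:
print(attempt_recovery(lambda s: s == "restore_mode"))   # -> restore_mode

# If the recovery code assumed a false precondition at every rung:
print(attempt_recovery(lambda s: False))                 # -> bricked
```

The fragility is in the predicates: each rung was written assuming the state space the designers envisioned, and a hacked phone may violate those assumptions at every level.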

Should Apple’s firmware programmers have tried to prevent this from happening? Well, up to a point they certainly did, as many problems other than hackers can cause such errors – electrical noise, badly seated or marginally defective SIM card, low battery, for instance. The system has to fail gracefully. However, it’s certainly not reasonable to expect them to specifically recognize and work around (or even tolerate!) the various hacks; after all Apple’s contract with AT&T certainly requires them to evidence due diligence in preventing that.

Firmware for such a complex system evolves continuously. The new 1.1.1 iPhone software seems to do many things differently from the original version, even though much of the UI is the same; same goes for the iPod touch software. Neither has been hacked as I write this. Did they now put TrustZone into operation? No idea; time will tell. My hunch is that Apple will eventually come out with an SDK for third-party applications sometime; the question is when. Perhaps after Leopard, perhaps at the 2008 WWDC. Does Apple need AT&T, or any partner carrier, at all? Maybe for now they do, and the unlocking wrangle will continue. In the long run, Apple will be better off with a universal phone that will work anywhere; possibly we’ll have to wait for the current generation of carriers to die before this happens. Interesting times.

Oldie but goodie


It’s been about 38 years since I first saw a somewhat simpler version of this:

[Tree swing cartoon] Thanks to Dark Roasted Blend for reminding me.

Update: many more details about the drawing. Its origins still appear to be obscure.

Re: Flipr out


Some time ago I published Flipr source code:
Rainer Brockerhoff wrote:

…a category on NSWindow to flip from some window to another window.

I’m not sure how many people adopted it, but the nice Karelia folks are using it in the upcoming iMedia browser. A few days ago they asked me to look into a “hesitation” effect which could be seen in the first frames of the animation in certain circumstances, and it’s now fixed… so if you used it somewhere, download it again (and tell me).

Nearly a month ago I wrote:
Rainer Brockerhoff wrote:

So, I’m 100% sure nobody will be able to unlock the iPhone or run third-party applications on it unless Apple opens it up. Here’s why: ARM’s TrustZone

It’s hard to believe Apple didn’t want to take advantage of TrustZone at all, unless the intention was to publish a complete SDK later. Or perhaps only parts of the hardware are protected; the radio and the camera are possibilities.

A SIM hardware unlock hack was published a few days ago, and today Engadget wrote about two software unlocks. There’s no real confirmation on these yet but I no longer doubt it’s possible. I gather that people managed to write software to clear certain parts of the firmware flash RAM.

To me, this shows conclusively that Apple elected not to use TrustZone at all – just as they, in the past, elected not to use the TPM chips on the first Intel Mac motherboards to lock down Mac OS X to Apple machines. About the latter question, of course we’ll have to wait until the Leopard GM release comes out to be absolutely sure, but I haven’t heard anything about Leopard breaking new ground regarding such protection. On the other hand, while there are groups of people still busily adapting every new Mac OS X release to run on “generic” PCs, they still seem to be very much in the minority – and for a reason. Normal users want support and Apple hardware quality without having to do complicated hacking and installing.

Coming back to the iPhone, on reflection it makes some sense for Apple not to do an unbreakable protection. Under the current situation, every iPhone software update is a single package; I understand that all apps are updated at the same time and everything except the user’s data is wiped and reset. This allows Apple to ensure that all versions of official software mesh with each other and also gives them the freedom to radically change the system, if necessary, without anybody noticing. Also, this means that the first item in any support procedure will be a reinstall, meaning Apple doesn’t need to worry about what the user may have installed; anyone who hacked their phone will just have to re-hack after each update.

I’ve also heard from people who know people close to the iPhone team that all these efforts are closely watched. No doubt Apple saves some time and money, even if indirectly, by the current situation; it would make securing some future version easier should they deem it necessary.

I also think that not having an iPhone SDK available immediately will have been good in the long term. It’s helping Safari gain browser share and mindshare, and it’s allowing the iPhone’s OS X and built-in applications to become more fully debugged without Apple having to worry about keeping legacy APIs around for prematurely released 3rd-party applications. Yes, those apps will be released with a larger delay than people expected, but they’ll rest on a better foundation. With the hackers’ development toolchain becoming more polished there are now some 3rd-party GUI apps being released, and of course Apple will be adopting some ideas for its own apps and SDK (even in the negative sense of making sure they’ll do some things differently).

From what I’m seeing, AT&T will be the loser in this situation. Apple will sell some more iPhones – probably not in statistically significant numbers at first – but AT&T will lose some contracts. Apple can demonstrate that they made a reasonable effort to prevent that, and it may not even be illegal for someone to unlock their own phone (it’s probably illegal to set up a business unlocking other people’s phones, though). So, AT&T will lose some business to other carriers, as they do with other phones.

I usually don’t believe in sinister Apple agendas, but this may qualify… :-)

So, I’m 100% sure nobody will be able to unlock the iPhone or run third-party applications on it unless Apple opens it up. Here’s why: ARM’s TrustZone. Ehrm, make that 90%. I mean, it’s still quite unlikely. Well, OK, they can hack the serial interface in the connector but that can’t write to the screen. Well, let’s say 50-50. Of course, they can run stuff but not touch the network interface – OK, it seems they can. But never run a GUI app! Oh, they can now? But aren’t the binaries signed? No. Heh…

That’s about how I felt while writing an article for MAC+ (the upcoming print issue, which went to the printer a few days ago), around the “but never run a GUI app” phase. Well, today I see they (“they” don’t want people to link to their wiki, but it’s easy to find on Google) succeeded in building a standard GUI app and displaying a screen on the iPhone. Must be Clarke’s Law in action – even though I’m not that elderly, hmpfh. Writing about moving targets is hard.

So what’s left? Of course I don’t have an iPhone myself here and I don’t have any privileged info on its architecture. I did hear over the grapevine that the Apple iPhone team is following these issues with great interest and is working on updates – whether they’ll make a point of plugging these hacks is anybody’s guess. At the time I’m typing this, accessing the cellphone radio and unlocking the SIM card mechanism are still not possible.

Does that mean Apple didn’t bother to implement the TrustZone technology? I still maintain it’s impossible to crack from outside using present technology. The firmware is contained on the CPU chip itself, the implementor can restrict access to certain peripherals, decryption can happen entirely within the trusted zone, and the firmware can elect to run only signed binaries. There are some 1024-bit RSA keys in the iPhone which supposedly are still a few years away from being cracked, and in any event could be switched to 2048 or 4096. The barrier is even stronger than it was on the first Intel Macs, which had a TPM chip onboard (the last versions don’t and it seems Apple never used them) but separate from the CPU.
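The signed-binary idea itself is simple, even if the cryptography behind it isn’t: boot code refuses to run any image whose signature doesn’t verify against a key baked into the ROM. Here’s a sketch with a plain keyed hash standing in for the real RSA signature math (the key and image contents are invented for illustration):

```python
import hashlib

# Sketch of signed-binary enforcement. A real implementation verifies
# an RSA signature against a public key baked into the boot ROM; here
# a keyed SHA-256 hash stands in for the actual signature math.

BAKED_IN_KEY = b"hypothetical-factory-key"

def sign(image: bytes, key: bytes) -> bytes:
    return hashlib.sha256(key + image).digest()

def boot(image: bytes, signature: bytes) -> str:
    if sign(image, BAKED_IN_KEY) != signature:
        return "refused: bad signature"
    return "running"

firmware = b"OS X 1.x image"
sig = sign(firmware, BAKED_IN_KEY)          # produced at the factory

print(boot(firmware, sig))                  # -> running
print(boot(firmware + b" patched", sig))    # -> refused: bad signature
```

Changing even one byte of the image invalidates the signature, which is why key length (1024 vs. 2048 or 4096 bits in the RSA case) is the only practical attack surface left.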

It’s hard to believe Apple didn’t want to take advantage of TrustZone at all, unless the intention was to publish a complete SDK later. Or perhaps only parts of the hardware are protected; the radio and the camera are possibilities. For sure they didn’t implement the usual Unix protection, where the root account can do everything; all processes on the iPhone run as root anyway. Looking at the current iPhone libraries there’s a “lockdown” library which most applications link against. It seems to check the aforementioned keys and confer privileges to access some likely-sounding sectors of the system. Having a non-standard security system is obviously an attempt to throw off people who expect 99% of the cracking to involve getting root privileges. I don’t have the tools to ascertain whether the lockdown library does in fact invoke TrustZone at a lower level, and much of this may change anyway for the next software update.

Speaking of which, from what we can see of the iPhone software the update process will involve a complete replacement – no partial updates here. My guess is that updating will also be mandatory, with iTunes updates being published simultaneously. Replacing all software at once of course makes sure that everything works together, but it would also allow Apple to change everything at once. We’ll know in a few months, I’d say.

One of the salient points repeated at the WWDC keynote was Leopard‘s support for “64 bits top to bottom“. However, a close peek at the slide shown this year showed a subtle difference to last year’s – the word “Carbon” was missing. Of course a storm of confusion soon ensued, with the usual wailing and gnashing of teeth from some quarters and polite shrugging from others. Apple stock fell and rose again, some developers professed bliss while others threatened to leave the platform, non-developers wrote learned analyses about obscure technical points, not to speak of reports of raining frogs or even an unconfirmed Elvis sighting in a Moscone restroom. Allow me to try to explain all (well, Elvis excepted).

First of all, there are a few implications in moving an operating system to 64 bits. I hear that Windows Vista comes in distinct 32-bit and 64-bit versions and that the latter is able to run 32-bit applications (with some restrictions) inside a compatibility box. In contrast, Leopard uses Apple’s experience with architectural migrations to support 32 and 64 bit applications natively on both PowerPC and x86 architectures – not so easy in the second case, but necessary since nearly all currently shipping Macs use Intel’s Core 2 Duo, which is 64-bit capable.

For this, Apple took advantage of Mach-O’s support for “fat binaries” – in this instance called “obese”. Obese binaries contain four different executables: PowerPC 32, PowerPC 64, x86 32 and x86 64. When running one of these applications, the system selects the best supported architecture and links the application to the corresponding (and equally obese) system libraries.
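Slice selection in such an obese binary amounts to matching the host against the embedded architecture list and preferring the richest slice it can run. A toy version of that decision (real Mach-O fat headers use cputype/cpusubtype codes, not strings like these):

```python
# Toy model of slice selection in a four-way "obese" binary: the loader
# prefers the richest slice the host can execute, falling back to the
# 32-bit slice for the same architecture.

FAT_SLICES = {"ppc_32", "ppc_64", "x86_32", "x86_64"}

def best_slice(host_arch: str, host_is_64bit: bool) -> str:
    preferred = f"{host_arch}_64" if host_is_64bit else f"{host_arch}_32"
    if preferred in FAT_SLICES:
        return preferred
    return f"{host_arch}_32"   # every architecture at least has a 32-bit slice

print(best_slice("x86", True))    # Core 2 Duo -> x86_64
print(best_slice("ppc", False))   # G4 -> ppc_32
```

The system libraries get the same treatment, which is why the loader can always link an application against libraries of the matching architecture.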

Enter the Carbon vs. Cocoa question. Cocoa APIs are derived from NeXT’s software and are called, usually, from Objective-C. Carbon APIs, callable indistinctly from C, C++ or Objective-C, were first introduced in Mac OS 8.5 or thereabouts and were, themselves, a much-needed simplification of the “Classic” Mac APIs. Carbon was thereafter positioned as the way to port existing applications to Mac OS X, while Cocoa was supposed to be the right way to write new applications for the new system. No doubt the old NeXTies inside Apple pressed for Carbon to be excluded from the start, but Microsoft, Adobe and Macromedia (to quote just the big companies) didn’t want to recode everything on short notice.

A necessary sidenote: the exact definition of “Carbon” is surprisingly hard to pin down, even among experienced developers. Here’s my own (although I’ve never written a Carbon app myself). There are Carbon APIs and Carbon applications. A Carbon application, for me, uses the Carbon Event Model – calling Carbon APIs to get events from the system. Until recently, a Carbon application would also, necessarily, use Carbon windows and the GUI widgets for those, mostly contained in the HIToolbox framework. Starting with Tiger it’s possible for Carbon applications to use Cocoa windows containing Cocoa GUI widgets, with some contortions of course. Other Carbon APIs – like the File Manager, or QuickTime – can be called indistinctly from Carbon or Cocoa applications.

Here’s where things started going awry, from the standpoint of established or multiplatform developers. Apple has always been of several minds about Carbon policy – it was often dismissed as a temporary “transition” technology, while people who interfaced with those developers had to reassure them that Carbon was not going away anytime soon and was not a second-class citizen. Porting software from the Classic Mac OS to Carbon wasn’t always easy; some larger applications took over a year. At the same time, it was seen as being much easier than tossing the whole codebase and recoding in Objective-C/Cocoa.

Now, a few years after Mac OS X was introduced, Microsoft, Adobe and so forth had a substantial investment in maintaining parallel codebases for their Carbon applications and, understandably, began dragging their feet about converting to Cocoa anytime soon, or even at all. Due to pressure from these developers the Carbon GUI APIs began to incorporate new elements present only in Cocoa until then, and to all appearances Carbon and Cocoa were now positioned as equal and parallel APIs. In secret, of course, Apple hoped that “those people” would sooner or later see the light and begin doing their next x.0 version in Cocoa. In turn, “those people” harbored serious doubts about Objective-C (seeing it as a dead language with an unreadable syntax) and secretly hoped Apple would “recode Cocoa in C++”. Here’s a significant e-mail from an Apple engineer to the carbon-dev list:

No one reading this list should be under any illusions about Apple’s use of Objective C. Apple really likes Objective C. There are a lot of third-party developers who are using Objective C to program for Mac OS X and who really like it. Apple is not going to stop using Objective C. I’m not making a value judgement here, just stating a simple reality that everyone needs to understand. Do not think that someday Apple will “wake up” and realize that it would be better to recast all of our APIs in C++. That’s not going to happen.

So then came the PowerPC/Intel transition. Cocoa developers already were using Xcode, while many Carbon developers still were using the defunct Metrowerks CodeWarrior; transitioning large codebases to Xcode proved to be cumbersome. Still, people threw in more person-years to bring their apps up to the new standard. Then, at last year’s WWDC, Apple announced the migration to 64 bits, taking the opportunity to remove all legacy, obsolete or deprecated APIs from the new frameworks. Some Cocoa APIs were removed but, again, Carbon developers had more work to do. So once again, more person-years of work were invested.

It now seems that someone in Apple engineering management decided that they couldn’t afford to keep supporting two separate-but-equal APIs anymore, and the “transition” policy was revived regarding 64-bit Carbon applications. From what transpired during WWDC I deduce that some more of the Carbon APIs were taken off the “supported for 64-bit” list, most notably the part of the HIToolbox that concerns Carbon windows and GUI widgets. Therefore, 64-bit Carbon applications would seem to be either not supported at all, or supported only in a transition mode that used Cocoa windows and GUI widgets.

Naturally, Carbon developers were very bitter about this, while some Cocoa developers were asking if their 64-bit Cocoa apps would be able to call normal Carbon APIs (the answer is yes). So far, the most complete explanation I could find is this one (from the same engineer):

Fundamentally, Apple engineering is focused on Cocoa much more than Carbon, and Apple’s engineering management made the decision to un-support 64-bit Carbon to emphasize that fact.

So there you have it. Summary: 32-bit Carbon stays where it is and works fine until further notice – I don’t think it’ll be “deprecated” any time soon; not until Mac OS 10.6 (LOLCAT, or whatever they’ll call it) comes out, which may take 3-4 years at least, and probably not even then. The Leopard Finder itself is still a 32-bit Carbon application! But 64-bit pure-Carbon apps may be unsupported, or even not run properly, when Leopard comes out in October. Cocoa isn’t going away, and is the future. Has been the future since Mac OS X 10.0 came out, in fact. On the other hand, there’s a migration path – use the Cocoa GUI, then later convert to a Cocoa app. People who have invested a lot of time in Carbon feel really bad about this, and I agree Apple mishandled this badly from a PR standpoint. On the other hand, investing a lot of time in Carbon is now revealed to have been a throw-good-money-after-bad move; some people say “I told you so”.

The final question is, how come neither Microsoft nor Adobe are screaming their heads off about this? While I was wondering about this, I realized that, for normal Mac users, Microsoft Office really doesn’t handle data sets big enough to need 64 bits; they can stay on 32 bit as long as it exists. As for Adobe, at first glance, Photoshop at the very least is just begging for 64 bits… really? Here’s what one Adobe engineer says:

I could have spent this whole cycle moving us to 64 bit rather than working on startup time, but would that give you more of what you want? Add 20 seconds to the startup time you are seeing for the beta for all versions/platforms of Photoshop and compare the value of that version to one where the histogram would be 10% faster on 64 bit machines (and most of the rest of Photoshop being 5% slower). It is true, there are some things, like histogram, that would be 10% faster, I wrote the code to verify this. But, the rest of the product would have been slower without a few people spending the whole cycle going over all of the slow parts and bringing them back to where they were on 32 bit. Most operations on a 64 bit application like Photoshop are actually slower by a small amount if time isn’t spent optimizing them.

Read the excellent comments on that post, especially the more recent ones, for much more discussion of the details on the Photoshop side – I suppose many of those would apply equally to other large Adobe/Macromedia apps.

So there you have it: the big guys don’t need to move up for now. The small guys are mostly in Cocoa already. Unfortunately, the intermediate cases have fallen through the cracks for now – think multiplatform CAD software, for instance. It’d be very sad to see them leave the platform in a huff over this; I sincerely hope Apple will contact all of them privately and smooth things over for now, somehow, though I can’t really imagine how. Maybe they’ll even re-add support in October, now that the point has been made.

Update: fixed a misconception about the PowerPC->Intel migration, see explanation above.

One significant announcement (some say the only one) at WWDC is the Safari 3 Beta, which includes Safari for Windows; at least it’s the one I’ve seen the most varied interpretations of, so far.

Considered as a beta release, Safari 3 is so-so. The Mac version needs a reboot because it also replaces the Dock (which runs Dashboard widgets) and the system-wide WebKit. It also replaces the standard Safari installation. I had to reinstall Flash afterwards to get some sites to work. For my usual sites, it performed quite well although I had one crash. I don’t have an XP machine to test the Windows version on, but I hear it’s unusable on non-English versions, and very flaky on most English systems as well.

Steve Jobs stated the primary intention was to widen Safari’s marketshare, and the demo concentrated on a supposed serious speed advantage on Windows – “more than twice as fast as Internet Explorer”. And then, in the “one last thing” section, he referred to a “very sweet solution” for developing apps for the iPhone: the full Safari engine, with no SDK needed, allows Web 2.0/AJAX applications. (The entire section was received with silence by the crowd.) Steve’s statement that this is “a very modern way to build applications” somewhat contradicts what he said at D5:

…I love Google Maps, use it on my computer, you know, in a browser. But when we were doing the iPhone, we thought, wouldn’t it be great to have maps on the iPhone? And so we called up Google and they’d done a few client apps in Java on some phones and they had an API that we worked with them a little on. And we ended up writing a client app for those APIs. They would provide the back-end service. And the app we were able to write, since we’re pretty reasonable at writing apps, blows away any Google Maps client. Just blows it away. Same set of data coming off the server, but the experience you have using it is unbelievable.

And you can’t do that stuff in a browser.

So people are figuring out how to do more in a browser, how to get a persistent state of things when you’re disconnected from a browser, how do you actually run apps locally using, you know, apps written in those technologies so they can be pretty transparent, whether you’re connected or not.

But it’s happening fairly slowly and there’s still a lot you can do with a rich client environment.

So here we have at least two apparent intentions: get more penetration in the global browser “market” (maybe “mindshare” would be a better term as they’re nearly all free for the end-user), and open up iPhone development for Windows owners. Both sound logical.

More market penetration would surely be good for Apple. As John Gruber notes, Apple gets income from the Google search bar – tens of millions of dollars per year isn’t bad. And having Safari available on Windows removes one lame excuse for webmasters that build sites that don’t render properly (or at all) on Safari; it’s no longer necessary to own a Mac for checking that out.

Speaking of rendering properly, Safari for Windows, or rather WebKit, includes the Lucida fonts and several low-level frameworks, among them CoreGraphics, ColorSync, ImageIO and CoreFoundation. Some people believe this is a first step towards reviving the Yellow Box for Windows idea, but Cocoa is much larger than that… Safari is just a relatively thin shell around WebKit, and the Windows version shows no signs of being written in Objective-C, for one. Of course many people are once more complaining that Safari for Windows renders fonts differently. Joel Spolsky explains:

Apple and Microsoft have always disagreed in how to display fonts on computer displays. Today, both companies are using sub-pixel rendering to coax sharper-looking fonts out of typical low resolution screens. Where they differ is in philosophy.

– Apple generally believes that the goal of the algorithm should be to preserve the design of the typeface as much as possible, even at the cost of a little bit of blurriness.

– Microsoft generally believes that the shape of each letter should be hammered into pixel boundaries to prevent blur and improve readability, even at the cost of not being true to the typeface.
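Both philosophies build on the same underlying trick: an LCD pixel’s red, green and blue stripes sit at slightly different horizontal positions, so a glyph rasterized at three times the horizontal resolution can drive each stripe separately. A toy sketch of the packing step, under simplifying assumptions (black text on a white background, invented coverage values, and none of the color filtering real renderers apply to suppress fringes):

```python
def subpixel_pack(coverage):
    """Pack 3x-horizontal-resolution coverage samples into RGB pixels.

    coverage: floats in [0, 1], length a multiple of 3; 1.0 means the
    sample is fully covered by glyph ink. For black text on white,
    a covered sample darkens its color stripe.
    Returns one (r, g, b) tuple per output pixel, channels 0-255.
    """
    pixels = []
    for i in range(0, len(coverage), 3):
        r, g, b = (round(255 * (1 - c)) for c in coverage[i:i + 3])
        pixels.append((r, g, b))
    return pixels

# A glyph edge crossing mid-pixel: the second pixel darkens only its
# red stripe, placing the edge at one-third-pixel precision.
samples = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
print(subpixel_pack(samples))  # [(0, 0, 0), (0, 255, 255)]
```

Where the two camps diverge is in what happens before this step: whether glyph outlines are snapped to the pixel grid first (Microsoft) or rasterized at their true positions (Apple).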

I’ve talked to several people about this issue. Beyond the expected bias of familiarity – everyone is used to their main working platform and finds the other’s rendering strange – I found that most graphic artists and font designers prefer the Mac rendering, while most web designers and IT people seem to prefer the Windows rendering.

But beyond that, the fact that Safari for Windows tries to reproduce the Mac rendering exactly is important (and not a bug, as many Windows users are claiming). I’ve seen this myself on my site; tweaking font size etc. so the page looks good on the Mac often produces a quite different layout when you view it in a Windows browser, and it’s impossible to get it to look exactly the same, down to line breaks and text heights. This is doubly important when you’re viewing the page on a small screen like the iPhone’s. Zooming the page display as the iPhone does seems to mandate the Apple rendering engine: Windows’ pixel alignment is counterproductive there.
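The layout drift is easy to simulate: summing fractional glyph advances (Apple-style) versus advances rounded to whole pixels (Windows-style) gives different line widths, so line breaks fall in different places. A minimal sketch, with made-up advance widths rather than real font metrics:

```python
def line_width(advances, snap):
    """Total width of a run of glyphs, in pixels.

    snap=False keeps fractional glyph positions (Apple-style rendering);
    snap=True rounds each advance to a whole pixel (Windows-style).
    """
    return sum(round(a) if snap else a for a in advances)

# A run of 40 glyphs, each nominally 6.4 px wide.
advances = [6.4] * 40

fractional = line_width(advances, snap=False)  # ~256 px
snapped = line_width(advances, snap=True)      # round(6.4) = 6, so 240 px

print(fractional, snapped)
```

A 16-pixel difference over a single line is more than two glyphs’ worth, so text that fits on one line in one rendering wraps in the other – which is why pixel-identical cross-platform layout is a lost cause.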

Coming back to the “zero-cost iPhone (non)SDK” idea. Reactions in the developer community seem fairly mixed. At WWDC itself, of course, most developers aren’t web app developers, but were looking forward to doing Cocoa on the iPhone. And of course that implies that everybody thought that, when Apple came out with an iPhone SDK (or even a generic OS X SDK, as I thought before the conference), Cocoa/OS X developers would have a monopoly… after all, they already own the development hardware and software. Nobody seriously believed that Apple would invest in doing a separate iPhone SDK that would include a simulator, or even a compiler for one of the existing Windows IDEs, as Palm used to do when their products were still 68K-based (no idea what they do today).

Instead, so “real” Mac developers think, every newbie with a few weeks of JavaScript under their belt is now free to declare themselves an “iPhone developer”. It’s the same thing that happened with typographers when the original Mac 128K came out, and what will happen with animators when the final Leopard comes out – look for the equivalent of tags or impenetrable DVD menus in most of the new iPhone and Leopard apps. We’ll be pretty sick of moving GUI elements soon, and there’s no hope of standardizing web apps anyway. It’s the millennium of the amateurs… head for the hills!

Well, while I think some of it will be that bad – just as ransom-note typography was in the 80s, and garish pages assaulted us in the 90s (and still do, come to think of it) – it won’t be all that bad. Apple will have a new category for its Design Awards, and there will be some cool, well-designed apps. Let Darwin take care of the rest.

OK, here’s what I wrote a few days ago, regarding the Mac OS X transition to Intel:
Rainer Brockerhoff wrote:

…Most Cocoa developers that didn’t call Carbon frameworks to any great extent, or that didn’t have to deal with complex binary files, were able to recompile their apps into the Universal (“fat binary”) format in a few hours or days. In contrast, most developers of Carbon apps of any complexity faced months or years of conversion.

I invited comments on Apple’s carbon-dev mailing list, and some people objected to the paragraph quoted above. In particular, Apple engineer Eric Albert wrote:

I’d probably moved more Mac OS X code to Intel than anyone else before the announcement — some of the iApps and a bunch of other apps, plus a ton of work in the OS — and of the code I’d worked on, the Cocoa apps happened to take more work than the Carbon ones. That was really nothing more than coincidence because the Carbon apps I was working on dealt with structured data better than the Cocoa ones, but there’s nothing inherently more complex about the Intel transition for Carbon than there is for Cocoa. Mathematica, which is most assuredly a complex Carbon app, took four hours to get up and running on Intel, faster than any other app I’ve seen.

The primary reason for some Carbon apps taking a long time to move to Intel was that they weren’t using Mach-O or Xcode at the time the transition was announced and both were required for Intel support.

Carbon apps already using Mach-O and Xcode came over to Intel fairly easily.
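The Universal (“fat binary”) format at the center of all this is easy to recognize: a fat file begins with the magic value 0xCAFEBABE (a big-endian header listing one slice per architecture, e.g. PowerPC and Intel), while a thin 32-bit Mach-O begins with 0xFEEDFACE in its native byte order. A minimal sketch of a classifier, using the constants from Apple’s <mach-o/fat.h> and <mach-o/loader.h> (64-bit magics and error handling omitted):

```python
import struct

# Magic numbers from <mach-o/fat.h> and <mach-o/loader.h>.
FAT_MAGIC = 0xCAFEBABE   # universal (fat) binary; header is big-endian
MH_MAGIC = 0xFEEDFACE    # thin 32-bit Mach-O, native byte order
MH_CIGAM = 0xCEFAEDFE    # thin 32-bit Mach-O, byte-swapped

def classify_macho(header: bytes) -> str:
    """Classify the first 4 bytes of a file as universal, thin, or unknown."""
    if len(header) < 4:
        return "unknown"
    (magic,) = struct.unpack(">I", header[:4])  # fat headers are big-endian
    if magic == FAT_MAGIC:
        return "universal"
    if magic in (MH_MAGIC, MH_CIGAM):
        return "thin Mach-O"
    return "unknown"

# A fat header also records how many architecture slices follow:
fat_header = struct.pack(">II", FAT_MAGIC, 2)  # magic + nfat_arch (ppc + i386)
print(classify_macho(fat_header))  # universal
```

In practice `lipo -info some_binary` reports the same information; the hard part of the transition was never the container format, but getting the code building under Xcode as Mach-O in the first place.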

Well, that’s quite definite; I was wrong there. I was wondering why, though, and here’s what I found: the RDF is to blame… ;-)

Here’s what Steve Jobs said at the WWDC 2005 keynote, where I was physically present, and quite near the front too:

So, let’s take a look at this again: Widgets, Scripts and Java just work. Cocoa apps, literally a few days and your Cocoa app’s going to be running with an Intel version. Carbon apps, it’s going to be a few weeks, a few more tweaks – although there are exceptions to that, and we may be overstating it here, which we’ll see in a minute. And in Metrowerks, we don’t know; you’ve got to get to Xcode. So the key here is getting to Xcode.

And I distinctly remember the same point being made later in the reserved “state of the union” sessions: click a checkbox in Xcode, “boom” for Cocoa… not that easy with Carbon.

There it is then. I don’t have any Carbon apps myself, and didn’t have to migrate anything from Metrowerks CodeWarrior either, so I thought Carbon was to blame for stuff like the Adobe Photoshop delay. I’ll update my original post below; thanks to everybody who sent in comments.

Photos licensed by Creative Commons license. Unless otherwise noted, content © 2002-2026 by Rainer Brockerhoff.