Solipsism Gradient

Rainer Brockerhoff’s blog


When I began programming many years ago (now 55 and counting!), computing was in its infancy. We wrote programs on blue coding sheets, had them converted into decks of punchcards, and queued them on a shelf for “batch processing”. Usually the reward was a program listing with some obscure error messages like IEC107D, and I would mentally step through the program, repeating the process until being rewarded with a working “run”. I soon found employment at the local university, where more often than not I could run my program deck through myself and figure out things faster — and later on, in the wee hours, even use the huge mainframe computer as a primitive but enthralling personal device.

After some years the first video terminals came out, where you could view all lines of the program in glowing green rows, edit them directly, and enter the program into the batch queue. Was this the future? Well, there still was much to come. Very soon I realized I could buy one of those new Apple IIe computers along with a small TV and program/debug in the comfort of my home. And, miraculously, people actually paid me to sit at a computer keyboard and program.

And then the future arrived. When I saw the reports of the first Macintosh — a full graphics screen and a mouse — I knew that was it! The point-and-click user interface was the best. No more command lines. No more moving a clunky cursor around with the keyboard. It was heaven! And it only got better: color, larger screens, huge amounts of RAM and storage, networking! And then the internet came in, wonder of wonders.

It took years for the expected standard behaviors of mouse gestures and UI conventions to evolve; application programmers had a library of standard items to use, so that pretty soon everything worked as expected, and programs that flouted those conventions were downrated.

I was in the audience, in 2007, when Steve Jobs introed the first iPhone as a combination cellphone, iPod, and internet device. I wasn’t very impressed; I already had an iPod which I used little, had no plans of getting a cellphone (and indeed, held out until 2016 to buy one!), and the internet part was cool but the screen was small, the browser was limited and my laptop Mac had a much better feature set. OK, it wasn’t as portable but I always had it with me…

The real revolution here was the multitouch interface, an evolution of the point-and-click interface with your fingers standing in for the mouse. It took years to mature, and, as with the Mac, Apple offered standard UI items to developers, who could then concentrate on functionality instead of reinventing the wheel.

But then in 2010 the first iPad came out and I bought one right away. Now here was a portable computer good enough to use as a personal device, and later versions became more and more impressive. My iPad today is my main device for reading, listening to music and casual browsing, although I still fall back on the Mac for developing and for writing longer texts. With my recent eye troubles I tend to watch movies on my 77″ TV instead, though, and programming has been curtailed.

Now the future has arrived again. I have no doubts that Apple’s Vision Pro — and, of course, “spatial computing” — is the Next Big New Thing.

Oddly enough, one of my arguments here is the sheer volume of vitriol about the device that one can find on the social networks: it’s unwieldy, it’s expensive, the external battery sucks, it’s been done already by others, it’s isolating and dystopian, it’s one more dragon in Apple’s evil ecosystem… Apple is doooomed, I tell you! Sound familiar? (And I’m not even on most of those networks!)

Hey, all those negatives were also rolled out in the past — and before we had Apple’s evil ecosystem, we had IBM’s evil ecosystem, remember? Every time such a futuristic device is sighted, it is clunky, expensive, power-hungry, and so forth. But also, if conditions are just right… the next version appears just in time, and it’s lighter, easier to use, and we can’t live without it anymore.

The real futuristic paradigm is now look-and-click, evolved from the old point-and-click way, and many other gestures are possible; standardisation is no doubt in progress. Why hold a mouse, or a control, or lift your hands unnecessarily, when you can convey all with small, subtle gestures? And our brains are evolved for a 3D environment — all our language and thinking is constructed around 3D metaphors.
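
To make that concrete with a purely hypothetical sketch: if look-and-click ends up mapping onto the gesture APIs developers already know, selecting something may need no new code at all. Assuming SwiftUI carries over unchanged (an assumption on my part, since no SDK details are out as I write this), something like the view below would respond to a mouse click, a finger tap, or a look-and-pinch alike; the view and its selection behavior are my own invention, not anything Apple has shown.

```swift
import SwiftUI

// Hypothetical sketch: a view that reacts to a "click", however the system
// delivers it: mouse, trackpad, touch, or (presumably) look-and-pinch.
struct SelectableCard: View {
    @State private var isSelected = false          // local selection state

    var body: some View {
        Text(isSelected ? "Selected" : "Look (or point) and click to select")
            .padding()
            .background(isSelected ? Color.green.opacity(0.3) : Color.gray.opacity(0.2))
            .onTapGesture {                         // the standard SwiftUI gesture
                isSelected.toggle()                 // hypothetical selection action
            }
    }
}
```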

Now, finally, we have a minimum viable system to explore 3D user interfaces. Discussing whether the Vision Pro is AR, VR or XR is beside the point; those are just implementation details and will evolve along with the UI.

Details on the hardware are scarce as I write this, and some may even be uninteresting — this thing is a self-contained computer on your face and the only relevant spec will probably be how much SSD space is left for user data. Everything else will be just good enough to be effectively transparent. And that’s the major point about the Vision Pro hardware: it’s a 3D input device for your brain, the first with sufficient quality to be transparent. Who needs holograms?

But, people will ask, what is the use case for this thing for the normal user/gamer/TV watcher/whatever? My answer: we don’t really know! Apple has, of course, selected some classic cases for the previous tech: FaceTime, games, 3D photos/videos, widescreen movies, Excel spreadsheets hanging in space (ugh), etc.; they had to do it, to get people’s attention. But between now and whenever this comes on the market in early 2024, developers will burn the midnight oil to build compelling use cases, most of which nobody (not even Apple!) had thought of before.

And this is where the Vision turns Pro. I believe that, rather than just designating a higher-capacity device compared to a non-Pro version, the Vision’s Pro name indicates that, at least until the 2nd or even 3rd generation comes out, this is a device for professionals. This is not (yet!) for casual gamers, zoomers or moviewatchers. Here’s a partial list I and a few friends came up with in a few minutes:

  • Architects, engineers, designers (as usual)
  • Doctors, dentists, psychologists, therapists and researchers in general
  • Educators and grad/postgrad students
  • Technicians in high-tech fields like power generation and aircraft maintenance
  • Industrial applications are boundless
  • Astronomers, archaeologists, artists

And most of these can afford (or their companies can) the current $3499 price.

Come to think of it, I’ll probably buy two; it’ll still come in cheaper than paying for a huge 3D TV, multiple big screens/projectors and a couple of M2 Macs.

More as details emerge.

WWDC 2020 opens next June 22nd and all indications are that the highest-impact announcement will be the Mac’s migration from Intel to the ARM architecture.

While CPU architecture migrations are infrequent — they happen every decade or so — Apple has a good track record of pulling them off successfully.

The first major migration was the move from Motorola 68K to PowerPC chips around 1994, followed by moving from the Classic Mac System 9 to Mac OS X around 2000. Relevant here was that for some time Mac OS X ran older applications in the “Classic Environment”: a compatibility sandbox that emulated the APIs of System 9 and the instruction set of the 68K.

This worked reasonably well as PowerPC CPUs were several times faster than the old 68K ones. It also introduced the concept of “fat binaries”: the same application file contained code for both old and new environments.

A better historical precedent is the move from PowerPC to Intel processors in 2006. This was more traumatic for developers, as PowerPCs were “big-endian” and Intel CPUs were “little-endian”. This meant that, except for strings, values stored in memory or files, or transmitted over networks, had a different byte ordering. You could no longer assume the same source code would just work on both systems; you had to bracket your accesses with macros or function calls that did nothing on one platform and swapped bytes around on the other.
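
For readers who never had to deal with this, here’s a minimal sketch of the same idea in present-day Swift (the 2006 reality was C code sprinkled with swapping macros such as CFSwapInt32BigToHost); the file format and its length field are made up for illustration:

```swift
import Foundation

// Sketch: a (hypothetical) file format stores a 4-byte length field in
// big-endian order. Converting explicitly makes the same source correct on
// both big- and little-endian hosts: the conversion is a no-op on the former
// and a byte swap on the latter, which is exactly what those macros did.
func readLength(fromHeader data: Data) -> UInt32? {
    guard data.count >= 4 else { return nil }
    let raw = data.prefix(4).withUnsafeBytes { $0.loadUnaligned(as: UInt32.self) }
    return UInt32(bigEndian: raw)
}
```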

If you’re not an oldtimer like myself you probably never had to think about this — every Mac, iPhone, iPad, Apple Watch or Apple TV uses little-endian values, and I even had to dig into documentation to make sure of it. ARM CPUs can run in big-endian mode by setting a special bit at boot time, but this is not the default, and no Apple device uses that mode.

Now, this meant that in 2006 developers could not just migrate their apps to Intel by recompiling; we had to look through every line to check that it was endian-neutral and, where it wasn’t, apply those special macros. For people who had heavily CPU-specific optimized code — perhaps even in (shudder) assembly language — separate code sections were necessary.

Having done all this, you recompiled your app twice, once for PowerPC and once for Intel, and the magic of fat binaries allowed you to ship it all in one app. Later on, some apps even needed 3 or 4 different code sections, depending not only on endianness but also on whether they would run on a 32- or 64-bit CPU!

Another — today mostly forgotten — aspect was that Apple prepared for the Intel migration by gradually modernizing and building their developer toolchain in-house. LLVM, Clang, LLDB etc. allowed Apple to ensure that, for whatever CPU they wanted to support, compilers were ready beforehand and could be optimized continuously later on, without depending on outsiders.

Still, in 2006 Apple had to ship special hardware, “Developer Transition Kits”, to select developers for testing. For software that couldn’t be converted to the new architecture, Apple introduced a limited compatibility box: Rosetta. If I recall correctly, it did on-the-fly translation of PowerPC code into Intel code, which was then cached. Because of its limitations it didn’t work for many larger applications and was soon phased out.

In parallel with the PowerPC-to-Intel migration came a slower-motion shift in operating system APIs, most notably involving Carbon and Cocoa.

Carbon was a C-based API introduced in 2000 to ease migration from Classic System 9 to Mac OS X. Cocoa, introduced around the same time, was an Objective-C based API for modern object-oriented programming, itself an evolution of NeXT’s OpenStep system. Underneath both APIs, in the now well-known layer model, was Core Foundation, which could be used from both types of apps; and some apps (like my own) could mix calls to both APIs with some care.
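
That layering is still visible today. As a trivial illustration (in present-day Swift, though the original mixing was of course done from C and Objective-C), the toll-free bridging between the layers lets the same string be handled by either API:

```swift
import Foundation

// The same string value viewed through two APIs: Cocoa/Swift on one side,
// Core Foundation on the other, thanks to toll-free bridging.
let name = "Core Foundation" as CFString       // bridged from a Swift/NSString value
let length = CFStringGetLength(name)           // a plain C Core Foundation call
print("CFStringGetLength says: \(length)")     // 15
```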

Not too long after the Intel migration, Apple announced that 64-bit was the future, and that Carbon would not be migrated to that environment. This process was stretched over several years and involved redefining which APIs were really considered “Carbon”; some, like the File Manager, were “de-carbonized” and lived on until macOS 10.15 (Catalina) came out.

Cocoa, on the other hand, continues to be used everywhere in macOS. The Finder, the Dock, Xcode, and Safari are all Cocoa apps. Even when Swift came out a few years ago most of it was built on top of Cocoa and Objective-C objects; the notable exception is the Swift toolchain itself.

So, after all this, here we’re looking at Yet Another Hardware Migration for Macs. Let’s look at the implications.

Economically, it makes sense for Apple, as many others have already commented. They’ll no longer be bound to a foreign evolution roadmap on which they have little influence. They have extensive experience in producing high-performance, low-power CPUs for their mobile devices, and the latest versions already outperform Intel in some situations.

Technically, it’s a huge win. Switching to ARM64 — and not just the standard ARMv8.x architecture licensed from ARM, but with their own, extensive modifications — will allow them to have unified GPUs, Neural Engines, memory controllers and so forth across their entire line, with more uniform device drivers and low-level programming.

For 99% of developers, I think nothing will change. The new chips are also little-endian, so a simple recompile will have Xcode produce a fat binary for the new Macs, which should just run. Of course, if you have assembly language sections in your program and/or write kernel extensions/device drivers, it’s time to learn a new architecture…
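
For those rare architecture-dependent corners, conditional compilation keeps a single source building into both slices of the fat binary; a minimal sketch (my own example, not Apple’s migration guidance):

```swift
// Sketch: isolating the rare architecture-specific code path so the same
// source compiles into both slices of a universal ("fat") binary.
func currentSliceDescription() -> String {
    #if arch(arm64)
    return "arm64 slice (the new ARM Macs)"
    #elseif arch(x86_64)
    return "x86_64 slice (Intel Macs)"
    #else
    return "some other architecture"
    #endif
}
```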

Snags will come for people who dislike, or can’t use, Xcode. Some have to use Intel’s compilers, for instance; I know too little about such cases to have an informed opinion, sorry.

Some pundits seem to expect a sudden concurrent change in macOS; something like Objective-C and/or Cocoa being obsoleted in favor of Swift and SwiftUI. Or even the Mac going away entirely, some sort of huge desktop iPad taking its place. In my view this won’t happen. For one, what would developers or even most Apple engineers use for development?

A big question is: will Apple be able to provide an Intel compatibility box on the ARM Macs? Certainly Boot Camp will not be available. Running a virtualizer like VMware Fusion or Parallels seems almost as difficult, unless the new CPUs have some sort of hardware assist to decode x86-64 instructions. This may not be as outlandish as it sounds; current Intel/AMD processors already break x86 CISC instructions into RISC micro-operations which are then cached and executed by the “inner” CPU. This is a gross oversimplification but in theory nothing — except silicon space — bars Apple from breaking x86 instructions into ARM instructions.

A Rosetta-like box seems more feasible for running individual Intel applications, but who needs that? Game users? Performance will be limited. Most virtualizer app users want the complete OS running and with native speed. Linux/BSD might be available soon; perhaps Windows for ARM.

But what about Catalyst, some of you may ask? Here I can only shrug. In its present form it certainly is not an important future technology for macOS. While simple apps can be done with it — perhaps purely for the benefit of developers unfamiliar with AppKit — can you envision a Catalyst Finder? SwiftUI is still very new and primitive, and will continue to be layered on top of AppKit/UIKit for some time. They may merge in the future, or be renamed gradually like Carbon was, but that’s a long time out.

Finally, hardware. I don’t think the existing A13 SoCs would be suitable for any Mac. Some version of the Mac mini would be the obvious candidate to be the first to get the all-new CPU; it would then percolate up through the laptop line and the iMac. In these cases, reduced power usage would be a bonus — even for the iMac, it would mean a smaller power supply, less heat and a thinner enclosure.

The Mac Pro should be the last Intel redoubt. Multiple CPUs, OEM graphic cards, generic PCIe cards — Apple will have to address a huge range of problems there and this will take years.

Enough handwaving for now; the usual disclaimers apply and I’m really looking forward to the keynotes next week.

Update:
— corrected date for the 68K->PowerPC migration. Thanks to Chris Adamson for catching the error.
— fixed some awkward language about virtualization. Thanks to Maurício Sadicoff.

Over two weeks ago at WWDC, Apple announced something entirely unexpected: thousands of new APIs and a brand-new programming language, Swift. No hardware, of course; it’s a developers conference, remember?

Reactions varied all over the spectrum. Non-developers (especially “industry analysts”) mostly had no idea what it meant: they said Apple had announced “nothing”. Almost all developers, however, were ecstatic — “the most significant event Apple ever staged”. Regarding Swift, this initial enthusiasm diverged as soon as people read the (relatively sparse) documentation and actually began to play around with the language — a very early beta version was available for download soon after the announcement. Hilarity, chaos and pandemonium ensued; tension, apprehension and dissension had begun.

As usual, almost everybody tried to project their grievances, expectations and experiences onto the new language. The open-source advocates griped that no source was available. The cross-platform advocates complained that there was no version running/compiling for Android (as if Apple would have any interest in promoting that!). The Objective-C programmers unsuccessfully tried to translate their code into Swift and complained that there was only limited dynamic dispatching and introspection. The C programmers complained that there were no preprocessor macros and that Swift seemed to be “Objective-C without the C”. The Haskell/Erlang/Scala programmers complained that many functional programming facilities were missing, and that the language was “too mutable”. The Java programmers complained that the language was “too C++-like”, but resented the lack of exceptions. The C++ programmers also resented the lack of exceptions and wanted std::somethings. The Type Theorists complained that generics were “not generic enough”. JavaScript programmers… well, you get the idea. Almost everybody complained about Array mutability semantics, about missing semicolons and the parsing of whitespace, and (of course) said that the syntax “looked weird”. Serious fights erupted on Twitter, disagreeing on whether Swift was a “modern” language and what Apple’s intentions were.

And, as always happens, many people said, in effect, “OMG Apple you’re soo stooopid WTF fix this now!”. This is the usual symptom of looking at the surface and not understanding what might be happening underneath.

Voluminous disclaimer and sidenote with historical digressions:

Many of the complaints in the paragraphs above are condensations of what I understood people to be saying and none are meant to be actual live quotes — which is why I didn’t link to any specific instance. I’m not interested in discussing most of these personally right now, thank you.

I’ve been programming since 1969, in C since 1984, in Objective-C since 2000. I wrote only one application in C++ back in the Classic days — it was pretty much mandatory in the CodeWarrior/PowerPlant days. I did my CS degree in the early 1970’s, when a “modern” language still meant ALGOL 68 – see the mind-boggling official reference (large PDF).

When BYTE Magazine‘s special Smalltalk issue came out in 1981, I was very interested, but couldn’t come to grips with the weird syntax. I bought Adele Goldberg‘s classic books about Smalltalk — the blue book (large PDF), the orange book (large PDF) and the green book (large PDF) — and periodically tried to understand them; very difficult without access to a working compiler! In the late 80’s I put these aside (and, unfortunately, lost them in a move). After Apple acquired NeXT in 1996, I became aware of Objective-C’s roots in Smalltalk, but didn’t give it much thought.

Around 2000, restarting my work as an indie developer, I started programming in Objective-C and Cocoa. As an experienced C programmer I had little difficulty with Objective-C, and quickly got used to the nested [[ ]]s. I never wrote a full Carbon app as such. I also never managed to acquire a working Smalltalk compiler, even after a few became available on the Mac. However, a couple of years ago, I found the Smalltalk books in PDF format (as linked above) and was astounded: the formerly opaque things about methods, messages, dynamic dispatching, objects and so forth — suddenly all was clear and obvious! That’s the advantage of using-while-learning, at least for me.

Unlike many colleagues I never hesitated to go beyond Cocoa, always using CoreFoundation, BSD/Darwin and a variety of interfaces according to necessity, and once manual memory management became ingrained, tossing objects and buffers back and forth between the various APIs. Except for short utilities for my own use, I haven’t adopted ARC yet — I found too many edge cases for my established programming habits.

So, back to Swift. It really appears to be a very pragmatic language. If you look at the generated library header (in Xcode, Command-double-click on any Swift type to see it), nearly all operators and types are defined there, in often surprising detail. In other words, few language features are hard-wired into the parser/compiler; the Swift library/runtime and the pre-LLVM optimizer are, instead, responsible for the language and its implementation details, and therefore more easily twiddled if necessary.
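
A toy example of what that implies (mine, not from the Swift library, and in current syntax rather than the 2014 beta’s): the same machinery that declares + and == in that generated header is available to ordinary code, so operators are largely library artifacts rather than parser magic.

```swift
import Foundation   // for pow()

// Declaring a brand-new exponentiation operator in plain library code:
infix operator ** : MultiplicationPrecedence

func ** (base: Double, exponent: Double) -> Double {
    return pow(base, exponent)      // an ordinary function call underneath
}

let area = 2.0 ** 10.0              // 1024.0, parsed like any other infix operator
```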

This is, of course, very convenient for Apple: a small team could tinker around with all aspects of Swift while leveraging most of the existing LLVM infrastructure and keeping up with the latest changes in iOS and OS X. Indeed, in retrospect, it appears that Swift was even driving many of those changes!

Let’s look at a brief timeline to explain what I mean:

  • 2000-2002: Chris Lattner’s master’s thesis on LLVM;
  • 2005: Lattner hired by Apple; Apple uses LLVM for the OpenGL shading language in Mac OS X 10.5;
  • 2006-2008: Apple introduces experimental llvm-gcc in Xcode 3.1; “blocks” and GCD appear;
  • 2009: Apple introduces Clang as an alternative for gcc; OpenCL and Clang static analyzer appear;
  • 2010: Lattner begins working on Swift; Clang fully supports C++ and llvm-gcc is the default compiler;
  • 2011: gcc/gdb are discarded, Clang/lldb are defaults, ARC introduced in Xcode 4.2;
  • 2012-2013: iOS/OS X are fully built with the new infrastructure, Objective-C literals in Xcode 4.4;
  • 2013: Lattner becomes head of the developer tools department;
  • 2014: Swift comes out in Xcode 6.0.

The LLVM team (Lattner, Evan Cheng who is also at Apple, and Vikram Adve of UIUC) also received the 2012 ACM Software System Award, and of course, LLVM, Clang, LLDB are open-source projects being driven forward by many people who also deserve lots of credit.

Nevertheless, it’s tempting to see all this as Chris Lattner’s plan for world domination… just picture him stroking a white cat and going “mwahaha!” 🙂 [Update: Thanks to @darth for the illustration!]

But really, all this points to progress in Apple’s platforms being driven by a consistent plan to modernize and implement new technologies everywhere; even hardware was affected, as the Apple A6 CPU (and no doubt its successors) were designed in parallel with the corresponding LLVM code generator. Similarly, from 2009 forward, software advances like ARC, blocks, GCD, runtime modernizations etc. are now seen as preparing the ground for Swift at all levels.

Sidenote:

A few years ago I posted about Apple’s hardware options being enabled by LLVM, and with the A6 that has indeed begun to happen. Apple is now in a position to design its own CPUs; they need only write a new optimizer backend for each one, and can switch architectures in new hardware without users, or even developers, noticing any significant change.

When I began studying programming languages and compilers, UNCOL was the holy grail of programming: a universal intermediate language to adapt any high-level language/compiler to any machine architecture. LLVM is the first implementation of that.

What does all this mean for Swift? Contrary to what you may hear from some quarters, it’s not an amateurish, ham-fisted attempt at locking developers in to Apple’s “walled garden”. As Apple has said publicly, it’s a systems programming language that ties in to key Apple technologies. I don’t doubt that it’s already being deployed internally and we can expect to see key low-level frameworks — Security, dyld, IOKit are candidates which come to mind — rewritten in Swift as soon as feasible. In the long run, the kernel itself, Core Foundation and others may follow suit; picture “SwiftKit” unifying much of AppKit and UIKit. Making Swift available to developers at this beta stage is good policy but probably not Apple’s primary focus.

But, you may ask, why not use C++ or the hybrid Objective-C++? Why not use a “modern” cross-platform language? What was wrong with Objective-C anyway?

Well, there’s a reason so many low-level frameworks are written in C++ or pure C: runtime speed. Objective-C’s dynamic dispatching has vastly improved over the years but is still a bottleneck, and in 95% of cases it is not really necessary — we rarely use id, and strong typing is encouraged everywhere. As for pure C code, when you look at it, there are always tons of crufty #defines, tricks to avoid C’s legacy problems, spinlocks and stack arrays and overflow checks and… so it’s no wonder Apple decided to start anew with a new language that avoids all of those problems and still interoperates with Cocoa etc. — all while the infrastructure’s being changed underneath.
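
One tiny example of the cruft that simply disappears (my example, not Apple’s): in C, integer overflow silently wraps (or, for signed types, is outright undefined behavior) unless you hand-roll checks; in Swift the check is the default and wraparound has to be requested explicitly.

```swift
// Overflow is never silent: the commented-out line would trap at runtime
// instead of quietly wrapping around.
func add(_ x: UInt8, _ y: UInt8) -> UInt8 {
    // return x + y                 // traps on overflow
    return x &+ y                   // wrapping addition must be spelled out
}

let (sum, didOverflow) = UInt8(250).addingReportingOverflow(10)
print(sum, didOverflow)             // 4 true: the explicit, checked variant
```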

So, why not C++? Lattner is a C++ wizard, right? All of Clang/LLVM is coded in C++. So is WebKit, Apple’s other major open-source success. I can’t see that changing, and their effort to fully support all of C++’s experimental future features argues that it won’t change. But C++ doesn’t look like a good match for internal Apple technologies like GCD and ARC, and the C++ Standards Committee is certainly not interested in adopting those. On the other hand, judiciously adopting certain things like generics, operator overloading and optimized dispatching is certainly a good thing. And last but not least, Apple now owns/controls the entire toolchain and the systems programming language!
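
And here, roughly, is what that judicious adoption buys (again my own minimal example, in today’s syntax): a generic function that is type-checked and statically dispatched, with none of id’s indirection on one side and no template error avalanche on the other.

```swift
// A generic, statically dispatched utility: works for any Comparable type,
// fully checked at compile time, no casting or dynamic dispatch involved.
func clamp<T: Comparable>(_ value: T, to range: ClosedRange<T>) -> T {
    return min(max(value, range.lowerBound), range.upperBound)
}

let brightness = clamp(1.7, to: 0.0...1.0)   // 1.0
let retries    = clamp(42, to: 0...5)        // 5
```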

More later; I’ve started to write an entire application in Swift and after that may feel qualified to comment on language details. For now, I’m quite happy with the prospects.

If you missed it, here’s part 1.

Now, as I said, hardware details are becoming interesting only to developers – and even we don’t need to care overly about what CPU we’re developing for, now that we’re used to both 32-bit and 64-bit, big-endian and little-endian machines. (Game developers and players, of course, are a different demographic.)

As Steve Jobs said, it’s all about the software now. Here, too, focusing too much on feature details can be misleading. I don’t really care whether Apple copied the notification graphic from Android, or whether it was the other way around. What’s important is that user interfaces are evolving by cross-pollination from many sources, and this is particularly interesting regarding iOS and OS X (note that the “Mac” prefix seems to be on its way out).

The two operating systems have always had the same underpinnings in BSD Unix/Darwin and in several higher layers like Cocoa and many of the various Core managers. In their new versions, APIs from one are appearing in the other, and UI aspects are similarly being interchanged; compare, for instance, the Lion LaunchPad with the iOS SpringBoard (informally known to iOS users as “the app screen”).

Apple is not “converging” OS X and iOS just for convergence’s sake. Although desktops, laptops, tablets, phones and music players are all just “devices” now, the usage and form factor differences must be taken into account. Remember Apple’s 2×2 product matrix of some years ago: desktops and laptops, consumer and pro machines? It hasn’t shown up lately, and we really need a new one; the new matrix should probably be mobile vs. fixed, and keyboard vs. touchscreen.

Don’t be misled by appearances! Yes, the LaunchPad looks like SpringBoard, but that doesn’t mean that we’ll have touchscreen desktops soon – rather, both interfaces are, in fact, a consequence of the respective App Store, being an easy way to show downloaded apps to the lay user. Apple is, however, exploring gesture-based interfaces, and no doubt we’ll see the current gestures evolving into a universal set employed on all devices, the same way common keyboard shortcuts have become ingrained. A common thread here is that hardware advances like touchpads, denser and thinner screens, better batteries and faster connections are becoming the main innovation drivers, as processor speed and storage size used to be.

A subtle and very Apple-like aspect of this sort of convergence became visible when the iPad came out. While some scoffed that the iPad was “just a larger iPod Touch”, in fact the iPod Touch had been, all the time, just a baby, trial-size version of the iPad! The Touch, the iPhone, and even the older iPods were an admirable way of getting the public used to keyboardless interfaces, and the iTunes Store was a similar precursor to the App Store. This means that when the iPad came out there was a legion of users already trained in its concepts and interface; an excellent trick, and one that only Apple could pull off.

Now we see that, in a similar way, the iPad and its smaller siblings are preparing the general public to migrate to larger, more powerful devices which look comfortingly similar in many ways. Few consumers think of their iPhones or iPods as computers, even though they’re as capable as the supercomputers of 15 or 20 years ago. Now that desktops and laptops are just devices – and you won’t need a so-called computer anymore to set up your smaller devices – very soon this new class of “devices with keyboards” won’t be thought of as computers either, and the term will be used only for servers and mainframes, as it was in the old days.

I, for one, welcome our new post-PC overlords… 🙂

Whew. It’s over. This year’s WWDC was certainly one of the most intense – and one of the most promising – that I can remember.

It started on a somewhat sad note: Steve Jobs was clearly unwell, moving slowly and his voice was unusually weak. He was onstage for only a few minutes before turning things over to his VPs. Then, towards the end, when he talked about iCloud (which is obviously something he put a lot of energy into) he got better. Still, I have a feeling that this might have been Jobs’ last keynote; he’s clearly not going to be around for many more years. However, the general impression I had was that everybody feels that Apple is now synchronized enough with his mindset to go on indefinitely without him.

This WWDC was clearly about pointing out future directions. And it was all about software. Many commenters are pointing out this or that added feature in (say) Mail, or in the Finder, or in iOS 5. Others are bemoaning the lack of hardware announcements. Of course there have been the usual comments about Apple copying this or that from Windows. Still others are gleefully pointing out how the iPhone 5 (for instance) has been delayed – this when it hasn’t even been announced, but they apparently believe in each other’s rumors!

Let’s put the hardware issue away first. As I’ve been saying for a couple of years now, Apple’s heavy investment into Clang, LLVM and connected technologies like LLDB is now paying off. This trio will very soon be Apple’s main developer tools backend. They’ll be free from overweight, ancient, license-encumbered stuff like gcc and gdb and the results are very encouraging. Without going into details (NDA ahem), suffice it to say that fellow developers have seriously agreed with me that the new tools are better – this or that detail notwithstanding – than anything else on the mobile or desktop market today.

Apple is also now free to make hardware details irrelevant. When Apple switched to Intel six years ago, I wrote:

Winners:

  • Apple, of course. As I commented below, they’re free (or will be, in a year) of the CPU-architecture-as-a-religion meme. They get a literally cool CPU/chipset for their PowerBooks; although I suppose they won’t use that name in the future; how about iBook? ;-) They get dual-core CPUs right now, and a 64-bit version in the future.

And this is still true. At that time, too, some people saw Apple “imitating” the Wintel machines by adopting Intel CPUs as a negative thing (or even as a positive, depending on their bias). Now with Clang/LLVM becoming Apple’s mainstream tools, they could switch CPUs anytime without users noticing; the new Intel-based Macs were still normal Macs, and normal users didn’t care which architecture they ran on. And indeed, lately rumors have abounded about ARM-based MacBooks. But, as I wrote at WWDC 2005:

…it’s a new type of freedom. Freedom of architecture. IBM underperformed, they’re out; at least for now. Intel works better now, they’re in; at least for now. Next year, some other chip may be hot, Mac OS X will be on it, and recompiling will be even easier. We’re free!

For the normal user, hardware specs aren’t that important anymore beyond a certain threshold – if they’re sufficient for the job, the details are unimportant. As the old joke goes:

A lady stands in front of an enclosure in the London zoo and gestures towards one of the hippopotami, asking a passing zookeeper:

“Please, can you tell me if that hippopotamus is male or female?”

“I’m sorry to say that this information would be of interest only to another hippopotamus…”

As Jobs said in the keynote, now it’s all about “devices”. Desktops, laptops, iPads, iPhones – all are equal devices in the iCloud. Few people think of their iPad/iPhone as a computer; the innards are of interest only to those of us who have to develop software or hardware to (ahem) “mate” with those devices. Will the next iPhone or laptop use Apple’s A5 chip, or will there be an A6? My mom doesn’t care, and yours shouldn’t – unless she’s a fellow developer. Not even if she’s a stock analyst!

Next: software directions.

Re: WWDC2011


My flight to San Francisco and WWDC 2011 leaves tomorrow night and I should arrive early in the afternoon on June 2nd.

Friday and Saturday are mostly reserved for taking care of some private business, but if anybody wishes to meet with me before WWDC, please drop me an email (rainerbrockerhoff.net), Twitter direct message (I’m @rbrockerhoff) or AIM:rainerbrockerhoffmac.com.

Early Sunday I’ll switch hotels and soon after lunch I’ll be at Moscone to get my badge. There was supposed to be a meet-up of Brazilian developers in the early afternoon, but apparently not everybody will be able to show up; at any rate, I’ll hang out at registration for a couple of hours. Later on, and all week, things will be quite hectic and I’ve no idea yet which parties I’ll be able to attend.

After the conference I’ll have a couple of days for resting and reading, and I should be back home on June 15th.

Things are looking up development-wise. I’m well along implementing my ideas of transitioning most of my products to the Mac App Store. While this means that all the new stuff will only run on Mac OS X 10.6.6 and up, the old versions will continue to be available, though mostly unsupported. No time to post details yet, and some of those are bound to change depending on information gathered at WWDC – but I’m optimistic that everything will work out.

More as it happens!

WWDC2011


By a happy coincidence, I was notified in time that tickets for WWDC 2011 had become available, and booked one – they were sold out in less than 9 hours! Even though Brazil now has an online Apple Store, as well as Mac and iOS App Stores, developers still must send a (gasp!) Fax to Apple in California – no paying over the Internet, as we used to do before the stores came online. I’ve complained to Apple Developer Relations, hopefully this will be updated to current technology the next time around.

My air tickets should be confirmed in a few days; so far it looks like I’ll arrive in SFO on June 2nd in the afternoon. WWDC will start with the keynote on Monday morning, June 6th (my birthday!) and end on Friday afternoon, June 10th. I’ll leave town on the morning of June 14th. As usual, the actual conference days and nights will be extremely busy, but I’ll probably have time for a meeting on other days.

On Sunday, June 5th, I’ll be getting my badge at Moscone between 14:00 and 15:00 (that’s 2 to 3 PM local time ;-)) and there’ll be a meet-up for Brazilian developers afterwards. Keep an eye on the blog for any changes and updates.

I’ve also found an interesting place to practice some table tennis: the San Francisco TTC at 953 DeHaro Street. Should be fun… I’ll take my trusty old racket and the new one I got courtesy of the nice folks at CocoaHeads Beijing two years ago.

Lion is coming


With the new apartment mostly ready (though the external/common areas are still unfinished), a return to blogging and coding may now be possible.

While I’ve pondered the general direction in which I wish to take my software, details are still a little unclear. Yes, the Mac App Store does figure in my plans for updating/replacing XRay, but System Preferences panes are not accepted – so Quay and Klicko will continue being distributed over this website.

The developer preview release of Mac OS X 10.7 “Lion” was a surprise to me. My expectation was that it would be released at (or just after) next June’s WWDC, which I hadn’t planned to attend. I’ve just installed the Lion preview and had a quick look; it’s farther along than such previews usually are, and there are sufficient new UI details and API changes that I decided to study those first, before committing to design details of my own software.

A surprising amount of detail about Lion has already been published, NDAs notwithstanding, with only token sallies from Apple Legal. I’ve been somewhat out of touch with the developer community, so I can only speculate. Reducing the price of developer access to $99 – 20% of what it cost the last time around – may be a factor.

One aspect which will impact me immediately is that PowerPC support via Rosetta will no longer be available. There are some PowerPC apps I still use a lot: among them are Resorcerer, DMG Maker, Plain Clip, my own XRay, and – the one that’s open all the time – Eudora; as well as a bunch of utilities and games that I open very rarely. I suppose I’ll have to relegate them to my old Mac mini Core Solo, which – being a 32-bit machine – will also not be supported by Lion.

The exception is Eudora; I’ve used it since 1.0b5 or thereabouts. I suppose I’ll finally have to try out the Eudora OSE version; some fellow oldsters tell me it’s not too bad. None of the other email clients seem attractive, especially Apple’s Mail, which I actually tried out last year and didn’t like.
