Solipsism Gradient

Rainer Brockerhoff’s blog

Browsing Posts in Hardware

When I began programming many years ago (now 55 and counting!), computing was in its infancy. We wrote programs on blue coding sheets, had them converted into decks of punchcards, and queued them on a shelf for “batch processing”. Usually the reward was a program listing with some obscure error messages like IEC107D, and I would mentally step through the program, repeating the process until being rewarded with a working “run”. I soon found employment at the local university, where more often than not I could run my program deck through myself and figure out things faster — and later on, in the wee hours, even use the huge mainframe computer as a primitive but enthralling personal device.

After some years the first video terminals came out, where you could view all lines of the program in glowing green rows, edit them directly, and enter the program into the batch queue. Was this the future? Well, there still was much to come. Very soon I realized I could buy one of those new Apple IIe computers along with a small TV and program/debug in the comfort of my home. And, miraculously, people actually paid me to sit at a computer keyboard and program.

And then the future arrived. When I saw the reports of the first Macintosh — a full graphics screen and a mouse — I knew that was it! The point-and-click user interface was the best. No more command lines. No more moving a clunky cursor around with the keyboard. It was heaven! And it only got better: color, larger screens, huge amounts of RAM and storage, networking! And then the internet came in, wonder of wonders.

It took years to evolve the standard behaviors of mouse gestures and UI conventions; application programmers had a library of standard UI items to use, so that pretty soon everything worked as expected, and programs that flouted those conventions were downrated.

I was in the audience, in 2007, when Steve Jobs introed the first iPhone as a combination cellphone, iPod, and internet device. I wasn’t very impressed; I already had an iPod which I used little, had no plans of getting a cellphone (and indeed, held out until 2016 to buy one!), and the internet part was cool but the screen was small, the browser was limited and my laptop Mac had a much better feature set. OK, it wasn’t as portable but I always had it with me…

The real revolution here was the multitouch interface, an evolution of point-and-click with your fingers standing in for the mouse. It took years to mature, and as with the Mac, Apple offered standard UI items to developers, who could then concentrate on functionality instead of reinventing the wheel.

But then in 2010 the first iPad came out and I bought one right away. Now here was a portable computer good enough to use as a personal device, and later versions became more and more impressive. My iPad today is my main device for reading, listening to music and casual browsing, although I still fall back on the Mac for developing and writing longer texts. With my recent eye troubles I tend to fall back on my 77″ TV for watching movies, though, and programming has been curtailed.

Now the future has arrived again. I have no doubts that Apple’s Vision Pro — and, of course, “spatial computing” — is the Next Big New Thing.

Oddly enough, one of my arguments here is the sheer volume of vitriol about the device that one can find on the social networks: it’s unwieldy, it’s expensive, the external battery sucks, it’s been done already by others, it’s isolating and dystopian, it’s one more dragon in Apple’s evil ecosystem… Apple is doooomed, I tell you! Sound familiar? (And I’m not even on most of those networks!)

Hey, all those negatives were also rolled out in the past — and before we had Apple’s evil ecosystem, we had IBM’s evil ecosystem, remember? Every time such a futuristic device is sighted, it is clunky, expensive, power-hungry, and so forth. But also, if conditions are just right… the next version appears just in time, and it’s lighter, easier to use, and we can’t live without it anymore.

The really futuristic paradigm is look-and-click, evolved from the old point-and-click model; many other gestures are possible, and standardization is no doubt in progress. Why hold a mouse, or a controller, or lift your hands unnecessarily, when you can convey everything with small, subtle gestures? And our brains evolved for a 3D environment — all our language and thinking is constructed around 3D metaphors.

Now, finally, we have a minimum viable system for exploring 3D user interfaces. Discussing whether the Vision Pro is AR, VR or XR is beside the point; those are just implementation details and will evolve along with the UI.

Details on the hardware are scarce as I write this, and some may even be uninteresting — this thing is a self-contained computer on your face and the only relevant spec will probably be how much SSD space is left for user data. Everything else will be just good enough to be effectively transparent. And that’s the major point about the Vision Pro hardware: it’s a 3D input device for your brain, the first with sufficient quality to be transparent. Who needs holograms?

But, people will ask, what is the use case for this thing for the normal user/gamer/TV watcher/whatever? My answer: we don’t really know! Apple has, of course, demonstrated some classic use cases carried over from previous tech: FaceTime, games, 3D photos/videos, widescreen movies, Excel spreadsheets hanging in space (ugh), etc.; they had to, to get people’s attention. But between now and whenever this comes to market in early 2024, developers will burn the midnight oil building compelling use cases, most of which nobody (not even Apple!) has thought of yet.

And this is where the Vision turns Pro. I believe that, rather than just designating a higher-capacity device compared to a non-Pro version, the Vision’s Pro name indicates that, at least until the 2nd or even 3rd generation comes out, this is a device for professionals. This is not (yet!) for casual gamers, zoomers or movie watchers. Here’s a partial list a few friends and I came up with in a few minutes:

  • Architects, engineers, designers (as usual)
  • Doctors, dentists, psychologists, therapists and researchers in general
  • Educators and grad/postgrad students
  • Technicians in high-tech fields like power generation and aircraft maintenance
  • Industrial applications are boundless
  • Astronomers, archaeologists, artists

And most of these can afford (or their companies can) the current $3499 price.

Come to think of it, I’ll probably buy two; it’ll still come in cheaper than paying for a huge 3D TV, multiple big screens/projectors and a couple of M2 Macs.

More as details emerge.

Hiatus: still there


Two years have passed since my last post on the subject, and the few readers left may want to know what happened since then with my eye troubles.

There have been some relatively minor complaints. The Diamox pills I was using to keep eye pressure down turned out to have noxious long-term side effects and I had to discontinue them gradually. Some side effects (dizziness, weakened voice and reduced coordination) I had attributed to lack of physical exercise and old age, but they’re gone now; it’s a relief.

Fiddling around with eyedrops to find the least irritating ones is an ongoing process, complicated by two bouts with small corneal injuries — probably caused by forgetting to blink while using the computer.

The remaining problem is a macular edema that appeared over a year ago, causing blurred vision. After it proved resistant to eyedrops, a subtenonian triamcinolone injection relieved the condition for several months, but it seems to be coming back. It’s still undecided whether repeating the injection will be worthwhile.

In the meantime, the eye is still drier than usual, painful towards the end of the day, and the dilated pupil makes me avoid the sunlight; I’m still getting used to these — apparently permanent — conditions.

On the development front, I’ve decided to start writing software again in the near future; maybe not for publication, but purely as brain exercise. None of my old apps seem feasible on recent macOS versions; Apple has tightened restrictions on what third-party apps can do.

Also, I see that most of my Objective-C experience is now obsolete and Swift has progressed beyond my feeble CS skills. But it still might be fun to try to catch up! To that end, I’ve purchased a reasonably configured M1 Mac mini running macOS Monterey and have slowly been migrating my usual apps to it; tricky, as I was still depending on some 32-bit utilities, so my trusty “Late 2014” iMac will stay in parallel use for several months.

Things are more positive on the general health front. We’re both fully vaccinated with a booster shot, and have been able to start regular gym sessions again, with encouraging results. This has also been producing excellent results in our table tennis training, and we have even been twice to special training sessions in nearby Varginha. This weekend I’ll play the 10th TMB Challenge Plus tournament and have good chances to add a medal to my collection.

WWDC 2020 opens next June 22nd and all indications are that the highest-impact announcement will be the Mac’s migration from Intel to the ARM architecture.

While CPU architecture migrations are infrequent — they happen every decade or so — Apple has a good track record of pulling them off successfully.

The first major migration was the move from Motorola 68K to PowerPC chips around 1994, followed by moving from the Classic Mac System 9 to Mac OS X around 2000. Relevant here was that for some time Mac OS X ran older applications in the “Classic Environment”: a compatibility sandbox that emulated the APIs of System 9 and the instruction set of the 68K.

This worked reasonably well, as PowerPC CPUs were several times faster than the old 68K ones. It also introduced the concept of “fat binaries”: the same application file contained code for both old and new environments.

A better historical precedent is the move from PowerPC to Intel processors in 2006. This was more traumatic for developers, as PowerPCs were “big-endian” and Intel CPUs are “little-endian”: except for strings, values stored in memory or files, or transmitted over networks, end up with a different byte ordering on the two architectures. You could no longer assume the same program source code would just work on both systems; you had to bracket your accesses with macros or function calls that did nothing on one platform but swapped bytes around on the other.

If you’re not an old-timer like myself you probably never had to think about this — every Mac, iPhone, iPad, Apple Watch and Apple TV uses little-endian values, and I even had to dig into the documentation to make sure of it. ARM CPUs can run in big-endian mode by setting a special bit at boot time, but this is not the default, and no Apple device uses that mode.

Now, this meant that in 2006 developers could not simply migrate their apps to Intel by recompiling; we had to look through every line to check that it was endian-neutral and, where it wasn’t, wrap it in those special macros. For people who had code heavily optimized for a specific CPU — perhaps even in (shudder) assembly language — separate code sections were necessary.
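For readers who never had to write that kind of code, here’s a minimal sketch, in today’s Swift, of what handling a big-endian value looks like; the C code of that era did the same job with swap macros such as CFSwapInt32BigToHost.

    import Foundation

    // A 32-bit value as the host CPU holds it:
    let value: UInt32 = 0x0102_0304

    // Writing it out for a big-endian (PowerPC-era) file format:
    // .bigEndian swaps the bytes on little-endian CPUs and is a no-op on big-endian ones.
    let onDisk = value.bigEndian

    // Reading it back, on any architecture, recovers the original value:
    let readBack = UInt32(bigEndian: onDisk)
    assert(readBack == value)

    print(String(format: "host order: %08x  big-endian order: %08x", value, onDisk))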

Having done all this, you recompiled your app twice, once for PowerPC and once for Intel, and the magic of fat binaries allowed you to ship it all in one app. Later on, some apps even needed 3 or 4 different code sections, depending not only on endianness but also on whether they would run on a 32- or 64-bit CPU!

Another — today mostly forgotten — aspect was that Apple prepared for the Intel migration by gradually modernizing and building their developer toolchain in-house. LLVM, Clang, LLDB etc. allowed Apple to ensure that, for whatever CPU they wanted to support, compilers were ready beforehand and could be optimized continuously later on, without depending on outsiders.

Still, in 2006 Apple had to ship special hardware, “Developer Transition Kits”, to select developers for testing. For software that couldn’t be converted to the new architecture, Apple introduced a limited compatibility box: Rosetta. If I recall correctly, it did on-the-fly translation of PowerPC code into Intel code, which was then cached. Because of its limitations it didn’t work for many larger applications and was soon phased out.

Moving in parallel with the PowerPC-to-Intel migration was a slower-motion shift in operating system APIs, most notably involving Carbon and Cocoa.

Carbon was a C-based API introduced in 2000 to ease migration from Classic System 9 to Mac OS X. Cocoa, introduced around the same time, was an Objective-C based API for modern object-oriented programming, itself an evolution of NeXT’s OpenStep system. Underneath both APIs, in the now well-known layer model, was Core Foundation, which could be used from both types of apps; and some apps (like my own) could mix calls to both APIs with some care.
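Core Foundation is, in fact, still that shared substrate today. As a small illustration in current Swift (Carbon itself being long gone), the “toll-free bridging” that made mixing the layers practical still works:

    import Foundation

    // Core Foundation types and their Cocoa counterparts are "toll-free bridged":
    // the very same object can be handed to either API, without copying.
    let cocoaString: NSString = "MacWrite"
    let cfString = cocoaString as CFString      // the Cocoa object, seen as a CF type
    let length = CFStringGetLength(cfString)    // a plain C Core Foundation call
    let swiftString = cfString as String        // and back up into the Swift/Cocoa world

    print(length, swiftString)                  // prints: 8 MacWrite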

Not too long after the Intel migration, Apple announced that 64-bit was the future and that Carbon would not be migrated to that environment. This process stretched over several years and involved redefining which APIs were really considered “Carbon”; some, like the File Manager, were “de-carbonized” and lived on until macOS 10.15 (Catalina) came out.

Cocoa, on the other hand, continues to be used everywhere in macOS. The Finder, the Dock, Xcode, and Safari are all Cocoa apps. Even when Swift came out a few years ago most of it was built on top of Cocoa and Objective-C objects; the notable exception is the Swift toolchain itself.

So, after all this, here we’re looking at Yet Another Hardware Migration for Macs. Let’s look at the implications.

Economically, it makes sense for Apple, as many others have already commented. They’ll no longer be bound to a foreign evolution roadmap on which they have little influence. They have extensive experience in producing high-performance, low-power CPUs for their mobile devices, and the latest versions already outperform Intel in some situations.

Technically, it’s a huge win. Switching to ARM64 — and not just the standard ARMv8.x architecture licensed from ARM, but with their own extensive modifications — will allow them to have unified GPUs, Neural Engines, memory controllers and so forth across their entire line, with more uniform device drivers and low-level programming.

For 99% of developers, I think nothing will change. The new chips are also little-endian, so a simple recompile will have Xcode produce a fat binary for the new Macs, which should run as-is. Of course, if you have assembly language sections in your program and/or write kernel extensions/device drivers, it’s time to learn a new architecture…
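For the rare code that does care which CPU it’s on, Swift (like C before it) lets you gate those sections at compile time; a tiny sketch, with a made-up function just for illustration:

    // Each architecture-specific branch is compiled only into the matching
    // slice of the fat (universal) binary; everything else is shared source.
    func currentSlice() -> String {
        #if arch(arm64)
        return "arm64"        // the Apple Silicon slice
        #elseif arch(x86_64)
        return "x86_64"       // the Intel slice
        #else
        return "something else entirely"
        #endif
    }

    print("Running the \(currentSlice()) slice")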

Snags will come for people who dislike, or can’t use, Xcode. Some have to use Intel’s compilers, for instance; I know too little about such cases to have an informed opinion, sorry.

Some pundits seem to expect a sudden concurrent change in macOS; something like Objective-C and/or Cocoa being obsoleted in favor of Swift and SwiftUI. Or even the Mac going away entirely, some sort of huge desktop iPad taking its place. In my view this won’t happen. For one, what would developers or even most Apple engineers use for development?

A big question is: will Apple be able to provide an Intel compatibility box on the ARM Macs? Certainly Boot Camp will not be available. Running a virtualizer like VMware Fusion or Parallels seems almost as difficult, unless the new CPUs have some sort of hardware assist to decode x86-64 instructions. This may not be as outlandish as it sounds; current Intel/AMD processors already break x86 CISC instructions into RISC micro-operations which are then cached and executed by the “inner” CPU. This is a gross oversimplification but in theory nothing — except silicon space — bars Apple from breaking x86 instructions into ARM instructions.

A Rosetta-like box seems more feasible for running individual Intel applications, but who needs that? Game users? Performance will be limited. Most virtualizer app users want the complete OS running and with native speed. Linux/BSD might be available soon; perhaps Windows for ARM.

But what about Catalyst, some of you may ask? Here I can only shrug. In its present form it certainly is not an important future technology for macOS. While simple apps can be done with it — perhaps purely for the benefit of developers unfamiliar with AppKit — can you envision a Catalyst Finder? SwiftUI is still very new and primitive, and will continue to be layered on top of AppKit/UIKit for some time. They may merge in the future, or be renamed gradually like Carbon was, but that’s a long time out.

Finally, hardware. I don’t think the existing A13 SoCs would be suitable for any Mac. Some version of the Mac mini would be the obvious candidate to be the first to get the all-new CPU. It would then percolate up through the laptop line and the iMac. In these cases, reduced power usage would be a bonus — even for the iMac, it would mean a smaller power supply, less heat and a thinner enclosure.

The Mac Pro should be the last Intel redoubt. Multiple CPUs, OEM graphic cards, generic PCIe cards — Apple will have to address a huge range of problems there and this will take years.

Enough handwaving for now; the usual disclaimers apply and I’m really looking forward to the keynotes next week.

Update:
— corrected date for the 68K->PowerPC migration. Thanks to Chris Adamson for catching the error.
— fixed some awkward language about virtualization. Thanks to Maurício Sadicoff.

Boom: the Return


A few years ago I wrote a series of posts about Apple’s then-new Lightning connector for iOS devices:

No doubt you’re noticing a trend there… 🙂

Anyway, the recently-released iPad Pro seems to have the much-awaited USB3 capability on its Lightning connector. It does ship with a Lightning-to-USB2 cable, though, and USB3 capability isn’t mentioned in the tech specs.

The main objection to this actually happening is that Lightning, with only 8 pins, doesn’t have enough of them to support the standard USB 3 specification. This is, again, the old assumption that Lightning cables are “just… wires leading from one end to the other”.

To restate what I posted previously, if you actually look at the USB3 pinout, there are the two differential pairs which Lightning already has, and one additional pair for USB2 compatibility. So a legacy wire-to-wire USB3 cable would need 9 pins — but, remember, Lightning connectors don’t work that way!
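For reference, the published USB 3.0 Standard-A pinout (counted out in the small sketch below, with the pin names as they appear in the spec) is where that figure of 9 comes from:

    // The nine pins of a USB 3.0 Standard-A connector, per the published pinout.
    let usb3StandardAPins = [
        "VBUS",                          // power
        "D-", "D+",                      // the legacy USB 2.0 differential pair
        "GND",                           // ground for power return
        "StdA_SSRX-", "StdA_SSRX+",      // SuperSpeed receive pair
        "GND_DRAIN",                     // ground for the SuperSpeed signals
        "StdA_SSTX-", "StdA_SSTX+",      // SuperSpeed transmit pair
    ]
    print(usb3StandardAPins.count)       // 9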

In other words, if you plug in an old Lightning-to-USB2 cable into an iOS device, the cable itself already has to convert the two differential pairs to USB2’s single pair. So, no need to have the extra legacy pair on the Lightning connector itself — a future Lightning-to-USB3 cable will generate that as well, and use the two high-speed pairs when plugged into a USB3 peripheral. The current pinout is, therefore, quite sufficient.

Watch Update


Tomorrow (we suppose) the Apple Watch will be out. For months, there’s been lots of interesting documentation on Apple’s site — but it’s all about WatchKit, the framework used on the iPhone side to run “Watch” apps. Almost nothing about the Watch itself. I think most of my previous speculations were confirmed: specifically, the part about the Watch mostly being a remote display for the iPhone:

Perhaps… just a sequence of drawing orders? The important part is that there’ll be a single process on the Watch for doing the UI, and all the application-specific parts can be offloaded to the iPhone.

So, for now, the application logic will all be on the iPhone side — where the actual WatchKit part runs — and “assets”, meaning storyboards, xib files, and PNGs with pre-rendered icons, buttons and so forth, are downloaded to the Watch and displayed as needed. My back-of-the-napkin calculations about battery life (around 15 hours) still seem valid: Tim Cook said that you’d have to charge the Watch every night. I also said:

Watch OS … will not be a stripped-down iOS; maybe even not a Darwin derivative. It will be a highly optimized embedded system that runs as few processes as possible. It will be very robust because it will be able to do only a fixed set of functions.

Of course, this clashes with everybody else’s assumption that of course the Watch will be running iOS. Apple continues to be very careful about this: the OS that actually runs on the Watch is named nowhere that I could find. Likewise, no hardware specs beyond the two screens’ pixel sizes were revealed. Details about the OS may not be revealed until next year, when developer apps supposedly may run on the device itself. It might make sense for Apple to repurpose, say, the OS running on the smaller no-app iPods.

Beyond speculations about functionality, rumors have concentrated on price and updatability. I’m not competent to speculate about prices, but John Gruber’s final thoughts on the issue seem very reasonable.

Opinions are split on updatability, since few of Apple’s products can be upgraded, and none can have their hardware updated to a next generation. Then again, here’s a completely new type of product, smaller and (in some versions) more expensive than any other; it’s also, perhaps, the most personal Apple product ever. If you get an expensive Watch, say, as a graduation present — with an engraving, perhaps — you’ll be very reluctant to dispose of it and get a new one in a few years, even if the new version does much more.

At absolute minimum, the battery will have to be replaceable, and in my opinion, the entire Watch module (probably including the battery, probably excluding the display) will be upgradeable for a fee once a better version comes out — maybe not forever, but for at least 2 or 3 generations. We’ll see.

Apple’s (pre-)announcement of the Apple Watch left the tech world in the usual disarray. Is it an expensive knock-off of Android watches (people tell me there is such a thing!)? Is it an attack on the high-end Swiss watch market? Is it an attack on the low-end Japanese watch market? Is it an even more transparent lock-in attempt on soi-disant “Apple fanbois”? I’d answer “no” to all those questions, but right now I’m more interested in the hardware and software technology of the watch.

Notice that the above link doesn’t mention iOS anywhere, but this other link has the magic word: WatchKit. Quote: “WatchKit Apps. Soon your favorite apps will feature controls and interactions unique to Apple Watch, enabling you to enjoy them in dynamic new ways.”

Speculations about WatchKit since then usually have mentioned one or two assumptions:

  1. WatchKit will be written in/accessible only from Swift;
  2. WatchKit apps will run under iOS on the Apple Watch.

The first is, of course, wishful thinking from developers investing in the new Swift language. The second is, in my opinion, completely unwarranted and I’ll try to explain why.

This post is the most plausible so far: “WatchKit apps will ship as embedded binaries in iPhone apps, using the same basic principals [sic] as iOS 8 extensions. There will be some mechanism for the watch paired to an iPhone to detect and automatically install these ‘apps’ based on what is available on the paired iPhone. Delete the container app from the iPhone, it disappears from the watch. Xcode will have a template to add a WatchKit app to an iPhone app project.”

Let’s back off WatchKit for a second and look at what we’ve seen of the hardware. The entire main board is shrunk down to a single unit: the S1. If you stop the middle introduction film at 4:46, you’ll see that it’s really a collection of chips and SMT components on an encapsulated multilayer board — not really a “single chip” as the narration says, but many large CPU “chips” nowadays are like that, too. Other than the S1, there’s of course the “Taptic Engine” assembly which does the wrist tapping, the crown sensor assembly, antennas and display, and the most important part: the battery.

Battery life is the make-or-break feature of the Apple Watch. iFixit’s disassembly of the Moto 360 watch shows why: there’s a square peg battery inside a round casing, rated at 320 mAh. Even though Motorola apparently builds their own batteries, they don’t have enough volume to do a round one. Apple doesn’t have a volume problem and their casing is square, so they’re free to use all the remaining volume for a longer-lasting battery.

The 320 mAh rating and the Moto 360’s typical battery life of 12 hours mean that the watch consumes, on average, just under 27 mA. But they run Android on the watch, using an off-the-shelf TI ARM processor with attached RAM, flash memory and so forth, so that figure is not surprising. In other words, it’s a stripped-down cellphone/MP3 player.

Suppose that Apple did its usual optimization of battery size, usage, etc., in a stripped-down iPod nano. It’s half the size of the nano, which has a 30-hour life, so we can assume half the battery, meaning 15 hours. OK, that would be marginally acceptable, perhaps.
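Spelling out that back-of-the-napkin arithmetic (the same rough numbers as above, nothing more authoritative than that):

    import Foundation

    // Moto 360: published capacity and typical battery life.
    let moto360CapacitymAh = 320.0
    let moto360LifeHours = 12.0
    let moto360AverageDrawmA = moto360CapacitymAh / moto360LifeHours   // ≈ 26.7 mA

    // Guess for the Apple Watch: roughly half an iPod nano's battery,
    // similar average draw, therefore roughly half the nano's 30-hour life.
    let nanoLifeHours = 30.0
    let watchEstimateHours = nanoLifeHours / 2.0                       // ≈ 15 hours

    print(String(format: "Moto 360 average draw: %.1f mA", moto360AverageDrawmA))
    print(String(format: "Apple Watch estimate: %.0f hours", watchEstimateHours))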

But remember, the Apple Watch needs an iPhone nearby. In fact, many of the published functions, such as Siri, cellphone call response, GPS and so forth certainly use the iPhone’s hardware and software for that. Remember that one of the culprits of excessive battery usage is generic apps and processes running on the device. Remember that Apple, since the first iPod in 2001, has been very aggressive in optimizing their embedded systems. Remember that the first iPods and iPhones didn’t have any generic apps running on them, either. Remember that Apple already has technologies like Clang, OpenCL and Metal…

All that said, why run iOS and generic applications on the Watch at all? So here’s what I think likely about the real implementation.

  • Watch OS (or whatever it’s called — did they explicitly call it anything?) will not be a stripped-down iOS; maybe even not a Darwin derivative. It will be a highly optimized embedded system that has a few apps running in as few processes as possible. It will be very robust because it will be able to do only a fixed set of functions.
  • In other words, it will run only those things that may run while the paired iPhone is not available; we don’t know yet, but that might be just the timekeeping and pulse measuring apps. If the iPhone is there, the Watch will also work as a specialized I/O and display device for the apps installed there.
  • WatchKit will run on the paired iPhone inside a special server process; a matching iOS app will show installed Watch apps — probably those apps will be from the normal App Store, since they usually will have an iOS counterpart.
  • So, an installed Watch app will have at least some sort of preference app or pane on the iPhone; no use typing in passwords and such on the Watch, right? The part written in/for WatchKit will contain a server plugin that does the heavy lifting, data collating and communicating with the outside world, but it will also contain the application logic itself, commanding the Watch to do or display certain things.
  • I don’t mean to imply that the Watch will run a full WebKit client and the iPhone a web server; that might be overkill. Perhaps a useful subset of that, perhaps some variation of Display PostScript, some interpreted command language, or just a sequence of drawing orders? The important part is that there’ll be a single process on the Watch doing the UI, and all the application-specific parts can be offloaded to the iPhone.

One consequence is that you can forget the idea of “jailbreaking” the Watch to connect to a non-iPhone, of course. Another one is that battery life might be at least a day, maybe even two or more. Nothing on Apple’s site so far contradicts any of my reasoning.

So, will WatchKit be accessible from Swift apps? Certainly. Will it itself be written in Swift? I doubt it for now. Maybe in iOS 9 some of the frameworks in iOS (and OS X) will have been rewritten, assuming that by then the Swift optimizer will be good enough. But that won’t be the case in a few months.

Possible but unlikely: WatchKit may have an API to download actual application code to the S1, which may (or may not) have an ARM-like architecture. Only in such a case — and since there will be no Cocoa/iOS frameworks on the Watch — would I expect the downloaded code to be in Swift (without optionals!), for extra safety; can’t have the Watch crashing and rebooting, right?

Update: Marcel Weiher kindly reminded me of CarPlay, which apparently works like that; nobody would say that cars are running iOS. On the other hand, in that case, the device is connected over USB (that is, reasonable bandwidth) and the car doesn’t have any battery life problems.

Comments welcome.

[continued from part III]

So, here I was back in Brazil with my brand-new Mac 128. Of course, the first thing I did was to disassemble it — a tradition I kept up for almost three decades, until Apple’s increasing use of glue and special tooling began to make it too risky for some Macs (especially laptops and the latest iMacs).

The hardware team at Quartzil was as interested in the machine as I was, and we learned a lot from it. Remember that this was for our upcoming QI900 8-bit microcomputer. At that time (mid-1984) PAL chips, injection molding and four-layer boards were new and too expensive for all but very large-scale production runs, and we had to postpone adopting all of those. Similarly, when we looked at the Mac’s video circuits, we found that it used a horizontal flyback transformer that worked at higher frequencies than any commercially available in Brazil. That, and the fact that (because of the lack of PALs) we had to fall back on the MC6845 video controller chip, meant that we had to stay close to the 24×80 character display standard; the final display resolution was 27×90, with the first two lines reserved for a menu.

The menus were opened by the corresponding function keys, with shortcuts accessible via the special “QI” key. Notice the special “Edit” menu with “Undo”, “Copy”, “Paste” and “Delete” equivalents — sound familiar? 🙂

My Mac was used extensively for the QI900 design. All of the documentation was done in MacWrite/MacPaint (later, in WriteNow). I used a quite primitive C compiler (from Aztec) to write utility programs: one to optimize the MC6845 parameters to stay within certain constraints, another to design the QI900 character set, which used an extended MacRoman encoding to allow accents and frame/window/menu-drawing characters. The “extended” part was also necessary because Apple’s original encoding didn’t include capital accented characters. The character set was then sent over one of the serial interfaces to an EPROM burner, and a copy was saved on the Mac itself as a FONT resource file. Unfortunately, while all of those old files are still in my backups, they’re no longer readable: they were later compressed with DiskDoubler and, beyond that, were in long-obsolete file formats to begin with.

Subsequently I met other Mac users at a huge computer industry event in São Paulo; most importantly for my immediate future, the team from Unitron was there with their successful line of Apple II clones, and we talked about their plans for a Brazilian Mac clone. More about this (hopefully) in the next chapter.

My 2012 series of posts about Apple’s Lightning connector was (and still is!) the most-visited material here on the Solipsism Gradient: over 120 thousand visits so far, and counting. Most comments elsewhere about the posts have been positive.

Several of my surmises about the connector have since been confirmed; my main miss was that I supposed all 8 pins to be dynamically assignable. The actual pinout has not been officially released, but the Wikipedia article seems reasonably accurate there. Lower-cost 3rd-party Lightning cables and accessories have arrived and users seem to have quieted down with complaints about the connector.

Last month my new iPad Air arrived, and I’m finally in a position to comment on the actual user experience of the Lightning connector.

Build quality of the Apple cables and adapters is excellent – I bought an extra USB cable as well as the SD, VGA and HDMI adapters. I’ve never had one of the old 30-pin cables or adapters fail (one of them is 10 years old!) and the new ones look to be even more robust.

Inserting or removing the connector gives strong positive feedback – there is a distinct “click” and it needs more force than required by the old connector. In fact, I had to get used to not simply pulling the iPad off; some hilarity ensued when I didn’t notice it was plugged in and attempted to walk away.

All in all, I can now confidently say that Lightning is a Good Thing™. 🙂

Update: Yet Another Follow-Up — this time about Lightning and USB3.
