Solipsism Gradient

Rainer Brockerhoff’s blog


When I began programming many years ago (now 55 and counting!), computing was in its infancy. We wrote programs on blue coding sheets, had them converted into decks of punchcards, and queued them on a shelf for “batch processing”. Usually the reward was a program listing with some obscure error messages like IEC107D, and I would mentally step through the program, repeating the process until being rewarded with a working “run”. I soon found employment at the local university, where more often than not I could run my program deck through myself and figure out things faster — and later on, in the wee hours, even use the huge mainframe computer as a primitive but enthralling personal device.

After some years the first video terminals came out, where you could view all lines of the program in glowing green rows, edit them directly, and enter the program into the batch queue. Was this the future? Well, there still was much to come. Very soon I realized I could buy one of those new Apple IIe computers along with a small TV and program/debug in the comfort of my home. And, miraculously, people actually paid me to sit at a computer keyboard and program.

And then the future arrived. When I saw the reports of the first Macintosh — a full graphics screen and a mouse — I knew that was it! The point-and-click user interface was the best. No more command lines. No more moving a clunky cursor around with the keyboard. It was heaven! And it only got better: color, larger screens, huge amounts of RAM and storage, networking! And then the internet came in, wonder of wonders.

It took years to evolve the expected standard behaviors of mouse gestures and UI conventions; application programmers had a library of items to use, so that pretty soon everything worked as expected and programs that flouted those conventions were downrated.

I was in the audience, in 2007, when Steve Jobs introed the first iPhone as a combination cellphone, iPod, and internet device. I wasn’t very impressed; I already had an iPod which I used little, had no plans of getting a cellphone (and indeed, held out until 2016 to buy one!), and the internet part was cool but the screen was small, the browser was limited and my laptop Mac had a much better feature set. OK, it wasn’t as portable but I always had it with me…

The real revolution here was the multitouch interface, an evolution of the point-and-click interface but with your fingers standing in for the mouse. It took years to evolve, and as with the Mac, Apple offered standard UI items to developers, who could then concentrate on functionality instead of reinventing the wheel.

But then in 2010 the first iPad came out and I bought one right away. Now here was a portable computer good enough to use as a personal device, and later versions became more and more impressive. My iPad today is my main device for reading, listening to music and casual browsing, although I still fall back on the Mac for developing and writing longer texts. With my recent eye troubles I tend to rely on my 77″ TV for watching movies, though, and programming has been curtailed.

Now the future has arrived again. I have no doubts that Apple’s Vision Pro — and, of course, “spatial computing” — is the Next Big New Thing.

Oddly enough, one of my arguments here is the sheer volume of vitriol about the device that one can find on the social networks: it’s unwieldy, it’s expensive, the external battery sucks, it’s been done already by others, it’s isolating and dystopian, it’s one more dragon in Apple’s evil ecosystem… Apple is doooomed, I tell you! Sound familiar? (And I’m not even on most of those networks!)

Hey, all those negatives were also rolled out in the past — and before we had Apple’s evil ecosystem, we had IBM’s evil ecosystem, remember? Every time such a futuristic device is sighted, it is clunky, expensive, power-hungry, and so forth. But also, if conditions are just right… the next version appears just in time, and it’s lighter, easier to use, and we can’t live without it anymore.

The real futuristic paradigm is now look-and-click, evolved from the old point-and-click way, and many other gestures are possible; standardisation is no doubt in progress. Why hold a mouse, or a control, or lift your hands unnecessarily, when you can convey all with small, subtle gestures? And our brains are evolved for a 3D environment — all our language and thinking is constructed around 3D metaphors.

Now, finally, we have a minimum viable system to explore 3D user interfaces. Discussing whether the Vision Pro is AR, VR or XR is beside the point; those are just implementation details and will evolve along with the UI.

Details on the hardware are scarce as I write this, and some may even be uninteresting — this thing is a self-contained computer on your face and the only relevant spec will probably be how much SSD space is left for user data. Everything else will be just good enough to be effectively transparent. And that’s the major point about the Vision Pro hardware: it’s a 3D input device for your brain, the first with sufficient quality to be transparent. Who needs holograms?

But, people will ask, what is the use case for this thing for the normal user/gamer/TV watcher/whatever? My answer: we don’t really know! Apple has, of course, selected some classic cases for the previous tech: FaceTime, games, 3D photos/videos, widescreen movies, Excel spreadsheets hanging in space (ugh), etc.; they had to do it, to get people’s attention. But between now and whenever this comes on the market in early 2024, developers will burn the midnight oil to build compelling use cases, most of which nobody (not even Apple!) had thought of before.

And this is where the Vision turns Pro. I believe that, rather than just designating a higher-capacity device compared to a non-Pro version, the Vision’s Pro name indicates that, at least until the 2nd or even 3rd generation comes out, this is a device for professionals. This is not (yet!) for casual gamers, zoomers or moviewatchers. Here’s a partial list I and a few friends came up with in a few minutes:

  • Architects, engineers, designers (as usual)
  • Doctors, dentists, psychologists, therapists and researchers in general
  • Educators and grad/postgrad students
  • Technicians in high-tech fields like power generation and aircraft maintenance
  • Industrial applications are boundless
  • Astronomers, archaeologists, artists

And most of these can afford (or their companies can) the current $3499 price.

Come to think of it, I’ll probably buy two; it’ll still come in cheaper than paying for a huge 3D TV, multiple big screens/projectors and a couple of M2 Macs.

More as details emerge.

Boom: the Return


A few years ago I wrote a series of posts about Apple’s then-new Lightning connector for iOS devices:

No doubt you’re noticing a trend there… 🙂

Anyway, the recently-released iPad Pro seems to have the much-awaited USB3 capability on its Lightning connector. It does ship with a Lightning-to-USB2 cable, though, and USB3 capability isn’t mentioned in the tech specs.

The main objection to this actually happening is that Lightning, with its 8 pins, doesn’t have enough pins to support the standard USB 3 specification. This is, again, the old assumption that Lightning cables are “just… wires leading from one end to the other”.

To restate what I posted previously, if you actually look at the USB3 pinout, there are the two differential pairs which Lightning already has, and one additional pair for USB2 compatibility. So a legacy wire-to-wire USB3 cable would need 9 pins — but, remember, Lightning connectors don’t work that way!

In other words, if you plug in an old Lightning-to-USB2 cable into an iOS device, the cable itself already has to convert the two differential pairs to USB2’s single pair. So, no need to have the extra legacy pair on the Lightning connector itself — a future Lightning-to-USB3 cable will generate that as well, and use the two high-speed pairs when plugged into a USB3 peripheral. The current pinout is, therefore, quite sufficient.
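
Just to make the pin arithmetic explicit, here’s a throwaway Swift snippet counting the published USB 3.0 (Standard-A) pin names against the unofficial Lightning pinout discussed in the posts below; the Lightning names follow the Wikipedia listing and are not confirmed by Apple.

```swift
// Pin counting only: USB 3.0 names are from the published spec; the
// Lightning names follow the unofficial (Wikipedia) pinout, not Apple.
let usb3Pins = [
    "VBUS", "D-", "D+", "GND",           // legacy USB 2.0 group
    "SSRX-", "SSRX+", "GND_DRAIN",       // SuperSpeed receive pair + drain
    "SSTX-", "SSTX+"                     // SuperSpeed transmit pair
]
let lightningPins = [
    "PWR", "GND",                        // power and ground
    "L0p", "L0n", "L1p", "L1n",          // two differential data lanes
    "ID0", "ID1"                         // identification/control
]

print("Straight-through USB3 cable: \(usb3Pins.count) pins")   // 9
print("Lightning connector: \(lightningPins.count) pins")      // 8
// The missing legacy D+/D- pair is synthesized by the chip inside the
// Lightning plug, so the connector itself never needs a ninth pin.
```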

Apple’s (pre-)announcement of the Apple Watch left the tech world in the usual disarray. Is it an expensive knock-off of Android watches (people tell me there is such a thing!)? Is it an attack on the high-end Swiss watch market? Is it an attack on the low-end Japanese watch market? Is it an even more transparent lock-in attempt on soi-disant “Apple fanbois”? I’d answer “no” to all those questions, but right now I’m more interested in the hardware and software technology of the watch.

Notice that the above link doesn’t mention iOS anywhere, but this other link has the magic word: WatchKit. Quote: “WatchKit Apps. Soon your favorite apps will feature controls and interactions unique to Apple Watch, enabling you to enjoy them in dynamic new ways.”

Speculations about WatchKit since then usually have mentioned one or two assumptions:

  1. WatchKit will be written in/accessible only from Swift;
  2. WatchKit apps will run under iOS on the Apple Watch.

The first is, of course, wishful thinking from developers investing in the new Swift language. The second is, in my opinion, completely unwarranted and I’ll try to explain why.

This post is the most plausible so far: “WatchKit apps will ship as embedded binaries in iPhone apps, using the same basic principals [sic] as iOS 8 extensions. There will be some mechanism for the watch paired to an iPhone to detect and automatically install these ‘apps’ based on what is available on the paired iPhone. Delete the container app from the iPhone, it disappears from the watch. Xcode will have a template to add a WatchKit app to an iPhone app project.”

Let’s back off WatchKit for a second and look at what we’ve seen of the hardware. The entire main board is shrunk down to a single unit: the S1. If you stop the middle introduction film at 4:46, you’ll see that it’s really a collection of chips and SMT components on an encapsulated multilayer board — not really a “single chip” as the narration says, but many large CPU “chips” nowadays are like that, too. Other than the S1, there’s of course the “Taptic Engine” assembly which does the wrist tapping, the crown sensor assembly, antennas and display, and the most important part: the battery.

Battery life is the make-or-break feature of the Apple Watch. iFixit’s disassembly of the Moto 360 watch shows why: there’s a square peg battery inside a round casing, rated at 320 mAh. Even though Motorola apparently build their own batteries, they don’t have enough volume to do a round one. Apple doesn’t have a volume problem and their casing is square, so they’re free to use all remaining volume for a longer-lasting battery.

The 320 mAh rating and the Moto 360’s typical battery life of 12 hours mean that the watch draws, on average, just under 27 mA. But they run Android on the watch, using an off-the-shelf TI ARM processor with attached RAM, flash memory, and so forth, so that figure is not surprising. In other words, it’s a stripped-down cellphone/MP3 player.

Suppose that Apple did its usual optimization of battery size, usage, etc., in a stripped-down iPod nano. It’s half the size of the nano, which has a 30-hour life, so we can assume half the battery, meaning 15 hours. OK, that would be marginally acceptable, perhaps.
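
For what it’s worth, here’s that arithmetic spelled out in a few lines of Swift; all the figures are the assumptions above, not measurements.

```swift
import Foundation

let moto360CapacitymAh = 320.0      // from the iFixit teardown
let moto360LifeHours   = 12.0       // typical reported battery life
let averageDrawmA = moto360CapacitymAh / moto360LifeHours
print(String(format: "Moto 360 average draw ≈ %.1f mA", averageDrawmA))  // ≈ 26.7

// Hypothetical Apple Watch guess: half an iPod nano battery with
// nano-class power management (30-hour nano life → ~15 hours).
let nanoLifeHours = 30.0
print("Half-a-nano estimate ≈ \(nanoLifeHours / 2) hours")
```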

But remember, the Apple Watch needs an iPhone nearby. In fact, many of the published functions, such as Siri, cellphone call response, GPS and so forth certainly use the iPhone’s hardware and software for that. Remember that one of the culprits of excessive battery usage is generic apps and processes running on the device. Remember that Apple, since the first iPod in 2001, has been very aggressive in optimizing their embedded systems. Remember that the first iPods and iPhones didn’t have any generic apps running on them, either. Remember that Apple already has technologies like Clang, OpenCL and Metal…

All that said, why run iOS and generic applications on the Watch at all? So here’s what I think likely about the real implementation.

  • Watch OS (or whatever it’s called — did they explicitly call it anything?) will not be a stripped-down iOS; maybe even not a Darwin derivative. It will be a highly optimized embedded system that has a few apps running in as few processes as possible. It will be very robust because it will be able to do only a fixed set of functions.
  • In other words, it will run only those things that may run while the paired iPhone is not available; we don’t know yet, but that might be just the timekeeping and pulse measuring apps. If the iPhone is there, the Watch will also work as a specialized I/O and display device for the apps installed there.
  • WatchKit will run on the paired iPhone inside a special server process; a matching iOS app will show installed Watch apps — probably those apps will be from the normal App Store, since they usually will have an iOS counterpart.
  • So, an installed Watch app will have at least some sort of preference app or pane on the iPhone; no use typing in passwords and such on the Watch, right? The part written in/for WatchKit will contain a server plugin that does the heavy lifting, data collating and communicating with the outside world, but it will also contain the application logic itself, commanding the Watch to do or display certain things.
  • I don’t mean to imply that the Watch will run a full WebKit client and the iPhone a web server; that might be overkill. Perhaps a useful subset of that, perhaps some variation of Display PostScript, some interpreted command language, or just a sequence of drawing orders? The important part is that there’ll be a single process on the Watch doing the UI, and all the application-specific parts can be offloaded to the iPhone (see the sketch just after this list).
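
Purely as illustration of that last point, here’s a toy Swift sketch of how such a split might look. Every type and name is invented here; this is emphatically not Apple’s actual WatchKit design.

```swift
// Speculative sketch of the "sequence of drawing orders" idea: the iPhone
// side builds a list of simple UI commands and the Watch runs a single dumb
// renderer that replays them. All names are invented for illustration.

enum WatchDrawCommand {
    case clear
    case text(String, x: Double, y: Double)
    case image(named: String, x: Double, y: Double)
    case tapZone(id: Int, x: Double, y: Double, width: Double, height: Double)
}

// iPhone side: application logic (running in the WatchKit "server" process)
// produces a frame's worth of commands.
func buildWeatherFrame(temperature: Int) -> [WatchDrawCommand] {
    [
        .clear,
        .image(named: "sun", x: 10, y: 10),
        .text("\(temperature)°", x: 60, y: 20),
        .tapZone(id: 1, x: 0, y: 0, width: 136, height: 170)
    ]
}

// Watch side: one process that only knows how to draw and report taps back.
func render(_ commands: [WatchDrawCommand]) {
    for command in commands {
        print("draw:", command)   // stand-in for actual drawing
    }
}

render(buildWeatherFrame(temperature: 23))
```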

One consequence is that you can forget the idea of “jailbreaking” the Watch to connect to a non-iPhone, of course. Another one is that battery life might be at least a day, maybe even two or more. Nothing on Apple’s site so far contradicts any of my reasoning.

So, will WatchKit be accessible from Swift apps? Certainly. Will it itself be written in Swift? I doubt it for now. Maybe in iOS 9 some of the frameworks in iOS (and OS X) will have been rewritten, assuming that by then the Swift optimizer will be good enough. But that won’t be the case in a few months.

Possible but unlikely: WatchKit may have an API to download actual application code to the S1, which may (or may not) have an ARM-like architecture. Only in such a case — and since there will be no Cocoa/iOS frameworks on the Watch — would I expect the downloaded code to be in Swift (without optionals!), for extra safety; can’t have the Watch crashing and rebooting, right?

Update: Marcel Weiher kindly reminded me of CarPlay, which apparently works like that; nobody would say that cars are running iOS. On the other hand, in that case, the device is connected over USB (that is, reasonable bandwidth) and the car doesn’t have any battery life problems.

Comments welcome.

My 2012 series of posts about Apple’s Lightning connector was (and still is!) the most-visited material here on the Solipsism Gradient: over 120 thousand visits so far, and counting. Most comments elsewhere about the posts have been positive.

Several of my surmises about the connector have since been confirmed; my main miss was that I supposed all 8 pins to be dynamically assignable. The actual pinout has not been officially released, but the Wikipedia article seems reasonably accurate there. Lower-cost 3rd-party Lightning cables and accessories have arrived and users seem to have quieted down with complaints about the connector.

Last month my new iPad Air arrived and now I finally am in a position to comment on the actual user experience of the Lightning connector.

Build quality of the Apple cables and adapters is excellent – I bought an extra USB cable as well as the SD, VGA and HDMI adapters. I’ve never had one of the old 30-pin cables or adapters fail (one of them is 10 years old!) and the new ones look to be even more robust.

Inserting or removing the connector gives strong positive feedback – there is a distinct “click” and it needs more force than required by the old connector. In fact, I had to get used to not simply pulling the iPad off; some hilarity ensued when I didn’t notice it was plugged in and attempted to walk away.

All in all, I can now confidently say that Lightning is a Good Thing™. 🙂

Update: Yet Another Follow-Up — this time about Lightning and USB3.

Boom: Pins


This post has been updated several times (last update was on Feb.8, 2013); be sure to scroll to the end. Also see my final follow-up in 2014.

One central feature of any connector/plug is the pincount. The ubiquitous AC plugs we all know from an early age have 2 (or, more usually, 3) easily visible pins and of course the AC outlet is supposed to have the same number – and, intuitively, we know that the cable itself has the same number of wires. Depending on where you live, you may also be intimately familiar with adapters or conversion cables that have one type of plug on one end and a different type on the other. Here’s one AC adapter we’ve become used to here in Brazil, after the recent (and disastrous) change to the standard:

Even with such a simple adapter – if you open it, there’s just three metal strips connecting one side to the other – mistakes can be made. This specific brand’s design is faulty, assuming that the two AC pins are interchangeable. This is true for 220V, but in an area where 110V is used, neutral and hot pins will be reversed, which can be dangerous if you plug an older 3-pin appliance into such an adapter.

Still, my point here is that everybody is used to cables and adapters that are simple, inexpensive, and consist just of wires leading from one end to the other – after all, this is true for USB, Ethernet, FireWire, and so forth. Even things like DVI-VGA adapters seem to follow this pattern. But things have been getting more complicated lately. Even HDMI cables, which have no active components anywhere, transmit data at such speeds that careful shielding is necessary, and cable prices have stayed relatively high; if you get a cheap cable, you may find out that it doesn’t work well (or at all).

The recent Thunderbolt cables show the new trend. Thunderbolt has two full-duplex 10Gbps data paths and a low-speed control path. This means that you need two high-speed driver chips on each end of the cable (one next to the connector, one in the plug). This means that these cables sell in the $50 price range, and it will take a long time for prices to drop even slightly.

DisplayPort is an interesting case; it has 1-4 data paths that can run at 1.3 to 4.3Gbps, and a control path. The original connector had limited adoption and when Apple came out with their smaller mini version, it was quickly incorporated into the standard, and also reused for Thunderbolt. An even smaller version, called MyDP, is due soon. Analogix recently came out with an implementation of MyDP which they call SlimPort. MyDP is intended for mobile devices and squeezes one of the high-speed paths and the control paths down to 5 pins, allowing it to use a 5-pin micro-USB connector. Here’s a diagram of the architecture on the device side:

If you read the documentation carefully, right inside the micro-USB plug you need a special converter chip which converts those 3 signals to HDMI, and from then on, up to the other end of the cable, you have shielded HDMI wire pairs and a HDMI connector. Of course, this means that you can’t judge that cable by the 5 pins on one end, nor can you say that that specific implementation “transmits audio/video over USB”. It just repurposes the connector. Such a cable would, of course, be significantly more expensive to manufacture than the usual “wires all the way down” cable, and (because of the chip) even more than a standard HDMI cable.

Still referring to the diagram above, if you replace the blue box (DisplayPort Transmitter) with another labeled “MHL Transmitter”, you have the MHL architecture, although some implementations use an 11-pin connector. Common to both MHL and MyDP is the need for an additional transmitter (driver) chip as well as a switcher chip that goes back and forth between that and the USB transceivers. This, of course, implies additional space on the device board for these chips, traces and passive components, as well as increased power consumption. You could, of course, put in a micro-HDMI connector and drive that directly, but that would save neither space nor power.

Is there another way to transmit audio/video over a standard USB implementation? There are device classes for that, but they’re mostly limited to low-bandwidth applications like webcams, at least over USB2. Ah, but what of USB3? That has serious bandwidth (5Gbps) that certainly can accommodate large-screen, quality video, as well as general high-speed data transfer – not up to Thunderbolt speeds, though. You need a USB3 transceiver chip in the blue box above, and no switcher chip; USB3 already has a dedicated pin pair for legacy USB2 compatibility. All that’s needed is the necessary bandwidth on the device itself, and here’s where things start to get complicated again.

You see, there’s serious optimization already going on between the processor and display controller – in fact, all that is on a single chip, the SoC (System-on-a-Chip), labelled A6 in the iFixit teardown. Generating video signals in some standard mode and pulling them out of the SoC needs only a few added pins. If you go to the extra trouble of also incorporating a USB3 driver on the SoC and a fast buffer RAM to handle burst transfers of data packets, the SoC can certainly implement the USB3 protocols. But – and that’s the problem – unlike video, that data doesn’t come at predictable times from predictable places. USB requires software to handle the various protocol layers, and between that and the necessity to, at some point, read or write that data to and from Flash memory, you run into speed limits which make it unlikely that full USB3 speeds can be handled by current implementations.
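
To put rough numbers on that, here’s a quick Swift calculation; the 8b/10b overhead is part of the USB 3.0 spec, but the flash write figure is just an assumed ballpark for a device of that era, not a measurement.

```swift
import Foundation

let lineRateGbps = 5.0                      // USB 3.0 SuperSpeed signaling rate
let payloadGbps  = lineRateGbps * 8 / 10    // 8b/10b encoding overhead
let payloadMBps  = payloadGbps * 1000 / 8   // bits → megabytes per second

let assumedFlashWriteMBps = 30.0            // hypothetical eMMC-class write speed

print(String(format: "USB3 theoretical ceiling ≈ %.0f MB/s", payloadMBps))            // ≈ 500
print(String(format: "Assumed flash write speed ≈ %.0f MB/s", assumedFlashWriteMBps))
// Even before protocol and software overhead, storage would be the bottleneck.
```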

But, even so, let’s assume, for the sake of argument, that the A6 does implement all this and that both it and the Flash memory can manage USB3 speeds. Will, then, a Lightning-to-USB3 cable come out soon? Is that even possible? (You probably were wondering when I would get around to mentioning Lightning…)

Here’s where the old “wires-all-the-way-down” reflexes kick in, at least if you’re not a hardware engineer. To quote from that link:

Although it’s clear at this point that the iPhone 5 only sports USB 2.0 speeds, initial discussions of Lightning’s support of USB 3.0 have focused on its pin count—the USB “Super Speed” 3.0 spec requires nine pins to function, and Lightning connectors only have eight.

…The Lightning connector itself has two divots on either side for retention, but these extra electrical connections in the receptacle could possibly be used as a ground return, which would bring the number of Lightning pins to the same count as that of USB 3.0—nine total.

(…followed, in the comments, by discussions of shields and ground returns and…)

Of course, that contains the following failed assumptions (beyond what I just mentioned):

  1. Lightning is just a USB3 interface in disguise, and
  2. Cables and connectors are always wired straight-through, at most with a shield around the cable.
  3. If there are any chips in the connector, they must be sinister authentication chips!

These assumptions also underlie the oft-cited intention of “waiting for the $1 cables/adapters”. But, recall that Apple specifically said that Lightning is an all-digital, adaptive interface. USB3 is not adaptive, although it can be called digital in that it has two digital signal paths implemented as differential pairs. If you abandon assumptions 1 and 2, assumption 3 becomes just silly. Remember, the SlimPort designers put a few simple digital signals on the connector and converted them – just a cm or so away – into another standard’s differential wire pairs by putting a chip inside the plug.

So, summing up all I said here and in my previous posts:

  • Lightning is adaptive.
  • All 8 pins are used for signals, and all or most can be switched to be used for power. So it makes no sense to say “Lightning is USB2-only” or whatever. (But see update#5, below.)
  • The outer plug shell is used as ground reference and connected to the device shell.
  • At least one (probably at most two) of the pins is used for detecting what sort of plug is plugged in.
  • All plugs have to contain a controller/driver chip to implement the “adaptive” thing.
  • The device watches for a momentary short on all pins (by the leading edge of the plug) to detect plug insertion/removal. (This has apparently been disproved by some cheap third-party plugs that don’t have a metal leading edge.)
  • The pins on the plug are deactivated until after the plug is fully inserted, when a wake-up signal on one of the pins cues the chip inside the plug. This avoids any shorting hazard while the plug isn’t inside the connector.
  • The controller/driver chip tells the device what type it is, and for cases like the Lightning-to-USB cable whether a charger (that sends power) or a device (that needs power) is on the other end.
  • The device can then switch the other pins between the SoC’s data lines and the power circuitry, as needed in each case (a toy sketch of this switching follows the list).
  • Once everything is properly set up, the controller/driver chip gets digital signals from the SoC and converts them – via serial/parallel, ADC/DAC, differential drivers or whatever – to whatever is needed by the interface on the other end of the adapter or cable. It could even re-encode these signals to some other format to use fewer wires, gain noise-immunity or whatever, and re-decode them on the other end; it’s all flexible. It could even convert to optical.
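
To make the “adaptive” idea concrete, here’s the toy Swift sketch promised above: a model of the kind of negotiation and pin switching the list describes. Every type, case and pin assignment is invented for illustration; Apple’s real protocol is unpublished.

```swift
// Invented model of plug negotiation; not Apple's actual protocol.
enum AccessoryKind { case usb2Cable, charger, hdmiAdapter, serialAdapter }

struct PlugIdentity {
    let kind: AccessoryKind
    let suppliesPower: Bool        // charger vs. device that needs power
}

enum PinFunction { case identify, data, power, ground }

// Device side: after insertion is detected and the wake-up signal sent,
// ask the chip in the plug what it is, then configure the eight pins.
func configurePins(for plug: PlugIdentity) -> [PinFunction] {
    switch plug.kind {
    case .charger:
        // No data needed: route most pins to the charging circuitry.
        return [.identify, .power, .power, .power, .power, .power, .ground, .ground]
    case .usb2Cable:
        // One differential pair for USB2; the rest carry power and ground.
        return [.identify, .data, .data, .power, .power, .power, .ground, .ground]
    case .hdmiAdapter, .serialAdapter:
        // Both high-speed lanes carry data to the converter chip in the plug.
        return [.identify, .data, .data, .data, .data, .power, .ground, .ground]
    }
}

let cable = PlugIdentity(kind: .usb2Cable, suppliesPower: true)
print(configurePins(for: cable))
```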

I’ll be seriously surprised if even one of those points is not verified when the specs come out. And this is what is meant by “future-proof”. Re-using USB and micro-USB (or any existing standard) could never do any of that.

Update: just saw this article which purports to show the pinouts of the current Lightning-to-USB2 cable. “…dynamically assigns pins to allow for reversible use” is of course obvious, if you put together the “adaptive” and “reversible” points from this picture of the iPhone 5 event. Regarding the pinout they published, it’s not radially symmetrical as I thought it would be (except for one or two pins), so I really would like confirmation from some site like iFixit (I hear they’ll do a teardown soon). They also say:

Dynamic pin assignment performed by the iPhone 5 could also help explain the inclusion of authentication chips within Lighting cables. The chip is located between the V+ contact of the USB and the power pin of the Lightning plug.

I really see no justification for the “authentication chip” hypothesis, and even their diagram doesn’t show any single “power pin of the Lightning plug”. It’s clear that, once the cable’s type has been negotiated with the device, and the device has checked if there’s a charger, a peripheral or a computer on the other end, the power input from the USB side is switched to however many pins are required to carry the available current.

Update#2: I was alerted to this post, which states:

The iPhone 5 switches on by itself, even when the USB end [of the Lightning-to-USB cable] is not plugged in.

Hm. This would lend weight to my statement that a configuration protocol between device and Lightning plug runs just after plug-in – after all, such a protocol wouldn’t work with the device powered off. It also means that the protocol is implemented in software on the device side; otherwise they could run it silently and power up the entire device only when it really needs to.

Still, there’s the question of what happens when the device battery is entirely discharged. I suppose there’s some sort of fallback circuit that allows the device to be powered up from the charger in that case.

Finally, I’ve just visited an Apple Store where I could get my first look at an iPhone 5. The plug is really very tiny but looks solid.

Update#3: yet another article reviving the authentication chip rumor. Recall how a similar flap about authentication chips in Apple’s headphone cables was finally put to rest? It’s the same thing; the chip in headphones simply implemented Apple’s signalling protocol to control iPods from the headphone cable controls. The chip in the Lightning connector simply implements Apple’s connector recognition protocol and switches charging/supply current.

Apple is building these chips in quantity for their own use and will probably make them available to qualified MFi program participants at cost – after all, it’s in their interest to make accessories widely available, not “restrain availability”.

Now, we hear that “only Apple-approved manufacturing facilities will be allowed to produce Lightning connector accessories”. That makes sense in that manufacturing tolerances on the new connector seem to be very tight and critical. Apple certainly wouldn’t want cheap knock-offs of the connector causing shorts, seating loosely or implementing the recognition protocol in a wrong way; this would reflect badly on the devices themselves, just as with apps. Think of this as the App Store for accessory manufacturers. 🙂

Update#4: new articles have come out with more information, confirming my reasoning.

The folks at Chipworks have done a more professional teardown, revealing that the connector contains, as expected, a couple of power-switching/regulating chips, as well as a previously unknown TI BQ2025 chip, which appears to contain a small amount of EPROM and implements some additional logic, power-switching, and TI’s SDQ serial signalling interface. SDQ also uses CRC checking on the message packets, so a CRC generator would be on the chip. Somewhat confusingly, Chipworks refer to CRC as a “security feature”, perhaps trying to tie into the authentication angle, but of course any serial protocol has some sort of CRC checking just to discard packets corrupted by noise.
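
As an aside, CRC really is just error detection, not authentication. Here’s a toy Swift version of the Dallas/Maxim-style CRC-8 commonly used by single-wire protocols; whether SDQ uses exactly this polynomial is my assumption.

```swift
import Foundation

// Toy CRC-8 in the Dallas/Maxim style (reflected polynomial 0x31 → 0x8C).
// This is error detection only: a receiver recomputes the CRC and simply
// drops any packet that doesn't match.
func crc8(_ bytes: [UInt8]) -> UInt8 {
    var crc: UInt8 = 0
    for byte in bytes {
        var b = byte
        for _ in 0..<8 {
            let mix = (crc ^ b) & 0x01
            crc >>= 1
            if mix != 0 { crc ^= 0x8C }
            b >>= 1
        }
    }
    return crc
}

let packet: [UInt8] = [0x3A, 0x01, 0xFF]   // made-up payload bytes
print(String(format: "CRC = 0x%02X", crc8(packet)))
```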

Anandtech has additional information:

Apple calls Lightning an “adaptive” interface, and what this really means are different connectors with different chips inside for negotiating appropriate I/O from the host device. The way this works is that Apple sells MFi members Lightning connectors which they build into their device, and at present those come in 4 different signaling configurations with 2 physical models. There’s USB Host, USB Device, Serial, and charging only modes, and both a cable and dock variant with a metal support bracket, for a grand total of 8 different Lightning connector SKUs to choose from.

…Thus, the connector chip inside isn’t so much an “authenticator” but rather a negotiation aide to signal what is required from the host device.

Finally, there’s the iFixit iPod Nano 7th-gen teardown. What’s important here is that this is the thinnest device so far that uses Lightning, and it’s just 5.4mm (0.21″) thick. From the pictures you can see that devices can’t get much thinner without the connector thickness becoming the limiting factor.

Update#5: the Wikipedia article now shows a supposedly definitive pin-out (and the iFixit iPhone 5 teardown links to that). Although I can’t find an independent source for the pin-out, it shows two identification pins, two differential data lanes, and a fixed power pin. Should this be confirmed, it would mean that the connector is less adaptive with regard to switching data and power pins; on the other hand, that pinout may well be just an indication of the default configuration for USB-type cables (that is, after the chips have negotiated the connection).

Boom: Teardown


This is yet another follow-up to my posts about Apple’s new Lightning mobile connector.

The cool folks at iFixit have now published their comprehensive teardown of the iPhone 5. (Hopefully the other 2 new devices will also be done soon.)

Here’s a detail view of the Lightning connector inside the case: (click on all images to see the hires version from their site)

Notice two screws securing the connector body to the device case, and the metal bracket that keeps the other end from flexing. Here’s a closeup of the disassembled connector and of the plug:

Remember that the inside of the case is milled out of a solid block of metal, so this design looks to be much less breakable than the old 30pin version – I’ve been told that the tab end of the plug also feels very sturdy. Here’s a close-up of both connectors:

The space savings are considerable. I read that Apple has no plans to do a dock, so this looks to be a third-party opportunity. The previous connector had no serious protection against flexing, so previous docks had to grip the back and bottom of the device, which also led to a profusion of plastic dock adapters; Lightning docks should be able to get away with just a simple generic back support.

Confirming the rest of my speculations regarding the “adaptive” part of the Lightning interface will have to wait until the specifications leak… stay tuned.

Update: just saw a report about a teardown of the Lightning plug: “Peter from Double Helix Cables took apart the Lightning connector and found inside what appear to be authentication chips. He found a chip located between the V+ contact of the USB and the power pin on the new Lightning plug.”

It’s interesting how people assume that Lightning is just a pin-compatible extension of USB (which also explains why they feel that a cable should cost only a dollar or two). Note also that nobody knows for sure yet which pin is “the power pin”. Unfortunately the picture is very unclear; it seems that there are three chips and a few passive components, at least on that side. Which, of course, goes far to confirm my hypothesis that the cable contains circuitry to tell the device which sort of adapter/cable is connected, as well as signal drivers/conditioners for USB2 (for this specific cable). The large chip, if it is really “directly in the signal path of the V+ wire”, probably switches the charging current to the appropriate pins on the device, once the interface has adapted to the cable – all this “authentication chip” paranoia is just – paranoia.

Update#2: it says here: “Included in the new high-tech part is a unique design which the analyst says is likely to feature a pin-out with four contacts dedicated to data, two for accessories, one for power and a ground. Two of the data transmission pins may be reserved for future input/output technology like USB 3.0 or perhaps even Thunderbolt, though this is merely speculation.”

See what I mean about people thinking that all pins are equal? What do they think that “adaptive” means, anyway?

Update#3: John Siracusa and Dan Benjamin agreed with my points in their latest talk show (references start just after the 49:00 mark) and they even sorta pronounced my name right. Thanks guys! 😉

Update#4: found a good discussion of the Lightning (published over a month ago!), with somewhat blurry pictures of a disassembled Lightning plug. They seem to match well with the linked pictures in the first update, above.

Update#5: my final summary. Please comment there, comments here are now closed.

Boom: A Follow-Up


This is a follow-up to my post about Apple’s new Lightning mobile connector. Thanks to all who linked or commented.

Apple has since published mechanical drawings of the iPhone 5, iPod nano 7th gen, and iPod touch 5th gen. The nano’s drawing has the best illustration of the connector side:

Applying the known width of 10.02mm to the connector photos, it would seem that the projecting part of the plug is about 1.6mm thick and 6.9mm wide. It so happens that this matches closely the standard printed circuit board thickness of 62 mils (1.57mm). The conclusion is clear: the plug consists of a PCB with 8 contacts on each side in a protective metal cladding. The PCB inside the plug’s body will have components on each side; remember the Thunderbolt cable teardown?
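
That estimate is just photo scaling; here’s the calculation spelled out, with the pixel measurements being invented stand-ins for whatever one reads off the photo.

```swift
// Hypothetical photo-scaling exercise; the pixel counts are invented examples.
let knownWidthMM   = 10.02      // connector width from Apple's drawing
let housingWidthPx = 501.0      // measured (hypothetically) off the photo
let mmPerPixel = knownWidthMM / housingWidthPx

let tabThicknessPx = 80.0
let tabWidthPx     = 345.0
print(tabThicknessPx * mmPerPixel)   // ≈ 1.6 mm
print(tabWidthPx * mmPerPixel)       // ≈ 6.9 mm
```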

The actual interface specifications are still not available to the public – in fact, I think the old 30-pin connector specs never were officially available. What information we have is reverse-engineered or leaked from companies that are in Apple’s MFi program. But connector trends are clear. Some of you may remember the old SCSI 50-pin and Centronics 36-pin connectors; both interfaces were comparatively low-speed parallel designs and were superseded by the higher-speed serial USB and FireWire interfaces. Thunderbolt, for instance, despite its relatively numerous 20 pins, is a serial interface. It has two full-duplex, differential, 10Gbps data paths (meaning 2 pins for each of the 4 transmit/receive pairs) as well as two low-speed transmit/receive pins, a connector sensing pin, a power pin, and several ground/shield pins. This is still overkill for the new generation of mobile devices, hence Lightning, which has only 8 pins (plus the outer ground/shield) and, no doubt, lower speeds.

So what are the technical reasons to go with Yet Another Proprietary Connector? Apple has, of course, gone down that road so often that their engineers are well aware of the tradeoffs involved. We saw the same thing happen a year or two ago: instead of going to USB3 or eSATA for external devices, they developed Thunderbolt (together with Intel), which is both faster and much more flexible.

Many people complain that Apple should have chosen micro-USB instead of a proprietary connector. The micro-B connector is almost exactly the same size as the Lightning, though it has only 5 pins and is not reversible. The USB3 version of micro-B has 5 extra pins in a separate section and twice the width of the USB2 version. USB3 has actually two different pin groups: one implements the USB2 standard while 2 extra signal pairs handle the new specification. As I mentioned before, usually the USB2 version has limited current carrying capability which would make it unsuitable for the upcoming Lightning iPad; the USB3 version has enough capability but is larger. Non-standard variations are already on the market, and yes, some connector manufacturers are now specifying higher current capacity on the two power pins. True, there are standards being implemented that expand these limits even more, but they’re not quite on the market yet.

Designing an interface means balancing many technical and design trade-offs, especially for a mobile device where device volume, battery capacity, thermal limitations, available PCB area, charging parameters and statistics of intended use all impose serious constraints. Let’s posit for the moment that Apple had gone the micro-USB3 route. This would mean a dedicated USB3 host controller/transceiver chip like the TI TUSB1310A as well as several added passive components. This representative chip actually contains a secondary USB2 controller/transceiver for compatibility, has a peak power consumption of nearly 0.5W while operational and requires 3 different power supplies. The chip alone is over 12×12 mm. More importantly, Apple’s SoC (system-on-a-chip) design means that their main chip has to implement the transceivers’ dual data interface and various control signals; several dozen of the chip’s 175 pins would connect to the SoC, needing additional PCB space.

In contrast, support for Lightning will probably need no extra chips and fewer than a dozen extra pins on the SoC; 8 of these will go straight to the connector. One or two of the pins will probably sense which kind of adapter or charger/cable is connected and the others will go, in parallel, to the power controller – switching them will allow enough current for charging without overloading any particular pin. Any current-hungry drivers, signal converters and so forth wouldn’t be on the motherboard at all but inside the plug itself, further reducing cost and power consumption for the bare device. The interface may even be self-clocking, with a clock signal routed to the connector whenever necessary, so future systems could implement higher speeds.

All this flexibility means that, probably, the Lightning-to-USB2 cable included with the current devices has a driver/signal conditioner chip embedded inside the plug. Any standard charger may be used but Apple’s charger already has a way of signaling its capabilities to the device. The Lightning-to-30-pin adapters have, in addition, at least a stereo D/A converter for line audio out. Upcoming HDMI and VGA adapters will receive audio/video signals in serial form and convert them to the appropriate formats. Third parties can, should Apple’s software allow it, build audio inputs, digital samplers, video-in or medical instrumentation adapters. All this without impacting users who need none of these adapters, but just want to charge their device.

Could all this also be done with USB2 or 3? Well, in principle, yes, but at a cost. Other manufacturers are using, for instance, MHL, which passes audio/video over the USB connector. However, this simply repurposes the connector itself – you need a separate controller chip to generate the MHL signals while the USB controller is turned off, unless you make yet another non-standard plug. (Note that MHL adapters are in the same price range as Apple’s adapter, so nothing gained there either.)

Update: forgot to mention that, with the assumptions above, nothing but SoC speed restrictions preclude Apple from releasing a Lightning-to-USB3 cable in the future.

Update#2: my comments on the physical details of the connector.

Update#3: my final summary. Please comment there, comments here are now closed.

Boom!


In keeping with recent meteorological themes – iCloud, Thunderbolt – yesterday Apple introduced the new “Lightning” connector for its mobile hardware. This will be the replacement for the venerable 30-pin dock connector introduced 9 years ago. I haven’t seen it in person yet, but here’s some speculation on how it may work.

First, here’s a composite image of the plug and of the new connector (which is, apparently, codenamed “hero”):

You can see that there are 8 pins and that the plug has a metallic tip which, from all accounts, serves as the neutral/return/ground. Since the plug is described as “reversible”, the same 8 pins are present on the other side and, internally to the shell, connected to the same wires. However, inside the connector you can clearly see that the mating pins exist on one side only – presumably to reduce the internal height of the connector by a millimeter or two, at the expense of the slightly better reliability and doubled current capacity that contacts on both sides would have provided.

There are locking springs on the side of the connector that mate with the cavities on the plug, hold it in place, and serve as the ground terminal. The ground is connected before the signal pins to protect against static and the rounded metal at the tip (from the photos it seems to be slightly roughened) wipes against the mating pins to remove any dirt or oxide buildup. At the same time, the pins on the connector (not on the plug!) are briefly shorted to ground when the plug is inserted or removed, alerting the sensing circuitry to that.

People keep asking why Apple didn’t opt for the micro-USB connector. The answer is simple: that connector isn’t smart enough. It has only 5 pins: +5V, Ground, 2 digital data pins, and a sense pin, so most of the dock connector functions wouldn’t work – only charging and syncing would. Also, the pins are so small that no current plug/connector manufacturer allows the 2A needed for iPad charging. Note that this refers to individual pins; I’ve been told that several devices manage to get around this by some trick or other, but I couldn’t find any standard for doing so.

This takes us back to the sensing circuitry referred to above. If one of the pins is reserved for sensing – even if it is the “dumb” sensing type that Apple has used in the previous generation, using resistors to ground – and two pins are mapped directly to the 2 USB data pins [update: I now think such direct support is unlikely], then – whether the USB side is plugged into a charger or into a computer’s USB port – the other 5 pins can be used for charging current without overloading any single pin.
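
As a sanity check on the current figures, the division works out comfortably; the per-pin rating below is a generic assumption for contacts this size, not a published Lightning number.

```swift
// Hypothetical current sharing: spread the 2 A iPad charging current over
// the 5 pins not used for sensing or data. The 0.5 A per-pin rating is an
// assumed ballpark for contacts of this size, not an Apple figure.
let chargingCurrentA   = 2.0
let pinsCarryingPower  = 5.0
let currentPerPinA     = chargingCurrentA / pinsCarryingPower
let assumedPinRatingA  = 0.5

print("Per-pin current: \(currentPerPinA) A")                        // 0.4 A
print("Within assumed rating: \(currentPerPinA <= assumedPinRatingA)")
```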

This also explains the size (and price) of the Lightning-to-30 pin adapter. It has to demultiplex the new digital signals and generate most of the old 30-pin signals, including audio and serial transmit and receive. The adapter does say “video and iPod Out not supported”; I originally wasn’t sure if the latter referred to audio out, but I’m now informed that “iPod Out” exports the iPod interface to certain car dashboards.

It’s as yet unknown whether Lightning will, in the future, support the new USB 3.0 spec – the current Lightning to USB cable supports only USB 2.0. This would require 6 (instead of 2) data pins, which is well within the connector’s capabilities. But would the mobile device’s memory, CPU and system bus support the high transfer rate? My guess is, not currently. Time will tell.

Update: deleted the reference to “hero” (I didn’t know it’s designers’ jargon). Also, for completeness, I just saw there’s a Lightning to micro-USB adapter for European users, where micro-USB is the standard.

Update#2: good article at Macworld about Lightning. Also, Dan Frakes confirms that Apple says audio input is not included in the 30-pin adapter.

Update#3: found out what “iPod out” means, and fixed the reference.

Update#4: I added to the micro-USB paragraph, above. Thanks to several high-profile references (The Loop, Business Insider, Ars Technica – who also credited my composite picture – and dozens of others), I saw a neat traffic spike here. WP Super Cache held up well. The comments on all those sites are – interesting. 🙂 Of course whoever is convinced that everything is a sinister conspiracy by Apple won’t be convinced by any technical argument, and I want to restrict myself to the engineering aspects.

Update#5: more thoughts (and some corrections) in the follow-up post. Please read that first and then comment over there.
