Solipsism Gradient

Rainer Brockerhoff’s blog

State of the iPhone

So, half a dozen software tools are now out there that unlock the iPhone in various ways. In the simpler case, they allow the installation of various third-party applications and/or twiddling of details. In the more complex case, they mess around with the various phone/SIM settings to allow the iPhone to be used with other providers’ SIM cards.

As I wrote before, Apple has apparently allowed this to happen by not implementing strict security measures. Now that the various unlocking techniques have stabilized, Apple has announced that an upcoming software update might cause “modified” iPhones to become “permanently inoperable”. Just a few days later, the iPhone update to version 1.1.1 came out; it featured the same warning in bold on its installation screen; and it did, indeed, cause some modified iPhones to lock up – the new vernacular is “bricked”, which I think is somewhat of an exaggeration. Furthermore, the new software seems as tamper-resistant as the iPod touch software, indicating that Apple has checked out current unlocking techniques and implemented harder locks.

So far, all that was to be expected. What was, to me, unexpected was the reaction of some sectors of the press and of some users – mostly the same people who opposed the iPhone price cut, it seems.

Legally, it seems Apple is in the clear. The warranty and license agreement clearly say that any such tampering is at the phone owner’s risk. Surprisingly, some people seem to feel “entitled” to get warranty support even if they completely disobeyed the license! (Just as they felt “entitled” to have the price kept constant for a long period after they bought it, I suppose.)

The core of the argument seems to be “I paid for the machine, therefore I have the right to do whatever I want with it…” (I completely agree so far) “…and Apple has the obligation to give me full support, warranty and updates no matter how I mess with it!” Now here is where we part company. Sure, I suppose current consumer protection legislation may sometimes be interpreted that way (note that I’m not a lawyer, and less familiar with US legislation than with Brazilian law); but you surely can’t claim that Apple is a public utility or a non-profit charity.

Even from the technical standpoint, these expectations are unreasonable; allow me to explain this in more detail. The problem is one of “state”, in this case defined as “a unique configuration of information in a program or machine”.

In the first computers, the state of the computer was completely predictable when it was turned on: if it had Core memory, it was in essentially the same state it had when it was last turned off, and if the computer had reasonable power supply sequencing, you could just press the start button and continue. For more complex machines this was too hard to do, and the manufacturers declared that the machine was in an undefined/unreliable state after power-on, and that therefore you had to reload the software. For newer machines with semiconductor memory, everything was of course lost during power-off, and software reloading became equally necessary. To do so, you had to enter a short program in machine language using the front-panel toggle switches; this program, in turn, would read the actual software you wanted to run from a peripheral. This was the direct consequence of the machine coming up in a “null state”.

It wasn’t long before people thought of several ingenious ways to make this process more convenient. On the IBM1130, for instance, the hardware was set to read a special card from the punched-card reader, interpret the 80 columns (12 holes each) as 80 16-bit instructions, and execute them. The most commonly used of these cards simply repeated the process with the built-in cartridge disk drive, reading the first sector on the cartridge and executing it. Later on, the falling cost of ROM led to the boot software simply being built into the machine – the Apple II had a complete BASIC interpreter built-in, for instance. The apex of this evolution was the original Mac 128, where most of the system software was in the boot ROM – the system disk simply contained additions and patches. (The QI900 microcomputer I helped design in the ’80s had all system software, with windowing, multitasking and debugging, built into its ROM.) Here we had a well-defined “state” when the machine came up – it would execute a well-known program, and do predictable things, until external software came into play.

In the ’90s the limitations of this became apparent. OSes grew to a size beyond what could be stored in ROM, and no single boot ROM could do justice to all models and peripherals (*cough* BIOS). Flash memory came along, the built-in software was renamed “firmware”, and updates to it became commonplace. It was easy to “brick” a system if power went out or if you otherwise interrupted a firmware update before it was complete. In that event, a motherboard swap was usually the only solution, because the interruption left the firmware in a partial, nonworking “state”.

Consider now the iPhone. Its entire system (OS X 1.x) is built into firmware, mostly in a compressed state. This is expanded and run by the main ARM processor, obeying a built-in boot ROM. Supposedly, there are at least two more processors, taking care of network communications and of the cellular radio; each of these has its own boot ROM, and the radio processor has separate flash memory to hold state information regarding the SIM card, cellular system activation and so forth. One of these processors no doubt controls the USB interface to allow the main processor’s flash memory to be reloaded externally. Furthermore, every SIM card also has flash memory on it, containing the IMSI number, network identity, encryption keys and so forth, bringing one more source of complexity to the process.

In other words, you have a complex system of at least three processors interacting, each one with a boot ROM, two with flash memory containing state information. Powering up such a beast is a complex dance: each processor wakes up, tests its peripherals, checks its own state, then they try to talk to each other, communicating to bring the entire system into a working state. Furthermore, the necessities of the cellphone system, and of testing out such a complex piece of hardware, mean that the iPhone must decide, on each power-up, which of several states it’s in: factory testing, just out of the box, activated, reloading the main firmware, working, “plane” mode, and so forth. This is usually done by writing special values to reserved sections of the various flash memories, and by making sure they are always consistent with each other through checksumming and other technical arcana. Should they be found inconsistent, the system will probably try to regress to a simpler state and start over there, in the extreme throwing up its metaphorical hands and pleading to be returned to the factory. Ideally, firmware writers strive to make it impossible to “brick” a device unless an actual hardware defect occurs; in practice, it’s rarely possible to envision all possible combinations of what could happen, and too few designers assume that a malicious agency is trying to trip them up at all times.
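The bookkeeping described above can be sketched in a few lines. This is a toy illustration in Python, with invented state names and a plain CRC-32 standing in for whatever integrity checks Apple actually uses; nothing here reflects real iPhone firmware:

```python
import zlib

# Hypothetical power-up states, ordered from simplest to most capable.
STATES = ["factory_test", "out_of_box", "activated", "working"]

def read_state(record: dict) -> str:
    """Return the recorded state if its reserved flash region checks out;
    otherwise regress to the simplest state and start over there."""
    if zlib.crc32(record["payload"]) == record["crc"]:
        return record["state"]
    return STATES[0]  # inconsistent: throw up the metaphorical hands

# A consistent record boots into its recorded state...
good = {"state": "working", "payload": b"radio+sim ok",
        "crc": zlib.crc32(b"radio+sim ok")}
# ...while a record whose checksum doesn't match falls all the way back.
bad = {"state": "working", "payload": b"radio+sim ok", "crc": 0xDEADBEEF}
```

The real thing spans several flash memories that must also agree with each other, but the principle is the same: the recorded state is trusted only as far as its consistency checks hold.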

So, what do these various hacks do to unlock the iPhone? They rely upon bugs in the communications software, firstly, to make the system fall back into a state where it pleads for an external agency to reload its main firmware; cleverly substituted instructions then make it do new things. After several, progressively more complex, phases of this, new applications can be installed. Up to this point, only the main flash memory has been affected and installing a new software update will just bring the system back to the standard state. Now, one of the new applications may try to mess with the radio firmware; it will clear or set regions of it to bring the radio processor’s state out of step with reality, or even write bogus activation data into it.

Now, of course, the system’s state has been moved completely outside the state space envisioned by its designers. When it powers up, the state is sufficiently consistent – the various checksums check out OK, for instance – for the various processors to confidently start working. However, a few actual values differ from the intended ones – enough to let a different SIM card work, say. If the hackers had the actual source code and documentation available, all this could be done reliably. But since they don’t, they had to work by testing changes in various places and observing what happened, clearly not an optimal process.

Consider, now, the software update process. It assumes that the iPhone’s various processors and firmware(s) are in one of the known states – indeed, this is required for the complex cooperation involved in uploading new software. If this cooperation is disrupted, the update may not begin – leading to an error message – or, worse, it may begin but not conclude properly. At this point, one or more of the iPhone’s processors may try to enter a recovery routine, either wiping the flash memories or reinitializing them to a known state. No doubt this will be successful in most cases, and the new update will then be installable on a second attempt. However, the recovery may fail – since the exact circumstances couldn’t be foreseen – or it may assume false preconditions (like a valid AT&T SIM card being present). The system will probably try to recover at successively lower states until falling back to the “can’t think of anything more, take me back to the factory” mode; or it may even lock up and “brick”.
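As a toy model of that handshake (Python again, with invented names; nothing here is based on the actual update protocol): the updater refuses to start from a state it doesn’t recognize, and an interrupted update drops into recovery rather than completing.

```python
# States the updater knows how to start from (hypothetical).
KNOWN_STATES = {"out_of_box", "activated", "working"}

def apply_update(state: str, interrupted: bool = False) -> str:
    """Sketch of the update handshake described above."""
    if state not in KNOWN_STATES:
        return "refused"    # the polite failure: an error message, no harm done
    if interrupted:
        return "recovery"   # wipe or reinitialize, then retry the update
    return "updated"
```

A hacked phone whose state falls outside `KNOWN_STATES` gets the refused path at best; at worst it starts the update, hits the interrupted path, and the recovery itself then fails because its preconditions (a valid AT&T SIM, say) don’t hold.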

Should Apple’s firmware programmers have tried to prevent this from happening? Well, up to a point they certainly did, as many problems other than hackers can cause such errors – electrical noise, a badly seated or marginally defective SIM card, a low battery, for instance. The system has to fail gracefully. However, it’s certainly not reasonable to expect them to specifically recognize and work around (or even tolerate!) the various hacks; after all, Apple’s contract with AT&T certainly requires them to show due diligence in preventing that.

Firmware for such a complex system evolves continuously. The new 1.1.1 iPhone software seems to do many things differently from the original version, even though much of the UI is the same; the same goes for the iPod touch software. Neither has been hacked as I write this. Did they now put TrustZone into operation? No idea; time will tell. My hunch is that Apple will eventually come out with an SDK for third-party applications; the question is when. Perhaps after Leopard, perhaps at the 2008 WWDC. Does Apple need AT&T, or any partner carrier, at all? Maybe for now they do, and the unlocking wrangle will continue. In the long run, Apple will be better off with a universal phone that works anywhere; possibly we’ll have to wait for the current generation of carriers to die off before this happens. Interesting times.

Oldie but goodie

It’s been about 38 years since I first saw a somewhat simpler version of this:

(Tree Swing cartoon.) Thanks to Dark Roasted Blend for reminding me.

Update: many more details about the drawing. Its origins still appear to be obscure.

I’ve been getting some positive feedback lately about my “Interesting Times” articles, so I thought I’d repost some pointers to them. The column itself is, sadly, now defunct, but new material crops up now and then; I’ve decided to post it here instead. In retrospect, the way this blog/forum is organized could use a few revisions, but that’s not likely to happen very soon.

So, “Highly Advanced But Obsolete” talks about the QI900, which was an 8-bit CP/M-based computer I helped design in the mid-’80s:

…the Z80 was too slow for a fully graphical interface, and we hadn’t the mechanical know-how to build a mouse.

…here’s the final result: the QI-900 had menus…

…and moveable windows…

…and, even better than the original Macintosh, it had preemptive multitasking – or rather, multithreading inside the same application.

I promised a follow-up article with more details, but never had the time to do the necessary research. Maybe later in the year.

Everybody’s favorite seems to be, however, “This Internet isn’t worth anything…“, where I tell some stories about setting up a commercial ISP in the early 90s:

(At Embratel – that was the government’s telecomm monopoly)

Me: “I want an Internet connection.”

Embratel Salesman: “OK. I suggest a 2400 or 9600 link, the price will be X cents per packet. That’s 20% of what it costs to send a TELEX. Isn’t that revolutionary?”

Me: “A packet means how many Kbytes?”

Embratel Salesman: “What? It’s 64 bytes per packet!”

Me: “And if a user decides to download a larger file, say, 500 Kbytes? It’ll cost hundreds of dollars!”

Embratel Salesman: “Don’t worry, that will never happen!”

Recently, Colin “Hairy Eyeball” Brayton (also known as the Gringo Sambista) pointed me at a post by Marcelo Tas called “The First Networked Brazilian”. Unfortunately there are no permalinks on that blog, so here’s a link to the story with translations.

(in 1988…) Look here, Dr. Big-Shot, I say, the Internet is a giant network that’s going to link up all the computers in the world. Computers at home, computers in big companies … Everybody’s going to be able to swap messages instantaneously from any place in the world without leaving home or getting up from their desk! A long silence indicates that the businessman is not much taken by my fancy tale. Lunch ends, the dessert arrives, followed by the coffee … As we’re leaving, I slink off without another word. As soon as he’s alone with his subordinate, the boss turns to my friend and says: The next time you waste my time on one of your dope-smoking artist pals, you’re fired!

The idea of a world-wide network really took some time to be understood, especially here in Brazil. Around the same time (1988), my ex-colleagues at UFMG (the Federal University of Minas Gerais) kindly offered me an Internet access account. At that time, the Internet was intended solely for academic and military purposes; Brazil had only two connections to the rest of the world – one at 9600 bps (bits per second!!) in Rio de Janeiro and a double-9600 bps connection in São Paulo. A year after that, the RNP (Rede Nacional de Pesquisa – National Research Network) was formed, with a 64 Kbps link.

I logged in over a single external phone line that traversed the university’s archaic PBX system and entered a 2400 bps modem. A Unix prompt came up and I could access e-mail, FTP servers, Gopher and WAIS. As the line rarely remained stable for long, I would copy everything to a local file and read it later. On my side, I ran a terminal emulator called ZTerm on my Mac SE, together with a Supra modem. In 1990 I actually donated a 14400 bps modem to the university, but they gave it back a few weeks later, saying it was incompatible with the 1200/75 baud modems used by some other users…

Remember that all this was before the Web was invented. In the early nineties, in the USA, everything was somewhat fragmented. Even academic and research institutions were divided between Bitnet and the Internet. There were lots of BBSes, which were organizing themselves into networks like Fidonet; there were Usenet newsgroups; Apple already had its AppleLink network; and there were even some commercial networks like Byte Magazine‘s BIX, The Well, CompuServe and MCI. For me, the most rewarding were the e-mail lists. The two I found most interesting were the Computer Underground Digest and UNITE (User Interface to Everything). The latter list discussed what would be the preferred user interface for the Internet in the near future; one of the participants was Tim Berners-Lee, the inventor of the Web.

I actually tried to sign up to be a Web beta-tester, as a version of the Mosaic browser had just come out for the Mac, but unfortunately my 2400 bps link was deemed insufficient. I started investigating options such as leasing a commercial Internet connection, or even setting up my own Internet provider. Here are some tidbits from those years:

Me: “I want to set up a commercial Internet provider.”

University employee: “That’s absurd! The Internet is reserved for research institutions, we’ll never let companies use our infrastructure!”

(Embratel was the government’s telecomm monopoly)

Me: “I want to set up a commercial Internet provider.”

Embratel VP: “What’s an Internet?”

Me: “The world-wide computer network, it interconnects all other networks!”

Embratel VP: “Never heard of it, but I’ll investigate and have one of our people call you.”

(Some months later…)

Me: “I want an Internet connection.”

American ISP: “OK, that’ll be US$15000 for the installation of the parabolic antenna infrastructure and US$6500 per month for a 128 Kbps connection.”

Me: “Gulp!”

American ISP: “But, for Brazil, I hear a company called Embratel has a monopoly on that sort of thing!”

(Some more months later…)

Me: “I want an Internet connection.”

Embratel Salesman: “OK. I suggest a 2400 or 9600 link, the price will be X cents per packet. That’s 20% of what it costs to send a TELEX. Isn’t that revolutionary?”

Me: “A packet means how many Kbytes?”

Embratel Salesman: “What? It’s 64 bytes per packet!”

Me: “And if a user decides to download a larger file, say, 500 Kbytes? It’ll cost hundreds of dollars!”

Embratel Salesman: “Don’t worry, that will never happen!”

Around the end of 1993, RNP officially opened up the way for commercial providers in Brazil. Here I go again:

Me: “I want an Internet connection!”

Embratel Salesman: “OK. I suggest a 2400 or 9600 link, the price will be X cents per packet…”

Me: “Hey, let’s not repeat that again, I want a fixed-price 64 Kbps link!”

Embratel Salesman (after several phone calls): “OK, it seems to be a new service, that’ll be US$4000 per month. Next month we’ll install it for you.”

Me: “Gulp! OK. Where do I sign?”

Me: “I want 12 phone lines!”

Phone Company Salesman: “What model is your PABX?”

Me: “There’s no PABX, it’s for Internet access!”

Phone Company Salesman: “Never heard of it!”

It took some time, but I finally succeeded in establishing MetaLink, first as a BBS in 1993, then as an Internet access provider in 1994. We bought a dozen 14400 bps modems, a Cisco 2511 router and a Mac Quadra 900 as a server. After N+1 problems with the phone lines, with the router connections (I had to import a connector and solder an adapter for the “Embratel Standard” modem), with overheating equipment and so forth, we were on the air. We started closing deals with companies in other states to export our provider model as a franchise. End of all problems? Did I become a dot-com millionaire? Far from it. Hear this:

Me (on the phone): “Hello! Would you like to install the Internet at your company?”

Company Owner: “No. What’s that?”

Me: “You’ll be able to communicate with your clients, publish your catalog…”

Company Owner: “The clients should come to us, and our catalog is confidential! Bye!”

Me (on the phone): “Hello! Would you like to install the Internet at your company?”

Company Owner: “Hmm… well… maybe. How much does it cost?”

Me: “X per month for a basic account. This doesn’t include your phone costs, of course.”

Company Owner: “What, that expensive and I still have to pay for a phone??? No way! Bye!”

Me (at a company): “Hello! I’m here to install your Internet connection.”

Company Owner: “OK. Install it in this computer here.” (takes me to a computer in the middle of the room.)

Me (looking around): “Hmm, I can’t see any phone around here…”

Company Owner: “Phone? Whatever for? My employees have more important things to do!”

Me (looking at the computer): “To connect to the modem, of course… but this computer doesn’t even have one!”

Company Owner: “And it won’t have one either! I don’t want this Internet thing anymore! Bye!”

Me (on the phone): “Hello! Would you like to install the Internet at your company?”

Company Owner: “Yes!”

Me (wary): “You’ll need a phone line, a computer with a modem, and the phone line charges are not included. Do you still want it?”

Company Owner: “Of course, I’ve got all that. You can come and install it.”

At the company, I see an old-time Parks 1200/75 modem, the size of a VCR.

Me: “Look, this modem is obsolete. You’ll need at least a 14400 bps modem!”

Company Owner: “You’re nuts, I’ve been using this modem to communicate with my bank for 5 years, it works very well and I won’t change it! Bye!”

Client (on the phone): “Your Internet isn’t worth anything! The connection drops all the time and often doesn’t even start!”

Me: “Under what circumstances, for example?”

Client: “You want to see? Just look!!” (noises of a dialing modem)

Me: “Ah, but while someone’s on the phone the modem can’t communicate, that’s normal!”

Client: “That’s absurd! You want me to buy another phone line, is that it? You can cancel my subscription to this @#$%^!! Bye!”

Client (on the phone): “I deleted your software because it was taking up too much space, and now I can’t get onto the Internet anymore! That’s absurd!”

And so it goes… for some years it worked reasonably well, but user support started using up more and more resources and the operating costs weren’t falling as fast as I had thought they would. Finally, when large companies such as banks and newspapers started to build access providers with hundreds of lines, I redid my spreadsheets and deduced that there was no more money to be made with dial-up access providers. I sold my stake. MetaLink still went on for some years until it was absorbed into a larger company. But it was fun while it lasted…

(click here to read this article in Portuguese)

Not catching up.

A few years ago, when I was still getting the hang of this blogging thing, I made a serious effort to stay “in the know”. I read most of the Mac websites, I had 390+ feeds in my news reader, I posted links to interesting blogs, I tried to comment on the hot issues of the day. No idea how successful that was (depending on your definition of “successful”), but one side effect became apparent after a year or so: no useful work on my applications got done. There are just so many nanoseconds in a day.

Perhaps the main cause of that was my overzealous polishing of each sentence – writing in what is, after all, my fourth language isn’t that easy – but if there’s any obsessive-compulsive polishing to be done, it would be better applied to my code than to my text. Right? On the other hand, there are people who tell me they like reading what I post here, if only to keep up to date with my trips. And the whole thing was, after all, just a sideshow to my support forums… no sense in closing it down.

So, I’ll probably not comment after the fact on most of the various issues du jour here… there have been an awful lot of them lately. I won’t even take the trouble to find links to them now. Let’s see, there was the AirPort security thing, the HIG-is-dead/Disco thing, the MacHeist controversy, the iPhone came-out-but-not-really flap, the options scandal is still going on, I still can’t comment on Leopard, yadda yadda.

My late father worked at a large company and he used to tell with some relish a story about how he used to sort the requests that crossed his desk into “not urgent”, “normal” and “extremely urgent” piles. His usual mode of operation was to ignore the “extremely urgent” stuff until someone asked after a particular item at least twice; it turned out that most of them were never followed up at all! The lesson has served me well. Many of those hot issues have a short half-life, emitting lots of sparks but decaying very soon into plain, dull lead. Nothing like letting a few weeks or months pass to get the proper perspective…

In the meantime, yes, I’ve suddenly been able to get lots of polishing done on XRay II. Stay tuned.

Disk Image gallery

I just found a very nice and interesting gallery of disk images. I decided against using disk images early on when I found that, more often than not, some local setting of the user’s Finder would make the icons move around, usually confusing less experienced users.

But these look so nice, I may change my mind now; I wonder if it might be worth it to file a bug suggesting that icons on locked disk images should snap back into their positions.

Many thanks to Jim Matthews of Fetch Softworks for remembering that I had the original idea of putting a symbolic link to /Applications on the disk image. I now see that most people are simply putting in a Finder alias instead of a symlink… I’m not so sure this is a good idea. It works quite well in Tiger, but it will store your boot disk’s name at the very least, and perhaps even break if the user has a secondary volume with that same name.
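For the record, the symlink trick amounts to a single call. Here’s a sketch in Python, with a temporary folder standing in for the disk image’s root (the staging path is invented; a real disk image would be assembled with hdiutil):

```python
import os
import tempfile

# Stand-in for the disk image's root folder.
staging = tempfile.mkdtemp()
link = os.path.join(staging, "Applications")

# A symlink stores only the literal path "/Applications", so it resolves
# correctly no matter what the user's boot volume is called.
os.symlink("/Applications", link)

print(os.readlink(link))  # → /Applications
```

A Finder alias, by contrast, records volume information along with the path, which is exactly why it can latch onto the wrong volume.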

So, it’s 6-6-6 in whatever date ordering you prefer, and this is supposedly the Number of the Beast. Or perhaps not. Do I care? Not really…

…it’s my birthday, however. This specific birthday is my 37th (in hexadecimal, of course) and it’s a rare one, in that I’m not away on a trip. Last year my birthday present was a surprise, all right: I was at the WWDC keynote and listened to Steve Jobs announce the Mac-Intel switch. Later in the day I tripped on a San Francisco sidewalk and, fortunately, suffered no serious harm.

Hopefully, today will bring no serious surprises either way, and I’m looking forward to the positive ones that this year’s WWDC will bring. More details in a month or so…

hmafra wrote:

He did it again! One of his takes now is the kernel thing. Speed, he says.

What he writes makes some sense, like the part on the cross-licensing agreement. I still don’t buy it, though.

I was about to comment on that.

I checked with some friends who know more about the kernel, and they say he’s completely wrong. In fact, there are two myths at work here. The first one says that Mac OS X uses a Mach microkernel, which is wrong. XNU, which is the Mac OS X kernel, is effectively monolithic, as all the BSD stuff runs right alongside the Mach stuff in the same context. The Mach code takes care of memory allocation and thread scheduling; the BSD code does most of the rest. There’s none of the context switching that would make a pure microkernel inefficient. Granted, there are some kernel functions which are slower than the equivalent calls in, say, Linux; but this just means that Mac OS X isn’t currently suited to huge server farms, and that Apple can tinker with this if necessary without switching kernels at all. In fact, they’re probably already doing so with the Intel versions of Leopard.

The second myth is that only Avie Tevanian was holding Mach in place by sheer orneriness, and that now that he’s gone, everybody will heave a sigh of relief, throw Darwin out, and shoehorn Linux (or even XP) into its place. That too is completely wrong. Bertrand Serlet has been in charge of the OS policy for at least two years now. And consider that XNU, because of the Mach component, is well-suited to scale to a larger number of processors. And consider that Intel is coming out with new chips that supposedly will scale well to 4, 8 or even more cores…

The idea of Leopard implementing the Windows API is, at first look, interesting. (Let’s discard the misinformation about “Microsoft saving Apple”, and that the cross-licensing included the Windows API.)

After all, Mac OS X already has several APIs for writing applications: BSD with X11, Carbon, Cocoa, Java, and so forth. Why not an additional one? Well, it’s possible in theory. In fact, the WINE people are working on such a thing. However, why should Apple make it too easy to move applications onto Mac OS X? Such apps would never be first-class citizens, the appearance would be awkward, drag&drop would probably be impossible… no, virtualization is the way to go. Running Windows inside a secondary window would also be a constant reminder of which environment is the native one, which is more in Apple’s interest.

Photos licensed by Creative Commons license. Unless otherwise noted, content © 2002-2020 by Rainer Brockerhoff. Iravan child theme by Rainer Brockerhoff, based on Arjuna-X, a WordPress Theme by SRS Solutions. jQuery UI based on Aristo.