Solipsism Gradient

Rainer Brockerhoff’s blog

A few days ago, at the monthly CocoaHeads meeting here in Belo Horizonte, I was asked to do a brief talk. Since I didn’t have anything ready on short notice, I said I could show off my SwiftChecker app and do an informal Q&A about Swift.

It turned out that most of the attendees were new to Cocoa programming and none had yet done anything in Swift, so I mostly confined myself to the Q&A part. Here are some of the questions — not necessarily in order, and expanding somewhat upon my answers.

Q: we’ve landed a contract to do an iOS app. Won’t it be faster or more efficient to learn Swift now, instead of Objective-C?

A: absolutely not! (echoed by all present who had published apps) You should learn Swift now, it’s a new thing; we’re all starting out together, you’ll gain experience and you can even submit suggestions. But write apps for yourself, nothing that depends on a deadline. In any event, you won’t be able to deliver anything until the final Xcode 6 is out in a few months. And in any real-world app, some parts may remain that are better done in Objective-C; to understand Cocoa and the other frameworks, Objective-C is a necessity. Ask me again in two years — it may be different then.

Q: is Swift interpreted like JavaScript, or compiled to a virtual machine like Java, or native?

A: it’s native. Compilation is a little unusual in that you have a simple parser in front, then you get an intermediate representation which goes to a front-end optimizer — this converts all syntactic sugar into library calls, and most of the language is implemented in the library — then that outputs the standard LLVM intermediate representation, which then goes through the same back-end optimizer and code generator that Clang uses. But at the end it’s native. The REPL/playground does some of that to fool you into thinking it’s interpreted, but that’s mostly for learning and trying out things.

Q: so how much do you use the playground? Is it true that it’s still unstable?

A: yes, it’s unstable and there are bugs inherent in the way the playground compiles stuff, so I’ve never used it. My style is to start with a very simple app — even a command-line app — and gradually build it up.

Q: I began learning Objective-C a year ago and I still haven’t got used to the brackets. What do you similarly dislike in Swift?

A: I remember getting used to the [ ]s within two weeks, after I discovered they could be nested; what I never got used to, even after 14 years, is that method and function declarations used different syntax — at least they fixed that! In Swift what I find strange is using dot syntax everywhere. I was a late adopter of it in Objective-C, and usually for properties only. I’m still typing semicolons and backspacing immediately, or putting the type first in declarations, but so far those are just old habits, not annoyances with the language.

Q: the first thing I noticed in beta 1 was that there was no private.

A: well, they put that in now; I suppose it’s important to you guys who work in teams, but I don’t like hiding things from myself. (Someone: “and exceptions?”) I’ve never needed exceptions for my applications, but better error handling is supposed to be coming soon.
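
To make that concrete, here’s a minimal sketch of the three access levels as they appeared in beta 4 (the Counter class is just an invented example):

public class Counter {
	private var count = 0			// visible only within this source file
	internal func reset() { count = 0 }	// visible within the module (the default)
	public func increment() -> Int {	// visible to other modules
		count += 1
		return count
	}
}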

Q: which is more fun to write in, Swift or Objective-C?

A: depends on how you define “fun”. Writing Swift takes some learning and you have new, powerful constructions like extensions and optionals — but in Objective-C you can do fun things with dynamic dispatching, or go down into C and do tricky things with pointers and memory. Ideally you should learn both. In general I’m in favor of also learning the lower levels, even machine language; it will help you with debugging.

Q: do you think Swift means that Apple is now abandoning imperative languages and adopting a more functional programming approach?

A: I think those are largely academic concepts — not in the sense that they’re unimportant, but in the end it comes down to what you need in practice to do a specific job. When I studied computer science, “structured programming” was the fashion of the day. Later on “object orientation” arrived and contained many of the older precepts. Now “functional programming” is in fashion, but Swift still has objects (and even the classic structured-programming constructs: if/then/else, for and while loops, etc.). So you can adopt different fashions when programming in Swift, but pragmatism is very important — use them only where appropriate.

Expanding upon that. Swift certainly isn’t a pure functional language to the extent that you can say that of Haskell (read this excellent article for details). Swift isn’t a pure object-oriented language either, neither in the C++ sense, nor in the Objective-C sense, nor in the Java sense – it has both structs and classes, both static and dynamic dispatch, but its functions, closures and generics allow you to do some functional stuff. It’s not a descendant of C because it has no pointers, no header files, no macros. It’s not a descendant of C++ because generics aren’t templates, although they use <>s. It’s not a JavaScript/PHP-like scripting language — I was shocked to hear someone describing it as such, just because it has type inference and a REPL.

So the answer is: it’s a new language, it tries to be very consistent and pragmatic, and it has imperative, functional, and object-oriented features. It can’t be all things to all people, at least in the first versions. Don’t try to fit your existing patterns into it; rather, build and learn new patterns — of course there are some people rewriting their Scala patterns (or STL, or whatever) in Swift so they can go on using the names/patterns they’re used to, but that’s like moving to an exotic country and asking for your usual breakfast: unrewarding in the long run.

Sidenote: I’d never heard of functional programming until a few years ago, when I had to learn about map/reduce for an interview with Google; when that didn’t work out, I forgot all about it until Swift came out. It’s quite interesting (see this article, for instance) and I see how it can be useful for many things; still, in Swift you’re free to mix all these paradigms, which is quite enough for my purposes.
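
To illustrate that mixing of paradigms, here’s a small sketch: the same result computed twice, once with an imperative loop and once in functional style with map/filter and the free reduce function from the current beta’s library:

let numbers = [3, 1, 4, 1, 5, 9, 2, 6]

// imperative style: sum the squares of the even numbers
var imperativeSum = 0
for n in numbers {
	if n % 2 == 0 {
		imperativeSum += n * n
	}
}

// functional style, same result (4 + 16 + 36 = 56):
let functionalSum = reduce(numbers.filter { $0 % 2 == 0 }.map { $0 * $0 }, 0, +)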

My first reading of the Swift books and of the generated Swift library header (remember, you can see the header by command-double-clicking on any Swift type) left me quite confused about the proper way to write generics and extensions to generics.

As usual, I favor learning-by-doing for such things [skydiving joke omitted] — I paged through a book on “Type Theory” to no avail — so, in a recent update to my SwiftChecker app I tried to include a useful extension.

Here is a (slightly edited) part of it:

public extension Dictionary {
	/**	Merges a sequence of values into a Dictionary by specifying
		a filter function. The filter function can return nil to filter
		out that item from the input Sequence, or return a (key,value)
		tuple to insert or change an item. In that case, value can be
		nil to remove the item for that key. */
	mutating func merge <T, S: SequenceType where S.Generator.Element == T>
		(seq: S, filter: (T) -> (Key, Value?)?) {
		var gen = seq.generate()
		while let t: T = gen.next() {
			if let (key: Key, value: Value?) = filter(t) {
				self[key] = value
			}
		}
	}
}	// end of Dictionary extension

To explain this, first recall that types can include typealias declarations within their definitions; this is much used for generic types, but can be used anywhere. Here’s the declaration for SequenceType from the library header:

protocol SequenceType : _Sequence_Type {
    typealias Generator : GeneratorType
    func generate() -> Generator
}

and of the GeneratorType protocol it uses:

protocol GeneratorType {
    typealias Element
    mutating func next() -> Element?
}

Going backwards: the GeneratorType protocol defines an associated type with the typealias declaration; in this case, Element. This is a placeholder type that will become an actual type when the protocol is adopted. These associated types form a hierarchy, so that you can refer to GeneratorType.Element and, further up, to SequenceType.Generator.Element, and so forth.
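
As a concrete (invented) example of how those placeholders get filled in, here’s a minimal sequence that counts down from a starting value; adopting the two protocols makes Element become Int:

struct CountdownGenerator: GeneratorType {
	var count: Int
	mutating func next() -> Int? {	// so Element is inferred to be Int
		return count > 0 ? count-- : nil
	}
}

struct Countdown: SequenceType {
	let start: Int
	func generate() -> CountdownGenerator {	// so Generator is CountdownGenerator
		return CountdownGenerator(count: start)
	}
}

// Countdown.Generator.Element is therefore Int, so this prints 3, 2 and 1:
for i in Countdown(start: 3) { println(i) }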

Now look again at the merge method I define in my extension to Dictionary:

	mutating func merge <T, S: SequenceType where S.Generator.Element == T>
		(seq: S, filter: (T) -> (Key, Value?)?) { // ... etc.

Inside the <> after the method name are the type constraints. When I call merge on a Dictionary, this means that:

  • T is a placeholder type (with no constraints) used for the rest of the definition
  • S is another placeholder that is constrained to conform to the SequenceType protocol, and further (by using the where clause)
  • the generate() function for S must return a Generator whose Element is equal to T, and also
  • T is inferred from the argument type of the function/closure passed as the last argument to merge, and finally
  • this function/closure must return an Optional tuple of type (Key, Value?).

But what is this (Key, Value?) type, then? Recall that we’re extending Dictionary, so that type’s associated type hierarchy is also available; checking the declaration in the library header, we see:

struct Dictionary<Key : Hashable, Value> : CollectionType, DictionaryLiteralConvertible {
    typealias Element = (Key, Value)
    typealias Index = DictionaryIndex<Key, Value>
    ... etc.

so our type constraint will match Key and Value to those pertaining to the particular Dictionary that merge is being called on! To make this clearer, we could even write (Dictionary.Key, Dictionary.Value?) in our definition.

Let’s look at a specific example of that. Notice that in the type constraints we refer to the placeholder type names (those on the left of the typealias declarations), while elsewhere we refer to the specific types wherever one is defined.

let strs = [ "10", "11", "12", "xyz" ]	// this is an Array<String> and therefore
					// its Generator.Element is String
var dict = [ 0:"0" ]	// this is a Dictionary<Int, String> and therefore
			// its Key is Int and Value is String; note that it
			// must be a var for the mutating merge to be callable
dict.merge (strs) { (s) in	// s is inferred to be strs's Generator.Element,
				// that is, a String
	let v = s.toInt()	// v is an Optional<Int>, will be nil for "xyz"
	return v != nil ? (v!, s) : nil	// returns a valid tuple or nil
}
// dict will now contain [ 0:"0", 10:"10", 11:"11", 12:"12" ]

You can verify that everything matches our type constraints either directly or by type inference. If you change anything that doesn’t fit — say, by declaring s as anything but String inside the closure, or changing the type it returns — you’ll see a syntax error. (Assuming, of course, that no other extension has matching constraints.)

That said, currently (Xcode 6.0b5) the library header still has some bugs. Specifically, many type constraints either have too many prefixes removed, or show the wrong hierarchy, so you’ll see things like <S : Sequence where T == T> which won’t compile if you copy & paste it. No doubt this will be fixed soon.

Update: oops. Fixed the return, above.

Update#2: updated for Xcode 6.0b5 — many of those typenames either lost or gained a Type suffix.

A little over a month has passed since Swift came out, and I’ve just pushed a substantial update of my SwiftChecker app to GitHub. (Update: while writing this, I discovered a bug and pushed an updated update!)

While I still have to go back now and then to delete semicolons and to correct “String s” to “let s: String” (!@#$% muscle memory!), I finally can write out dozens of lines of Swift code without going back to the documentation or, worse, spending hours on the Internet checking why the compiler is balking. I suppose that means I’m assimilating the new syntax. It’s been over a decade since I had to learn a new language, so this isn’t too bad. 🙂

Looking at my new code, I’m particularly impressed by the conciseness of the language — the interaction with Cocoa APIs is still quite verbose, of course, but once I got used to type inference, the various map/filter/reduce/join functions, and of course generics, type extensions and operator declarations, I saw new ways of refactoring my code to be both more concise and more understandable.

In my almost 14 years of writing ObjC code I experimented with adding categories to existing classes, but for the most part it always felt fragile and, after the initial experimental period, I went back to writing subclasses or container classes. Using C macros to simplify stuff was tricky and error-prone. In Swift, by using generic types, it’s possible to write extensions and operators such that they don’t conflict with the existing implementation, and this feels much safer and more natural. Although you can still unintentionally conflict with future additions to the Swift library, namespaces should mostly take care of that.

Generics and type safety are a great help once I got used to them, which admittedly took some futzing around — the compiler is still very sensitive and the error messages are often cryptic or downright wrong. The type constraints and matching used for more complicated generics can become very complex — especially to someone not current on this new-fangled “type theory”. Still, I was able to understand and build some simple generic extensions and functions that could be safely factored out into a few lines and greatly simplify other parts of my code.

Regarding optionals, I quickly got used to them; the old “returning nil” dance is now much less error-prone and at every step it’s clear which type is being handled and when it can be nil or not. The code is more readable in that regard, even if some nested “if let” statements have to remain when handling Cocoa return values. And ARC is practically transparent in Swift.
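
As a trivial sketch of that pattern (fetchUserInput here is an invented stand-in for any API that returns an optional):

let input: String? = fetchUserInput()	// assumed to return a String?
if let text = input {
	if let value = text.toInt() {	// toInt() also returns an Optional
		println("got \(value)")	// both unwrapped values are usable here
	}
}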

Speaking of values and ARC, the most troublesome parts are still those APIs — like the Security framework APIs I use in SwiftChecker — which haven’t yet been properly annotated. Certainly this will take many months, but no doubt pending updates to Clang will also take advantage of that, so that we’ll finally get rid of those pesky bridging casts.

One other positive side-effect is that the new whitespace rules, while mostly aligned with my own preferences, are finally making me insert spaces after commas, colons and around infix operators, and take other measures for increased readability.

All in all I’m very optimistic about Swift’s future. Can’t wait for the next Xcode beta to come out; the rumor mill says tomorrow, so let’s hope!

Well, finally I’ve decided to try out this newfangled GitHub stuff. Here’s my first repository: SwiftChecker, a simple real-world OS X app written in Swift. Quoting the readme file:

 My main intentions were to learn something about having parts in Swift and parts in ObjC; also to translate some of my experience with asynchronous execution to Swift. Since GCD and blocks/closures are already a part of system services and the C language (contrary to some people who claim they’re ObjC/Cocoa APIs), I found that it’s easy to call them from Swift either directly or with some small convenience wrappers.
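
For instance, calling GCD directly from Swift with a trailing closure just works; this sketch (expensiveWork and showResult are invented names) does work on a global queue and then hops back to the main queue:

dispatch_async(dispatch_get_global_queue(0, 0)) {
	let result = expensiveWork()	// assumed long-running function
	dispatch_async(dispatch_get_main_queue()) {
		showResult(result)	// assumed UI-updating function
	}
}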

The application displays a single window containing a table of running applications/processes (user space only).

For each process, it displays the icon, name and containing folder, as well as sandbox status (if present) and summaries for signing certificates (if present).

The table is not updated automatically, but there’s a refresh button to update it. A suggested exercise would be to listen to notifications for application start/stop and update the list dynamically.

Updating the table might potentially take some time if the system is very busy, since code signatures and icons will probably have to be loaded from disk. To speed this up, a simple “Future” class is implemented and used to perform these accesses asynchronously. In my timing tests, this accelerates table refresh by just under 4x — quite fair on a 4-core machine.

The project should build and run with no errors, warnings or crashes on OS X 10.10b3 and Xcode 6.0b3.

There are copious comments that, hopefully, explain some of the design decisions and workarounds where necessary.

I’ve also updated my previous post (Swift: a Simple Future) to incorporate some changes and simplifications I made during development of SwiftChecker. Check out the Future.swift file from the repository for a compilable file, with many comments.

While doing research for my next Swift article, I implemented a very simple future class, which might be of interest:

import Foundation

// Schedules the closure on the main thread's run loop; it will execute
// the next time the main run loop spins.
func PerformOnMain(work: () -> Void) {
	CFRunLoopPerformBlock(NSRunLoop.mainRunLoop().getCFRunLoop(), kCFRunLoopCommonModes, work)
}

// Schedules the closure for immediate asynchronous execution on a global GCD queue.
func PerformAsync(work: () -> Void) {
	dispatch_async(dispatch_get_global_queue(0, 0), work)
}

class Future <T> {
	let lock = NSCondition()
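	// An empty array means "not resolved yet"; a one-element array holds the
	// result. Using [T] rather than T? also lets T itself be an optional type.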
	var result: [T] = []
	
	func _run(work: () -> T) {
		PerformAsync {
			let value = work()
			self.lock.lock()
			self.result = [ value ]
			self.lock.broadcast()
			self.lock.unlock()
		}
	}
	
	init(work: () -> T) {
		_run(work)
	}
	
	init(_ work: @auto_closure () -> T) {
		_run(work)
	}
	
	var value: T {
		lock.lock()
		while result.isEmpty {
			lock.wait()
		}
		let r = result[0]
		lock.unlock()
		return r
	}
}

It’s used as follows:

let futureString: Future<String?> = Future {
	// ... protracted action that may return a string
	return didItWork ? aString : nil
}

// alternatively, if you just have a function call or expression:
let anotherString: Future<String> = Future(SomeFunctionThatTakesALongTime())
// ... do other things, then when you need the future's value:
let theString = futureString.value

In other words, the closure will be executed asynchronously, then when you need the result, getting value will block until it’s ready. The result of the closure can be of any type and even (as in the first example) be optional (that is, may be nil). Once the result is ready, getting value will not block.
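
One nice consequence: you can start several futures at once and pay only the longest individual latency. Here’s a hedged sketch of that fan-out pattern (apps, fetchIcon and show are invented names):

// start all the slow fetches at once; they run concurrently:
let futures = apps.map { app in Future { fetchIcon(app) } }
// ... then consume them; each .value blocks only until that future resolves:
for i in 0 ..< apps.count {
	show(apps[i], futures[i].value)
}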

My interest in futures began with Mike Ash’s excellent article about an Objective-C implementation of the concept. Mike’s implementation (MAFuture) is of an implicit future; that is, it returns a proxy object which transparently stands in for the future value and doesn’t block when operations like retain/release are performed; therefore, you can add the future to an NSArray or NSDictionary while it’s still being computed. The downside is that, because of the intricacies of message forwarding in Objective-C, the future can’t return a nil value.

MAFuture has grown to be rather complex, now catering to iOS memory triggers, archiving and whatnot, so I made a very bare-bones variation for my own projects, where it has been very useful.

Swift has no proxy objects (yet?), so implicit futures can’t be done; I tried to code the proxy in Objective-C and subclass in Swift, but ran into too many compiler/runtime bugs — it’s a beta, after all. However, the explicit implementation above has the advantage that the future can return any Swift type, including optional.

An additional Future property that might be useful in some places:

var resolved: Bool {
	lock.lock()
	let r = !result.isEmpty
	lock.unlock()
	return r
}

You can use this to test whether the result is available, without blocking.

There are more complex futures available already. I looked at Swiftz (by Maxwell Swadling) and BrightFutures (by Thomas Visser), which also implement other constructs. If you have a functional programming background you should examine them, too.

Update: Mike Ash suggested a small change in my code above (have to broadcast before unlock), and confirmed that everything should work properly. Thanks Mike!

Update#2: some more small changes, added the autoclosure form and a PerformOnMain function that fills in for the Objective-C performSelectorOnMainThread: method – useful for updating UI at the conclusion of a future’s closure.

Update#3: final form, in line with the published sample app on github. There are extensive comments on details.

Update#4: slight change for the new Array declaration syntax in Xcode 6.0b3.

Over two weeks ago, Apple at WWDC announced something entirely unexpected: thousands of new APIs and a brand-new programming language, Swift. No hardware, of course; it’s a developers conference, remember?

Reactions varied all over the spectrum. Non-developers (especially “industry analysts”) mostly had no idea what it meant: they said Apple had announced “nothing”. Almost all developers, however, were ecstatic — “the most significant event Apple ever staged”. Regarding Swift, this initial enthusiasm diverged as soon as people read the (relatively sparse) documentation and actually began to play around with the language — a very early beta version was available for download soon after the announcement. Hilarity, chaos and pandemonium ensued; tension, apprehension and dissension had begun.

As usual, almost everybody tried to project their grievances, expectations and experiences onto the new language. The open-source advocates griped that no source was available. The cross-platform advocates complained that there was no version running/compiling for Android (as if Apple would have any interest in promoting that!). The Objective-C programmers unsuccessfully tried to translate their code into Swift and complained that there was only limited dynamic dispatching and introspection. The C programmers complained that there were no preprocessor macros and that Swift seemed to be “Objective-C without the C”. The Haskell/Erlang/Scala programmers complained that many functional programming facilities were missing, and that the language was “too mutable”. The Java programmers complained that the language was “too C++-like”, but resented the lack of exceptions. The C++ programmers also resented the lack of exceptions and wanted std::somethings. The Type Theorists complained that generics were “not generic enough”. JavaScript programmers… well, you get the idea. Almost everybody complained about Array mutability semantics, about missing semicolons and the parsing of whitespace, and (of course) said that the syntax “looked weird”. Serious fights erupted on Twitter, disagreeing on whether Swift was a “modern” language and what Apple’s intentions were.

And, as always happens, many people said, in effect, “OMG Apple you’re soo stooopid WTF fix this now!”. This is the usual symptom of looking at the surface and not understanding what might be happening underneath.

Voluminous disclaimer and sidenote with historical digressions:

Many of the complaints in the paragraphs above are condensations of what I understood people to be saying and none are meant to be actual live quotes — which is why I didn’t link to any specific instance. I’m not interested in discussing most of these personally right now, thank you.

I’ve been programming since 1969, in C since 1984, in Objective-C since 2000. I wrote only one application in C++ back in the Classic days — it was pretty much mandatory in the CodeWarrior/PowerPlant days. I did my CS degree in the early 1970s, when a “modern” language still meant ALGOL 68 – see the mind-boggling official reference (large PDF).

When BYTE Magazine‘s special Smalltalk issue came out in 1981, I was very interested, but couldn’t come to grips with the weird syntax. I bought Adele Goldberg‘s classic books about Smalltalk — the blue book (large PDF), the orange book (large PDF) and the green book (large PDF) — and periodically tried to understand them; very difficult without access to a working compiler! In the late 80’s I put these aside (and, unfortunately, lost them in a move). After Apple acquired NeXT in 1996, I became aware of Objective-C’s roots in Smalltalk, but didn’t give it much thought.

Around 2000, restarting my work as an indie developer, I started programming in Objective-C and Cocoa. As an experienced C programmer I had little difficulty with Objective-C, and quickly got used to the nested [[ ]]s. I never wrote a full Carbon app as such. I also never managed to acquire a working Smalltalk compiler, even after a few became available on the Mac. However, a couple of years ago, I found the Smalltalk books in PDF format (as linked above) and was astounded: the formerly opaque things about methods, messages, dynamic dispatching, objects and so forth — suddenly all was clear and obvious! That’s the advantage of using-while-learning, at least for me.

Unlike many colleagues I never hesitated to go beyond Cocoa, always using CoreFoundation, BSD/Darwin and a variety of interfaces according to necessity, and once manual memory management became ingrained, tossing objects and buffers back and forth between the various APIs. Except for short utilities for my own use, I haven’t adopted ARC yet — I found too many edge cases for my established programming habits.

So, back to Swift. It really appears to be a very pragmatic language. If you look at the generated library header (in Xcode, command-double-click on any Swift type to see it), nearly all operators and types are defined there, in often surprising detail. In other words, few language features are hard-wired into the parser/compiler – the Swift library/runtime and the pre-LLVM optimizer are, instead, responsible for the language and its implementation details, and therefore more easily twiddled if necessary.

This is, of course, very convenient for Apple: a small team could tinker around with all aspects of Swift while leveraging most of the existing LLVM infrastructure and keeping up with the latest changes in iOS and OS X. Indeed, in retrospect, it appears that Swift was even driving many of those changes!

Let’s look at a brief timeline to explain what I mean:

  • 2000-2002: Chris Lattner’s master’s thesis on LLVM;
  • 2005: Lattner hired by Apple; Apple uses LLVM for the OpenGL shading language in Mac OS X 10.5;
  • 2006-2008: Apple introduces experimental llvm-gcc in Xcode 3.1; “blocks” and GCD appear;
  • 2009: Apple introduces Clang as an alternative for gcc; OpenCL and Clang static analyzer appear;
  • 2010: Lattner begins working on Swift; Clang fully supports C++ and llvm-gcc is the default compiler;
  • 2011: gcc/gdb are discarded, Clang/lldb are defaults, ARC introduced in Xcode 4.2;
  • 2012-2013: iOS/OS X are fully built with the new infrastructure, Objective-C literals in Xcode 4.4;
  • 2013: Lattner becomes head of the developer tools department;
  • 2014: Swift comes out in Xcode 6.0.

The LLVM team (Lattner, Evan Cheng, who is also at Apple, and Vikram Adve of UIUC) also received the 2012 ACM Software System Award; and of course LLVM, Clang and LLDB are open-source projects being driven forward by many people who also deserve lots of credit.

Nevertheless, it’s tempting to see all this as Chris Lattner’s plan for world domination… just picture him stroking a white cat and going “mwahaha!” 🙂 [Update: Thanks to @darth for the illustration!]

But really, all this points to progress in Apple’s platforms being driven by a consistent plan to modernize and implement new technologies everywhere; even hardware was affected, as the Apple A6 CPU (and no doubt its successors) was designed in parallel with the corresponding LLVM code generator. Similarly, from 2009 forward, software advances like ARC, blocks, GCD, runtime modernizations etc. can now be seen as preparing the ground for Swift at all levels.

Sidenote:

A few years ago I posted about Apple’s hardware options being enabled by LLVM, and with the A6 that has indeed begun to happen. Apple is now in a position to design its own CPUs: they just have to write a new optimizer backend for each one — and can switch architectures in new hardware without users, or even developers, noticing any significant change.

When I began studying programming languages and compilers, UNCOL was the holy grail of programming: a universal intermediate language to adapt any high-level language/compiler to any machine architecture. LLVM is the first widely deployed implementation of that idea.

What does all this mean for Swift? Contrary to what you may hear from some quarters, it’s not an amateurish, ham-fisted attempt at locking developers in to Apple’s “walled garden”. As Apple has said publicly, it’s a systems programming language that ties in to key Apple technologies. I don’t doubt that it’s already being deployed internally and we can expect to see key low-level frameworks — Security, dyld, IOKit are candidates which come to mind — rewritten in Swift as soon as feasible. In the long run, the kernel itself, Core Foundation and others may follow suit; picture “SwiftKit” unifying much of AppKit and UIKit. Making Swift available to developers at this beta stage is good policy but probably not Apple’s primary focus.

But, you may ask, why not use C++ or the hybrid Objective-C++? Why not use a “modern” cross-platform language? What was wrong with Objective-C anyway?

Well, there’s a reason so many low-level frameworks are written in C++ or pure C: runtime speed. Objective-C’s dynamic dispatching has vastly improved over the years but is still a bottleneck, and in 95% of cases is not really necessary — we rarely use id, and strong typing is encouraged everywhere. As for pure C code, when you look at it, there’s always tons of crufty #defines, tricks to avoid C’s legacy problems, spinlocks and stack arrays and overflow checks and… so it’s no wonder Apple decided to start anew with a new language that avoids all of those problems and still interoperates with Cocoa etc. — all while the infrastructure’s being changed underneath.

So, why not C++? Lattner is a C++ wizard, right? All of Clang/LLVM is coded in C++. So is WebKit, Apple’s other major open-source success. I can’t see that changing, and their effort to fully support all of C++’s experimental future features argues that it won’t change. But C++ doesn’t look like a good match for internal Apple technologies like GCD and ARC, and the C++ Standards Committee is certainly not interested in adopting those. On the other hand, judiciously adopting certain things like generics, operator overloading and optimized dispatching is certainly a good thing. And last but not least, Apple now owns/controls the entire toolchain and the systems programming language!

More later; I’ve started to write an entire application in Swift and after that may feel qualified to comment on language details. For now, I’m quite happy with the prospects.

[continued from part III]

So, here I was back in Brazil with my brand-new Mac 128. Of course, the first thing I did was to disassemble it — a tradition I kept up for almost three decades, until Apple’s increasing use of glue and special tooling began to make it too risky for some Macs (especially laptops and the latest iMacs).

The hardware team at Quartzil was as interested in the machine as I was, and we learned a lot from it. Remember that this was for our upcoming QI900 8-bit microcomputer. At that time (mid-1984) PAL chips, injection molding and four-layer boards were new and too expensive for all but very large-scale production runs, and we had to postpone adopting all of those. Similarly, when we looked at the Mac’s video circuits, we found that it used a horizontal flyback transformer that worked at higher frequencies than any commercially available in Brazil. That, and the fact that (because of the lack of PALs) we had to fall back on the MC6845 video controller chip, meant that we had to keep close to the 24×80 character display standard; the final display resolution was 27×90, with the first two lines reserved for a menu:


[screen photo of the QI900 display, with the two menu lines at the top]

The menus were opened by the corresponding function keys, with shortcuts accessible by the special “QI” key. Notice the special “Edit” menu with “Undo”, “Copy”, “Paste” and “Delete” equivalents — sound familiar? 🙂

My Mac was used extensively for the QI900 design. All of the documentation was done in MacWrite/MacPaint (later, in WriteNow). I used a quite primitive C compiler (from Aztec) to write utility programs: one to optimize the MC6845 parameters to stay within certain constraints, another to design the QI900 character set, which used an extended MacRoman encoding to allow accents and frame/window/menu-drawing characters. The “extended” part was also necessary because Apple’s original encoding didn’t include capital accented characters. The character set was then sent over one of the serial interfaces to an EPROM burner, and a copy was saved on the Mac itself as a FONT resource file. Unfortunately, while all of these old files are still in my backups, they’re no longer readable: they were later compressed with DiskDoubler and, beyond that, were originally in long-obsolete file formats.

Subsequently I met other Mac users at a huge computer industry event in São Paulo; most important for my immediate future, the team from Unitron were there with their successful line of Apple II clones, and we talked about their plans for doing a Brazilian Mac clone. More about this (hopefully) in the next chapter.

My 2012 series of posts about Apple’s Lightning connector was (and still is!) the most-visited material here on the Solipsism Gradient: over 120 thousand visits so far, and counting. Most comments elsewhere about the posts have been positive.

Several of my surmises about the connector have since been confirmed; my main miss was supposing all 8 pins to be dynamically assignable. The actual pinout has not been officially released, but the Wikipedia article seems reasonably accurate there. Lower-cost 3rd-party Lightning cables and accessories have arrived, and users’ complaints about the connector seem to have died down.

Last month my new iPad Air arrived, so I’m finally in a position to comment on the actual user experience of the Lightning connector.

Build quality of the Apple cables and adapters is excellent – I bought an extra USB cable as well as the SD, VGA and HDMI adapters. I’ve never had one of the old 30-pin cables or adapters fail (one of them is 10 years old!) and the new ones look to be even more robust.

Inserting or removing the connector gives strong positive feedback – there is a distinct “click” and it needs more force than required by the old connector. In fact, I had to get used to not simply pulling the iPad off; some hilarity ensued when I didn’t notice it was plugged in and attempted to walk away.

All in all, I can now confidently say that Lightning is a Good Thing™. 🙂

Update: Yet Another Follow-Up — this time about Lightning and USB3.
