Solipsism Gradient

Rainer Brockerhoff’s blog


Fast update


So, my new Intel Mac mini is in and working. I bought the basic version: Core Solo at 1.5GHz, 512MB. It fits nicely under my iSub woofer. My iMac G5 controls it over Ethernet for remote debugging, and after some initial setup it doesn’t need a mouse, keyboard or display.

I’ve already restarted debugging XRay II as a Universal Binary and universal versions of some other stuff will be out soon. Stay tuned…

OK, so here are the details on remote debugging. I’ve finished this phase of XRay 2 development and over the next few weeks will be fully available to press on with it.

The basic idea is that I have only PowerPC Macs, and since nobody I know in Brazil has received an Intel Mac (except for a couple of DTKs, which I didn’t want to use), the solution was to use Xcode‘s remote debugging capabilities, running my executable on a machine in the ADC Compatibility Labs. These are open at no extra charge to paying developers, but most of what I’ll detail would apply to any other machine.

Most of it is explained in the Xcode User Guide. Note that I used Xcode 2.2.1, but I think this facility has been available at least since 2.0. Click on the “Remote Debugging in Xcode” section in the left frame. First, however, send e-mail to adc.labs(at)mail.apple.com and ask for machine time, explaining for how long you need the machine; I asked for 3 days (thanks, Joe Holmes!). Basically, they’ll set up a newly formatted Mac with everything standard installed, including the latest developer tools. You should check that you have the same version, I suppose. They’ll have ssh and Apple Remote Desktop activated, and will send you the IP address, usercode and password. For illustration, let’s say the IP number is 10.1.1.1 (NOT an actual IP!), the usercode is “adclabs” and the password is “stevejobs”; substitute the actual values as appropriate.

In other words, everything you do there will be inside the default home folder of the “adclabs” user. This user is also an administrator, so you’ll be able to use the password whenever administrative privileges are needed. If you have a second Mac handy, you could rehearse on that first, of course; it’s what I did, as I’m normally not that handy with the Terminal. (Thanks, by the way, to John C. Randolph, Mike Ash and several others for helping me with details.)

First step is to generate a public and private key pair; you can skip this if you already have done so in the past. Open Terminal and type:

ls ~/.ssh/

If it lists a few files, among them one called “id_rsa.pub”, you already have a key pair. If not, type:

ssh-keygen -b 2048 -t rsa

This will take about a minute and then prompt you for a file path; type <return> to use the default path. It will then prompt you for a passphrase, twice. Don’t leave this empty and don’t use too short a phrase. You now should have the id_rsa.pub file in the ~/.ssh directory.

Second step is to open Terminal and type:

ssh adclabs@10.1.1.1

wait for the Password: prompt and type in “stevejobs”, or whatever password they sent you. You should see the normal Terminal prompt now, with a name like “CE-Lab-ADC-Compatibility-Labs-Intel-iMac” at the start of the line.

Now you’d better change the password to the same passphrase you used for the RSA key – yes, usually it’s recommended to use different passwords here, but that way you won’t have to remember which one to use where; it’s just for a couple of days, anyway. Type

passwd

and you’ll be prompted for the original password, then twice for the new password. Create a .ssh directory with

mkdir ~/.ssh

and log out by typing

exit

Next step is to transfer the public key to the remote Mac. To do this, at your local prompt, type

scp ~/.ssh/id_rsa.pub adclabs@10.1.1.1:~/.ssh/authorized_keys

it will ask for your password again, and transfer the file over. Now log in again with:

ssh adclabs@10.1.1.1

and if all is well, it will at most ask for the key’s passphrase (or log you straight in, if ssh-agent is holding the key), but no longer for the account password. Now restrict permissions on the authorized_keys file by typing

chmod go-rwx ~/.ssh/authorized_keys
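One optional convenience while you’re in Terminal: an entry in your local ~/.ssh/config saves retyping the user and address. The alias name “adclab” here is made up, and this is purely a Terminal convenience; the Xcode settings further down still use the full adclabs@10.1.1.1 form.

Host adclab
  HostName 10.1.1.1
  User adclabs

After that, ssh adclab and scp somefile adclab: work as expected.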

Now you need to set up a local build folder. The trick here is that both machines should see your build folder at the same absolute path. There are several ways to achieve that; on a local network, you could have one of the machines serve the folder to the other, then use a symbolic link to map the same path (there’s a sketch of that just below). However, I found that over a long distance it’s most efficient to have mirrored folders on both machines, and copy the contents over with an extra build phase. Here’s what I did.
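As an aside, here is roughly what the symbolic-link variant would look like, assuming you’ve mounted the remote user’s home folder over AFP at /Volumes/adclabs (the mount point and the sudo calls are illustrative; I didn’t actually run this against the lab machine):

sudo mkdir -p /Users/adclabs                              # recreate the same parent path locally
sudo ln -s /Volumes/adclabs/build /Users/adclabs/build    # map the build folder onto the mounted share

Both Macs would then see the build products at /Users/adclabs/build. Over a slow long-distance link, though, building straight onto a network share is painful, hence the mirrored folders.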

At the remote machine, type

mkdir ~/build

which will create an empty build folder in the Home folder. Log out and close Terminal.

Now, on your local machine, you need to prep Xcode for what you’ll do. Double-click on your main project group and go to the “General” tab. Click “Place Build Products In: Custom location” and type in “/Users/adclabs/build” as the location. (Supposing, of course, that you don’t have a user called “adclabs”…)

Also check “Place Intermediate Build Files In: Default intermediates location”, which probably will already be checked. Now click on your target and, from the Project menu, select “New Run Script Build Phase”. Make sure the new build phase is the last one, and enter this line as the script:

rsync -rz /Users/adclabs/build adclabs@10.1.1.1:/Users/adclabs
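Two notes if you adapt that line: leave the source path without a trailing slash (with a slash, rsync copies the folder’s contents directly into /Users/adclabs instead of recreating the build folder there), and you may want to add --delete so that products you remove locally also disappear from the remote mirror. Something like this, a variation I haven’t timed against the lab machine:

rsync -rz --delete /Users/adclabs/build adclabs@10.1.1.1:/Users/adclabs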

Finally, double-click on your executable in Xcode, and in the “Debugging” tab, select “Use Pipe for standard input/output”, check “Debug executable remotely via SSH”, and in the “Connect to:” field, type

adclabs@10.1.1.1

Now you’re ready. You’ll notice a delay of a few minutes while the last build phase transfers the files over, and on the start of a debugging run there’ll be several errors logged to the debug console, but eventually you’ll be debugging and single-stepping as usual, albeit more slowly. For GUI debugging, of course, you’ll have to use Apple Remote Desktop; I wish Apple would include a 1-user license for this in the Select package, as it’s rather expensive…

Have fun! I’ve tried to double-check most of this as I typed it in, please tell me if something didn’t work.

Update: fixed a typing error.

hmafra wrote:

He did it again! One of his takes now is the kernel thing. Speed, he says.

What he writes makes some sense, like the part on the cross-licensing agreement. I still don’t buy it, though.

I was about to comment on that.

I checked with some friends who know more about the kernel, and they say he’s completely wrong. In fact, there are two myths at work here. The first one says that Mac OS X uses a Mach microkernel, which is wrong. XNU, which is the Mac OS X kernel, is effectively monolithic as the whole BSD stuff runs right alongside the Mach stuff in the same context. The Mach code takes care of memory allocation and thread scheduling, the BSD code does most of the rest. None of the switching that would make a pure microkernel inefficient. Granted that there are some kernel functions which are slower than the equivalent calls in, say, Linux; but this just means that Mac OS X isn’t currently suited to huge server farms, and that Apple can tinker with this if necessary without switching kernels at all. In fact, they’re probably already doing this with the Intel versions of Leopard.

The second myth is that only Avie Tevanian was holding Mach in place by sheer orneriness, and that now that he’s gone, everybody will heave a sigh of relief, throw Darwin out, and shoehorn Linux (or even XP) into its place. That too is completely wrong. Bertrand Serlet has been in charge of the OS policy for at least two years now. And consider that XNU, because of the Mach component, is well-suited to scale to a larger number of processors. And consider that Intel is coming out with new chips that supposedly will scale well to 4, 8 or even more cores…

The idea of Leopard implementing the Windows API is, at first look, interesting. (Let’s discard the misinformation about “Microsoft saving Apple”, and that the cross-licensing included the Windows API.)

After all, Mac OS X already has several APIs for writing applications. BSD with X11, Carbon, Cocoa, Java, and so forth. Why not an additional one? Well, it’s possible in theory. In fact, the WINE people are working on such a thing. However, why should Apple make it too easy to move applications into Mac OS X? Such apps would never be first-class citizens, the appearance would be awkward, drag&drop would probably be impossible… no, virtualization is the way to go. Running Windows inside a secondary window would also be a constant reminder of which environment is the native one, which is more in Apple’s interest.

Tempora Mutantur


Yes, the times sure are changing. Today I even found myself largely agreeing with a Paul Thurrott article:

Since the euphoria of PDC 2003, Microsoft’s handling of Windows Vista has been abysmal. Promises have been made and dismissed, again and again. Features have come and gone. Heck, the entire project was literally restarted from scratch after it became obvious that the initial code base was a teetering, technological house of cards. Windows Vista, in other words, has been an utter disaster. And it’s not even out yet.

Doesn’t that sound a lot like the ill-fated Copland project?

Sadly, Gates, too, is part of the Bad Microsoft, a vestige of the past who should have had the class to either formally step down from the company or at least play just an honorary role, not step up his involvement and get his hands dirty with the next Windows version. If blame is to be assessed, we must start with Gates. He has guided – or, through lack of leadership, failed to guide – the development of Microsoft’s most prized asset.

Perhaps Microsoft’s most serious mistake, retrospectively, was that Gates and Ballmer were too compatible. Ballmer should have driven Gates out of the company in the 80s, then Gates should have matured elsewhere, only to return triumphantly in the 90s with new, cool technology, in the nick of time to save the company that was going broke after Ballmer in turn had been pushed out… sounds familiar, too? :-)

Now here’s another interesting part:

Here’s what you have to go through to actually delete those files in Windows Vista. First, you get a File Access Denied dialog (Figure) explaining that you don’t, in fact, have permission to delete a … shortcut?? To an application you just installed??? Seriously?

…What if you’re doing something a bit more complicated? Well, lucky you, the dialogs stack right up, one after the other, in a seemingly never-ending display of stupidity. Indeed, sometimes you’ll find yourself unable to do certain things for no good reason, and you click Allow buttons until you’re blue in the face. It will never stop bothering you, unless you agree to stop your silliness and leave that file on the desktop where it belongs. Mark my words, this will happen to you. And you will hate it.

This is exactly what happened to me when I, a few months ago, had to install Windows XP for my wife’s business (to run a proprietary vertical app, if you must know). I tried to set up an admin account for myself and a normal user account for the receptionist. This being the first time I’d ever seen XP, I did them in the wrong order… and then tried to organize the desktop and taskbars. In the end I had to wipe and reinstall everything. It seems Vista won’t be any better, sadly.

Thurrott goes on to complain about glass windows and the Media Center UI, which I can’t comment on myself. But, here’s a thought:

    One of the “stealth” features of Apple products is that more and more people are being subconsciously educated as to what constitutes good design.

We certainly aren’t that used to columnists criticizing details of the Windows UI; specialists like Don Norman, sure, but not mainstream columnists. Personally, I’d about given up commenting on bad UI to Windows users… they either just emit a blank “huh?” or say somewhat ruefully “well, that’s what computers are like, you know”. Not that the Mac UI is itself perfect – it’s still a work in progress – but at least we developers, and many people inside Apple, deeply care about producing good UI. (Here’s one example among hundreds.) If that attitude is now leaking out to the general public, so much the better.

Thanks to John C. Randolph for pointing out that article.

Rainer Brockerhoff wrote:

…Indeed, the usual hardware developer notes which used to come out a few weeks after a new PowerPC Mac was released are still absent for all Intel Macs – not that these notes ever went into chip-level detail of any Apple hardware.

The developer note for the Intel iMacs just came out (along with a few others). There are some interesting tidbits – for instance, EFI Flash memory size is 2 megabytes – but no mention of the TPM chip, as expected. Also, all the recent notes are very terse and go into less detail, even for PowerPC machines…

Rainer Brockerhoff wrote:

…But the main advantage is that the OSes for the virtual machines can be simplified. All the tricky little kexts and drivers you see on current PowerPC Macs will be substituted by one or two “generic” versions which will interface to the virtual peripherals simulated by the hypervisor, and the actual machine’s peripheral drivers will be in EFI or on the cards themselves.

One variation of this idea would be for Apple to sit down with game developers and define a basic Intel VM; just enough to see a drive partition, input devices and the video card. This would make porting very easy, as most games try to take over as much of the machine as possible anyway, and you’d have optimum performance.

Well, people sure have short memories.

I’ve commented several times before (for instance, here [8 months ago!], here, here, or here) that Apple’s Intel Macs contain an Infineon TPM chip. This from the very first developer transition kit, up to the latest released machine.

See my first link for details, and John Gruber‘s excellent analysis.

Today I was surprised to find several indignant articles pointing out that:

It looks like Intel has embedded “Trusted Computing” DRM protection in its Infineon chip and forgot to tell people.

and

…nobody wants to admit that the Intel Macs currently on sale have a TPM chip.

This is not only old news, but has been extensively photographed and discussed. It’s well known that Apple uses the TPM chip to an increasing degree in Mac OS X 10.4.x, to prevent people from installing it on generic PCs, and it’s certain that Mac OS X 10.5 will also do so.

Does Apple come right out and say so? Admittedly not. Indeed, the usual hardware developer notes which used to come out a few weeks after a new PowerPC Mac was released are still absent for all Intel Macs – not that these notes ever went into chip-level detail of any Apple hardware. At the same time, Apple withdrew publication of a few kernel source files for Darwin, the open-source base for Mac OS X. Both facts demonstrate that Apple’s security locks are still in flux and may change extensively in the near future. Will all these things be documented in the future? Hard to say. If the TPM chip’s encryption is sufficiently strong, they could be documented without defeating Apple’s purpose; but keeping details hidden always helps.

Is this evil? Well, depends on your definition of course. As Gruber points out, people who are incensed about this should also boycott Linux for its support of several TPM chips, including Infineon’s. Certainly, Apple has a right to enforce its current license terms which state that Mac OS X should run only on Apple hardware.

But what else will the chip be used for in the future? As I’ve repeatedly written here before, using it for DRM protection of media – which is what most of the critics claim to fear – isn’t likely. Mostly because, if you do the math, Intel Macs will be a minority for years and any such protected media would either not work at all or be open on PowerPC Macs, of which there are several tens of millions still in operation.

What’s far more likely – and we’ll know for sure in August – is that the TPM chip will be used to boot a trusted hypervisor at the EFI level. Apple has even patented a scheme to run tamper-resistant code and more than one OS at once. From the wording it’s obvious that the TPM chip is used for that:

In one embodiment the system comprises a processor and a memory unit coupled with the processor. In the system, the memory unit includes a translator unit to translate at runtime blocks of a first object code program into a blocks of a second object code program, wherein the blocks of the second object code program are to be obfuscated as a result of the translation, and wherein the blocks of the second object code program include system calls. The memory unit also includes a runtime support unit to provide service for some of the system calls, wherein the runtime support unit is to deny service for others of the system calls, and wherein service is denied based on a tamper resistance policy.

So, what I think likely is that the machine will boot into the trusted hypervisor. This will be encrypted into firmware and decrypted, and checked against tampering, by the TPM chip. Once this is running it will show a screen like the Boot Camp boot selector, with one important difference: you’ll be able to select more than one OS to boot up. All of them, including Mac OS X itself, will run inside a virtual machine.

What’s the advantage? Of course all OSes will run at near-native speeds if nothing else is running at the same time – the hypervisor’s overhead will be negligible. In fact, this scheme has been used and refined on mainframes for decades, where it is assisted by hardware; now that Intel’s Core processors have hardware virtualization support, it should be easy to do likewise.

But the main advantage is that the OSes for the virtual machines can be simplified. All the tricky little kexts and drivers you see on current PowerPC Macs will be substituted by one or two “generic” versions which will interface to the virtual peripherals simulated by the hypervisor, and the actual machine’s peripheral drivers will be in EFI or on the cards themselves. This reduces disk and RAM usage at the expense of performance, although this shouldn’t be a problem except for games – but then, as I said below, hardcore gamers will prefer to boot directly into “the most popular game loader” anyway.

Another extremely desirable gain for Apple will be that they’ll only have a version of Intel Mac OS X that runs on this trusted virtual machine. To get this running on a generic PC, people would have to reimplement the entire Apple hypervisor too, write drivers etc., and even this would be easily defeatable by the TPM chip. Still, it’s a major architectural change and for that reason we’ll only see this in Leopard.

What boots it?


OK, people have asked me to comment on Boot Camp Public Beta.

If you’ve been away for the last few weeks, the $13K+ prize to make Windows XP boot on an Intel Mac has been won by two puzzle addicts. Granted that their solution is complex to implement and runs slowly due to the lack of proper video drivers (and others), but it’s still impressive. My Intel Mac mini hasn’t arrived yet, so I can’t speak from firsthand experience, but it seems it overlays just enough legacy BIOS responses on the Mac’s EFI to interact with a complementarily modified Windows XP.

Well, Wil Shipley and others donated money to that effort, and this seems to have convinced Apple, about a week later, to make “Boot Camp” public. It consists of three parts: a firmware upgrade that puts the (optional) legacy BIOS support module into the firmware, a small utility that allows nondestructive repartitioning of an Intel Mac’s hard drive, and a CD containing XP drivers for most (though not all) Intel Mac peripherals. It’s a beta, and some things don’t work yet, but it’s much smoother than the hacked-together version. In effect, the Intel Macs can now be dual-booted with Windows XP; also, people report progress in booting some Linux variants, and Vista support may not be impossible anymore. Ah yes, Apple has also stated that something like this will be a part of Leopard aka Mac OS X 10.5, which will be demoed at the upcoming WWDC and may be out around the end of the year. And AAPL stock shot up nearly 10% over the next two days…

So much for the facts. Interpretations are diverse; in fact, I haven’t seen so many divergent comments since Intel Macs were announced last June.

As usual, after a couple of days, Gruber, Siracusa and a few others posted excellent analyses of the situation. However, much of the immediate commentary was – let’s charitably say – weird. Immediate doom has been predicted for Apple first and foremost, as well as for Microsoft, for Dell, and for software developers. Let’s look at that last idea first.

Most non-developers are saying that, obviously, Mac developers will now fold up and die, or migrate to become Windows developers in droves, or (if they support both platforms) discontinue Mac versions of their products. After all, all Mac support questions can now be answered by “boot into XP”. And Windows is where the money is, right?

Wrong. Let’s check each type of developer separately. There are the two big ones: Microsoft and Adobe. Microsoft obviously won’t close the Macintosh Business Unit (MBU); I hear it’s their top division in terms of income per employee. Obviously, most Mac users want Mac versions of their applications, even if they have to be from Microsoft. The same goes for Adobe products; most of them were, originally, ported from the Mac to Windows anyway. And even if Adobe is having a hard time porting their stuff from CodeWarrior to Xcode, eventually they’ll do so.

At the other end of the spectrum are small developers like myself, up to small 3- or 5-person shops. Very few of those are multiplatform. I can safely say that an overwhelming percentage are Mac-only because developing on the Mac, for the Mac, is enjoyable and lucrative. Read Wil Shipley’s interview and his WWDC Student Talk and see what I mean. Here’s a pertinent part:

I love the Mac user base because they tend to be people who are into trying out new software and recommending it to each other and giving the little guy a chance. Windows users have demonstrated, ipso facto, that they do not believe in the little guy.

The two types of Windows users I’ve identified at my café are:

a) I use Windows to run Word and Excel and browse the web (and read e-mail in my web browser), and

b) I’m a programmer and I spend all my time in a Windows IDE or hacking around with my system.

The problem is that market (a) already has all the software they think they’ll ever need, and clearly isn’t into looking beyond what they already have or they’d have noticed they could do all that they currently do, and more, but much easier, on a Mac. And market (b) is too small for me to aim any software at it.

No doubt most non-developers (and Windows developers like (b) above) believe that developers mostly hate their jobs and just do whatever distasteful thing is necessary to maximize their income. Well, it’s not really that way; granted that many of us have to work to pay for the groceries, and Mac-related jobs are not really plentiful (yet!), but many .NET slaves spend extra hours at their home Macs to write really cool software.

In other words, we write for the Mac because it’s satisfying and would do it even for free, all day, every day (assuming the grocery problem to be solved somehow). Would I migrate XRay to Windows? No way. The tools aren’t there, the APIs are uncool, and the Windows community – well, as far as I can tell, there’s no Windows community at all. And regarding the market size, better a small fish in a small pond, and all that.

So what about the middle-sized software companies? Here the situation may not be as clearcut. It depends a lot on company culture, I suppose. Are the people in charge active Mac users who also target Windows just because, well, they might sell a lot of copies over there? Or are they primarily Windows developers who also have a Mac version championed by a couple of vocal believers among their programmers? It could be either way, and only time will tell. But should some of the latter type close out their Mac support, they might have done it anyway sooner or later.

Now, game developers are a special case. Discounting for the moment some diehard Mac-only game developers, reactions among the multiplatform game developers have been very cautious. After all, a game user is the person most likely to dual-boot into Windows just to run the very latest game at full speed – though such a fanatic is still more likely to have a dedicated, souped-up PC just for that purpose. So, widespread availability of Boot Camp might, really, lead some game companies to neglect Mac versions, purely for economic reasons.

Update: Ouch, I forgot to put in John C. Randolph’s comment on this:

Apple now lets you use the most popular game loader!

…and he’s sooo right! :-D

Stay tuned for more comments on this…

Now and then I read complaints about Xcode on blogs and mailing lists. It’s come a long way but some parts are still slow and cumbersome, granted. One of the complaints – which usually comes from Java or Windows C++ migrants – is that Xcode has no refactoring aids. Some people even publish workarounds.

So what is this refactoring thing anyway? According to Wikipedia:

Refactoring is the process of rewriting a computer program or other material to improve its structure or readability, while explicitly keeping its meaning or behavior…

Refactoring does not fix bugs or add new functionality. Rather it is designed to improve the understandability of the code or change its structure and design, and remove dead code, to make it easier for human maintenance in the future. In particular, adding new behavior to a program might be difficult with the program’s given structure, so a developer might refactor it first to make it easy, and then add the new behavior.

I’d tend to agree with that, up to a point. I usually refactor when I reach a dead end in the software’s structure, that is, when the current structure won’t allow me to proceed implementing what I want to implement. Or – probably the same thing, essentially – when I find myself implementing things I don’t want to implement anymore.

But my tendency (see fractal programming) is to do it in the reverse order; I write some code that does new stuff in a new way. Then I migrate lots of old code into the new scheme, often rewriting it radically if necessary, or throwing entire blocks away. (Well, not literally at first; I prefer to comment such blocks out or move them into a “dead code” file for later reference.)

Now, the aforementioned migrants usually don’t see it that way. Rather, they want some automation to make the process easier:

An automated tool such as a SCID might work like this:

– I have a method which has some code that I would like to pull out into its own method.

– I highlight the offending code.

– I select Extract Method from a popup menu

– The RefactoringBrowser asks me to name the method and automatically creates it and inserts the highlighted code.

– In the current method, the highlighted code is replaced by an invocation of the newly created method.

All very nice, but it presumes several things which I don’t see coming to Xcode (at least not to the Objective-C parts):

– You have a very regular, structured style of coding that conforms to standards the “RefactoringBrowser” understands.

– You always use the standard refactoring methods, such as expanding, collapsing, pulling out, pushing in, whatever.

– All source code in your project has been previously parsed and stored in the SCID (source code in database), so the browser and refactoring software have a perfect understanding of your code.

This is perfectly possible (or at least I’m told it is) in Java and perhaps in C++ – though I’m skeptical about the latter. I was astounded when a friend, who was qualifying for some Java certificate or other, asked me to have a look at his source code. A quite trivial program had been expanded into several dozen source files, consisting of literally hundreds of small methods that differed from each other only by name and a few characters or lines. No doubt everything was set up very logically and hierarchically and according to whatever standards a certified Java programmer must obey, but… it was completely illegible by my (admittedly eccentric) standards. It was code only a SCID could love.

So, suppose my friend decided to refactor his code. Just renaming a few classes must necessarily entail profound changes in all source and project files. Not only must the filenames themselves be changed, but all mentions of this class must also be changed. Wait, don’t we have global search & replace for that…?
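Indeed; for a plain rename, a couple of lines in the Terminal do the bulk of the work. A sketch with made-up class and folder names (using a temporary file instead of sed -i, to stay portable):

cd ~/Projects/SomeProject                 # hypothetical project folder
for f in *.h *.m; do
  sed 's/MyOldController/MyNewController/g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
mv MyOldController.h MyNewController.h    # rename the files themselves
mv MyOldController.m MyNewController.m
# the class name in the .xcodeproj and in any nibs still has to be fixed by hand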

But, of course, renaming classes or methods is trivial. Maybe it’s suddenly obvious that you need to push some methods off into a subclass, or pull them up into their superclass. Wouldn’t it be nice to have this done automatically?

Well, not really. First off, inheritance is much less used in Objective-C – or at least, I use it less than I used to do in my C++ days (I refuse to learn Java). Runtime binding, introspection and categories mean you usually don’t have to subclass more than one or two deep from the standard Cocoa classes. In fact, I believe I just today went to the third subclass level for the first time. So, automating such a superfluous process makes little sense.

Second, remember that Objective-C is just a small superset of C (unlike Java and C++, which are C-like languages). And that Mac OS X is Unix-based, with many headers pulled in from heterogeneous sources. This means, of course, that all the old crufty things of the old C days are still there… pointers, pointer casting, weird #defines, other tricks – you name it. And all of this is liable to be #included into your poor unsuspecting code if you do any work at all with Carbon or BSD APIs, as most non-trivial applications need to.

In other words, I don’t believe it’s possible to feed all of this into a SCID and expect it to behave rationally; of course the gcc compiler has to make sense of all this eventually, but I seriously doubt it could be easily refactored to go back and forth from its internal representation to the source code while it’s being edited. It’s still, essentially, a batch processor.

Suppose someone pulled this transformation off. Suppose all the old C headers were tamed to make them compatible. Suppose we had everything in a hierarchical, intelligent, refactoring browser/editor. Now what?

It may be some congenital deficiency in my own neural wiring, but I can’t recall ever refactoring my code twice the same way (except for that trivial class/method renaming). So again, not much for an automated “RefactoringBrowser” to do.

Well. All this to, finally, say that I’ve been stuck refactoring some of my code – specifically, the XRay II file system browser back-end… and of course, no automation would have helped me. Nor would I have trusted it to do anything like what I want.

This is perhaps the third or fourth version of it, and it’s easily the most complex refactoring I’ve ever done. Unfortunately there are no intermediate steps. 20 days ago everything was compiling and running nicely (except, of course, for the problems that led me to this refactoring attempt). Then suddenly it’s like open-heart surgery. Nothing compiles and the number of error messages is so great that gcc throws its metaphorical hands up and goes off to sulk in a corner. I can’t close the patient up again until everything has been put back into place – even if it’s not the same place. And it’s a lot of information to hold in one’s head at the same time. I suppose I must get a second monitor, but that’s not practical at this moment.

And the availability of powerful time-sinks like the Internet means that it’s almost impossible to summon the necessary concentration to do the surgery in a single run. I’ve made serious progress over this weekend by the simple expedient of staying overnight with some friends who don’t have an Internet connection (or, even, a phone line). Still, sometimes it’s necessary to read and write e-mails, chat, even write long posts about refactoring… ;-)

Even so, I hope to get past this obstacle during the next few days and write a little about actual results. Turns out I learned a lot in the process. More as soon as possible, then.
