Apple2

A portable Apple //e emulator


NIB Format No Longer Supported

Why did Apple2 drop support for NIB files, when they were arguably superior to disk images in DSK format? It’s true that the NIB format was superior to the DSK format, but that was before we had the WOZ format. So what? What makes the WOZ format better? Answering that seemingly innocuous question requires taking a look at the various things that make up proper disk emulation, not the least of which are the various emulator disk file containers.

The DSK format is a byte-for-byte image of a 16-sector Apple II floppy disk: 35 tracks of 16 sectors of 256 bytes each, making 143,360 bytes in total. The PO format is exactly the same size as DSK and is also organized as 35 sequential tracks, but the sectors within each track are in a different sequence. The NIB format is a nybblized format: a more direct representation of the disk’s data as encoded by the Apple II floppy drive hardware. NIB contains 35 tracks of 6656 bytes each, for a total size of 232,960 bytes. Although this format is much larger, it is also more versatile and can represent the older 13-sector disks, some copy-protected disks, and other unusual encodings.
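Those sizes fall straight out of the geometry described above; here is a quick sanity check (the constant names are mine, not from any emulator source):

```python
# Image sizes implied by each container format.
DSK_SIZE = 35 * 16 * 256   # 35 tracks x 16 sectors x 256 bytes (PO is the same size)
NIB_SIZE = 35 * 6656       # 35 tracks x 6,656 nybblized bytes per track

print(DSK_SIZE, NIB_SIZE)  # 143360 232960
```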

However, even though the NIB format is closer to what was actually stored on a floppy disk, it has serious shortcomings. The biggest of these is the lack of so-called “extra” zero bits (also sometimes called “timing” bits). These timing bits are used by the floppy disk controller to synchronize its reading of the bitstream on the disk. Without them you could never be sure exactly what you were reading: since the disk spins independently of the Apple’s CPU, where in the bitstream a read begins is effectively random.

Since that was clear as mud, here’s an example. Say you have a bitstream on the disk that looks something like this:

10110010010111101011001

When you start reading from the disk, the bytes you get can look very different depending on where you caught the bitstream. For example, say you caught the bitstream on the first bit. The bytes you would end up with would look like this (periods represent trailing zero bits):

101100100 101111010 11001 --> B2. BD. C8

However, if you caught the bitstream on the third bit, you would end up with a different interpretation:

[10] 110010010 11110101 1001 --> C9. F5 9x
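Both readings can be reproduced with a short sketch of the controller’s behaviour. This is a deliberate simplification of the real hardware (skip zero bits until a one arrives, then shift in eight bits; a trailing partial byte, like the C8 and 9x above, is dropped), and the function name is mine:

```python
def read_bytes(bits, start):
    """Simplified model of the Disk II read logic: skip zero bits
    until a one arrives, then shift in eight bits to form a byte.
    A trailing partial byte is dropped."""
    out, i = [], start
    while i < len(bits):
        if bits[i] == "0":            # extra zero bits are invisible to the CPU
            i += 1
        elif i + 8 <= len(bits):
            out.append(int(bits[i:i + 8], 2))
            i += 8
        else:                         # not enough bits left for a full byte
            break
    return out

stream = "10110010010111101011001"
print([f"{b:02X}" for b in read_bytes(stream, 0)])  # ['B2', 'BD']
print([f"{b:02X}" for b in read_bytes(stream, 2)])  # ['C9', 'F5']
```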

At this point, the reader is heard to say, “So what? Why should anyone care about those zero bits?” 

The short answer is that without them, you could never be sure that what you were reading was what was intended to be read. Basically, the disk drive mechanism needs a way to let the bitstream “slip” in a controlled way, and the timing bits are how the drive does it.

The drive mechanism streams data to the CPU eight bits at a time, and it only begins assembling a byte once it reads a “one” bit. As a consequence, any extraneous zero bits in front of the next eight-bit chunk are skipped until another one bit comes along. This gives us a very quick way to synchronize data on the disk: a run of ten-bit groups in which the first eight bits are ones and the last two are zeroes. If the run is long enough, it automatically pulls the data that follows it into sync, and reliable reads become possible.

And since that was also as clear as mud, here’s another example. Here is a bitstream composed of five ten-bit sequences as described above:

11111111001111111100111111110011111111001111111100

Let’s say when reading this sequence, we caught the sixth bit. We would end up seeing this:

[11111] 11100111 11111001 111111100 1111111100 1111111100 --> E7 F9 FE. FF.. FF..

As you can see, even though we missed badly by starting at bit six, the stream pulled us back into alignment almost immediately: the first two bytes are garbage, the third (FE) still straddles a group boundary, but from the fourth byte on we read clean FFs, exactly as they were written. Such is the importance of the timing bits.
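The same simplified reader sketched earlier (skip zero bits, then shift in eight; a simplification of the real hardware, with a function name of my choosing) reproduces this recovery from any starting offset:

```python
def read_bytes(bits, start):
    """Skip zero bits until a one arrives, then shift in eight bits
    to form a byte; a trailing partial byte is dropped."""
    out, i = [], start
    while i < len(bits):
        if bits[i] == "0":
            i += 1
        elif i + 8 <= len(bits):
            out.append(int(bits[i:i + 8], 2))
            i += 8
        else:
            break
    return out

# Five 10-bit self-sync bytes: eight ones followed by two zeroes.
sync = "1111111100" * 5
print([f"{b:02X}" for b in read_bytes(sync, 5)])  # ['E7', 'F9', 'FE', 'FF', 'FF']
```

Whichever of the ten possible offsets you start at, the last bytes read always come out as FF: the two zero bits at the end of each group soak up the misalignment.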

But that still doesn’t answer the question of why dropping NIB support is now necessary. The short answer is that the WOZ format can represent everything the NIB format could, and much more: WOZ is a bitstream-based format, while NIB is a byte-based one, and by virtue of this the two formats are hopelessly incompatible.

Why are they incompatible? Because the bitstream-based format (WOZ) requires the emulation of the floppy disk controller’s Logic State Sequencer (or LSS for short), and the LSS requires timing bits to properly decode the bitstream. Since the byte-based format (NIB) lacks these, the LSS emulation can and will misinterpret the data from these kinds of images.
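That misinterpretation can be demonstrated with the same simplified reader model (skip zero bits, then shift in eight; an illustration, not the actual LSS). Here, the same run of sync bytes is followed by D5 AA 96, the standard 16-sector address-field prologue. With the timing bits present the reader resynchronizes and recovers the prologue; with NIB-style 8-bit sync bytes it never regains alignment and reads garbage:

```python
def read_bytes(bits, start):
    """Skip zero bits until a one arrives, then shift in eight bits
    to form a byte; a trailing partial byte is dropped."""
    out, i = [], start
    while i < len(bits):
        if bits[i] == "0":
            i += 1
        elif i + 8 <= len(bits):
            out.append(int(bits[i:i + 8], 2))
            i += 8
        else:
            break
    return out

def to_bits(data):
    return "".join(f"{b:08b}" for b in data)

prologue = to_bits([0xD5, 0xAA, 0x96])    # 16-sector address-field prologue
woz_like = "1111111100" * 5 + prologue    # sync bytes WITH their timing bits
nib_like = "11111111" * 5 + prologue      # NIB-style: same bytes, timing bits lost

print([f"{b:02X}" for b in read_bytes(woz_like, 5)])
# ['E7', 'F9', 'FE', 'FF', 'FF', 'D5', 'AA', '96']  -- resyncs; prologue intact
print([f"{b:02X}" for b in read_bytes(nib_like, 5)])
# ['FF', 'FF', 'FF', 'FF', 'FA', 'B5', 'A5']        -- never resyncs; garbage
```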

So the ironic consequence of this is that the NIB format can no longer be properly supported. The irony comes from the fact that before there was a need for LSS emulation, NIB was the most accurate format available for representing the low-level contents of a disk; now, with proper LSS emulation, it’s the worst format for representing a floppy disk. The main reason is that NIB doesn’t contain timing bits and has no mechanism to represent them, so feeding NIB images to the new LSS emulation fails horribly for the aforementioned reasons. And since there is now a format that properly represents the bitstream on a floppy disk (WOZ), there’s no reason to keep NIB around or support it anymore. While it was a nice interim format to have around (when the emulation of the disk was “imperfectly perfect”), it no longer has a place in disk preservation or emulation.