Oldskooler Ramblings

the unlikely child born of the home computer wars

Archive for the ‘Digital Video’ Category

An informal comparison of intermediate editing codecs for Windows platforms

Posted by Trixter on April 2, 2017

Last year I volunteered to record an entire weekend’s worth of vintage computer talks.  Unsurprisingly, this also involved editing an entire weekend’s worth of vintage computer talks.  All of the footage was 1080p60, and was shot or delivered compressed (a mixture of AVCHD and H.264 MP4).  While this saves space, it is not always fluid to edit with, as compressed codecs arrange frames into groups that are highly dependent on each other.  The end result is that seeking around and cutting highly-compressed footage can feel sluggish even on extremely powerful systems.  My system was beefy for 2012, and can surprisingly still hold its own in 2017:  A 6-core/12-thread Core i7-980X with 24GB of RAM, and 4TB of spinning-disk storage made up of 3x2TB disks in a RAID-5 array (capable of delivering up to 700MB/s sustained read speeds at the outer edges of the platters) with a 4th disk as a hot spare.  Despite the high specs, editing 1080p60 H.264 footage last year was sluggish enough that I considered using an intermediate codec.

A quick primer on intermediate codecs

Intermediate codecs are used when you want to transcode your footage to something that is much easier for the computer to decompress, making it much faster to edit with.  There are two basic types, lossless and lossy:

Lossless codecs exactly represent each original pixel in the source, and can be used interchangeably with the original footage through any number of processing or compression passes.  (Think of them as running the footage through PKZIP or 7-Zip, but with lighter-weight algorithms tuned for decompression speed.)

Lossy codecs throw some information away, but do so in a manner that looks the same to human eyes.  They don’t match up with the original source pixel for pixel, but most people wouldn’t be able to tell the difference.  The resulting file sizes are much smaller than those of lossless codecs, but because some information is thrown away during the compression phase, you have to be careful running such footage through multiple processing or compression generations, as the image could degrade unacceptably.  (For the old-timers: This is like what happens when you make a copy of a copy of a copy of a videotape: The end result is noisy, washed out, and barely watchable.)

Compressing into an intermediate codec takes time, so it is usually done during the ingestion process (such as with DaVinci Resolve or Adobe Prelude), or in a batch process overnight, or both.
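As a concrete sketch of that overnight batch step — my own illustration, not a tool mentioned in the post — here is what a batch transcode into a lossless 4:2:2 intermediate might look like, assuming an ffmpeg build with the UT Video encoder is on the PATH:

```python
import subprocess
from pathlib import Path

def utvideo_cmd(src, dst):
    # Build an ffmpeg command that transcodes src to a UT Video (lossless)
    # intermediate in 8-bit YUV 4:2:2, passing the audio through untouched.
    return ["ffmpeg", "-i", str(src),
            "-c:v", "utvideo", "-pix_fmt", "yuv422p",
            "-c:a", "copy", str(dst)]

def batch(folder):
    # Overnight batch: one .avi intermediate per compressed source file.
    # (Adjust the glob for AVCHD .MTS sources as needed.)
    for src in Path(folder).glob("*.mp4"):
        subprocess.run(utvideo_cmd(src, src.with_suffix(".avi")), check=True)
```

The helper names here are hypothetical; the point is only that the expensive encode happens once, before editing starts.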

Our test case

I decided at the time to pick a 10-minute piece of 1080p60 footage and transcode it to every intermediate codec that met the following criteria:

  1. Worked in Windows without Quicktime (Quicktime for Windows was discontinued in 2016 due to security flaws).  This eliminated ProRes and Avid’s DNxHD.
  2. Over 5 years old, compatible with Windows 7 or above, and relatively battle-tested
  3. Evidence of recommendation online by video content producers (i.e., something more professional than, say, people doing anime music videos)
  4. Worked in Premiere Pro without crashing or odd behavior
  5. Didn’t demand more than 200MB/s out of the I/O subsystem to maintain 60fps playback (this was more to keep filesizes down than anything else)
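For scale, the bandwidth ceiling in criterion 5 is easy to sanity-check: 8-bit YUV 4:2:2 averages two bytes per pixel, so uncompressed 1080p60 already blows past 200MB/s.  A quick back-of-the-envelope check (my own arithmetic, not from the original tests):

```python
# Uncompressed 8-bit YUV 4:2:2 averages 2 bytes per pixel.
frame_bytes = 1920 * 1080 * 2       # bytes per 1080p frame
rate = frame_bytes * 60             # bytes per second at 60 fps

print(frame_bytes)                  # → 4147200
print(round(rate / 1e6, 1))         # → 248.8 (MB/s of sustained reads required)
```

This is why uncompressed 4:2:2 could only ever be a reference point in this comparison, not a practical choice.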

Additionally, since my source footage was not high enough quality to require HDR-level processing, I was realistic and limited the field to codecs that offered a YUV 4:2:2 configuration (and eliminated codecs that could only do 4:2:0).  The codecs that made the cut are the ones listed in the results table below.

To provide some interesting comparisons, I also included the following:

  • YUV 4:2:2 8-bit uncompressed, to see what was possible given a fast I/O subsystem
  • Cinepak (1991), possibly the fastest useful single-core decompression codec ever made
  • HuffYUV (2000), the first popular free lossless codec for Windows

After my 10-minute 1080p60 sample was compressed into these codecs, I defragmented my RAID array using MyDefrag with a script that placed all of the codec comparison files in the fastest area of the array (the beginning).  I then measured decompression speed using VirtualDub’s File->Run Video Analysis Pass feature, and monitored I/O and CPU usage with Windows 7’s Resource Monitor.
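A similar decode-only timing can be approximated without VirtualDub.  This sketch is my own addition (it assumes an ffmpeg binary is available, which the original tests did not use): decoding to the null muxer throws the frames away, so elapsed time is dominated by disk reads plus decompression:

```python
import subprocess
import time

def null_decode_cmd(path):
    # A decode-only ffmpeg pass: "-f null -" discards decoded frames.
    return ["ffmpeg", "-v", "error", "-i", path, "-f", "null", "-"]

def measure_fps(path, frame_count):
    # Wall-clock the decode pass and derive frames per second.
    start = time.perf_counter()
    subprocess.run(null_decode_cmd(path), check=True)
    return frame_count / (time.perf_counter() - start)

# A 10-minute 1080p60 clip holds 600 * 60 = 36,000 frames; any codec that
# measures below 60 fps here cannot sustain real-time playback.
```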

Before reading on, I want to stress that the above criteria were the only factors considered.  There are many other things to consider if you have different production targets or workflow needs, such as 10-bit or 12-bit color depth, or iterative recompression stability.  These vary from codec to codec and are not discussed here.

The results

Here are the raw results (you might need a non-mobile browser to see this table correctly):

| Codec | Size in bytes | % of uncompressed | Data loss | Visual quality loss | Decompression FPS | MB read/s | Decompression %CPU (6-core 12-thread i7-980X) |
|---|---|---|---|---|---|---|---|
| UTCodec 4:2:2 8-bit | 35,859,227,004 | 24% | lossless | 1 | 164 | 367 | 39 |
| Cinepak | 8,951,457,910 | 6% | lossy | 3 | 129 | 57 | 9 |
| Cineform (Medium HD) | 11,872,682,448 | 8% | lossy | 2 | 64 | 51 | 12 |
| Cineform (High HD) | 13,293,520,304 | 9% | lossy | 2 | 62 | 57 | 12 |
| Newtek SpeedHQ 4:2:2 | 9,763,299,366 | 7% | lossy | 2 | 55 | 24 | 8 |
| Blackmagic Design MJPEG | 10,146,513,708 | 7% | lossy | 2 | 38 | 28 | 16 |
| YUV 4:2:2 8-bit uncompressed | 149,270,431,900 | 100% | lossless | 1 | 85 | 704 | 4 |
| Lagarith | 28,375,743,340 | 19% | lossless | 1 | 29 | 41 | 11 |
| Grass Valley HQX Lossless | 35,888,010,830 | 24% | lossless | 1 | 33 | 75 | 9 |
| Grass Valley HQX Offline | 3,988,818,666 | 3% | lossy | 3 | 90 | 19 | 8 |
| Grass Valley HQX Online Standard | 7,854,558,164 | 5% | lossy | 2 | 74 | 38 | 8 |
| Grass Valley HQX Online Fine | 9,833,655,952 | 7% | lossy | 2 | 69 | 45 | 8 |
| Grass Valley HQX Online Superfine | 27,810,744,334 | 19% | lossy | 2 | 48 | 86 | 9 |
| HuffYUV (FFmpeg variant, left, adaptive huffman) | 35,918,114,616 | 24% | lossless | 1 | 44 | 99 | 12 |

From this data, I eliminated codecs that could not play back 1080p60 footage at the bare minimum requirement of 60 frames per second.  I also eliminated Cinepak (which was only included for teh lolz) and Grass Valley HQX Offline, since their visual quality was unacceptable and I did not want to edit with proxies (I was not in the field on a laptop, but was on my actual editing desktop system).  Finally, I eliminated YUV 4:2:2 uncompressed, because the storage requirements of transcoding everything to an uncompressed format were not practical.

From the remaining data, I concluded the following:

Playback performance (best to worst)

  1. UTCodec 4:2:2 8-bit
  2. Grass Valley HQX Online Standard
  3. Grass Valley HQX Online Fine
  4. Cineform (Medium HD)
  5. Cineform (High HD)

File size (best to worst)

  1. Grass Valley HQX Online Standard
  2. Grass Valley HQX Online Fine
  3. Cineform (Medium HD)
  4. Cineform (High HD)
  5. UTCodec 4:2:2 8-bit


For over a decade, I assumed that Cineform was the gold standard of intermediates, and was indeed the codec I used whenever I needed an intermediate workflow.  Recently in an online discussion, a colleague who is a professional cameraman and editor extolled the virtues of Grass Valley HQX.  HQX was not in my original comparison, so I added it and re-ran all of these tests in the same hardware configuration.  I was surprised to see that Grass Valley HQX was a hair better than Cineform in all areas.  I will definitely give it a spin for my next project.

To be absolutely fair, Cineform is (still) no slouch, and has served me very well over the years.  You could do much worse than either Cineform or Grass Valley HQX; either of them should suit your needs for an intermediate codec on Windows.

If you have a multi-core processor and lots of free disk storage, UT Codec remains the fastest lossless codec available on modern hardware.  However, evaluate your project needs before committing to it, as not every project needs 100% lossless compression.

Finally, if you only work with 24p/30p material, just about any of these codecs will serve you well.

Update: I’ve recently found that Cineform performs much faster if you can handle the quality the “Medium HD” preset provides, which can exceed the playback rate of GV HQX.  But they’re both good to use.

Posted in Digital Video | Leave a Comment »

BASIC Memories

Posted by Trixter on March 15, 2017

I contributed a few segments to The 8-Bit Guy’s retrospective on BASIC that covered my thoughts and memories of growing up with BASIC during the home computer revolution.

Making quality video is difficult!  I had to perform several different takes before I was comfortable, and I screwed up the focus, and I needed better lighting.  And I’m fat.  But overall it went ok.  Except for being fat.

I envy those who can do YouTube videos full time, since it seems like a lot of fun.  I’ve collected over a hundred topic ideas for videos I’d like to do someday, but if I’m being realistic with myself, I won’t have the time.

Posted in Digital Video, Entertainment, Programming, Vintage Computing | Leave a Comment »

Sony Xperia Z3v impressions and workarounds

Posted by Trixter on January 10, 2015

The Xperia Z3v is a very odd hybrid of a phone that is being marketed as a flagship for the current generation of smartphones; it was released in October 2014 and is a Verizon exclusive (other carriers have the older Z2 or Z3).  There is a nearly criminal lack of coverage in the media for this phone, so I thought I’d rectify that with my thoughts on the phone after two months of use.  First, some background:

We switched the entire family over from Sprint to Verizon (more expensive, but you get what you pay for) and part of the terms of the switch was that we all get new phones.  As I was a long-time Samsung customer (Epic 4G, Galaxy S4) I was planning on going with the S5, but wanted a few things the S5 couldn’t give me, like stereo front-facing speakers.  After reviewing everything Verizon offered that met my requirements, I decided to try the Xperia Z3v under the agreement that I could return it after 14 days if I wanted to switch to another phone.  Because the phone is best-in-class in a few areas, I’ve decided to keep it, accepting that a few aspects of the phone need workarounds.

The Z3v is a combination of the Z2 (larger, thicker body; slower CPU) and the Z3 (camera, screen).  It’s a frankenphone that only Verizon offers.  Let’s start by describing the basic features of the phone that drew me to it:

  • 20 megapixel camera sensor
  • 1080@60p and 4K@30 video recording
  • IP65/68 rating (dustproof, waterproof up to 5 feet for up to 30 minutes)
  • Front-facing stereo speakers
  • Dedicated physical camera shutter button
  • Wireless charging

(It has more features than these obviously, like remote PS4 playing, but these are the only ones that interested me.)  Sounds awesome right?  Well, it mostly is.  Based on my experience, here’s what “mostly awesome” means:


The camera.  As a point’n’shoot, the Z3v is one of the best I’ve ever had.  The 20MP sensor, coupled with firmware borrowed from the Sony CyberShot line of cameras, means that it shoots great automatic pictures.  In default “auto” mode, which is what you get if you press the camera shutter button to wake up the phone and go straight to the camera app, it uses the 20MP sensor to oversample the scene and provide both free HDR shots and stabilization.  It is smart enough to turn off stabilization if it notices the camera is on a tripod, and tells you it is doing so with a small status indicator.  Actually, it’s smart enough to do all sorts of things that I won’t bother mentioning here — just know that the Z3v is good enough that I don’t carry a dedicated camera any more.  Is it a replacement for a DSLR?  No, of course not.  But it is definitely a replacement for a sub-$300 point’n’shoot.  The shutter button even performs a press-half-to-focus-lock, press-full-to-shoot function.

4k video.  Being able to shoot this is not terribly practical, but it does work, and you do see some additional fine detail that you don’t see in 1080p mode.  4K mode is useful if you can’t decide on the framing of a 1080p shot and you want the ability to crop without losing detail.  It works best outdoors; there’s no point in using it in low light.

It’s waterproof.  Will I be shooting underwater?  No.  Will I be secretly grateful that my accidental fumble of the phone into the toilet won’t completely destroy it?  Absolutely.

Active noise-canceling for free.  This feature isn’t advertised heavily, but if you purchase the custom “Sony Digital Noise Cancelling Headset (MDR-NC31EM)” earbuds for $45 and stick them in the phone, the Z3v will 1. recognize they are in, and 2. enable active noise-cancelling.  This works because the earbuds have little microphones on them that sample the environment, for which the phone generates an inverse waveform in certain bands and mixes it into the output.  While the earbuds aren’t the most comfortable things to have in for more than an hour, the feature does work well — better than the noise-isolation earbuds I’ve used for a decade — and I’m thankful to have them on my commute.  I haven’t noticed any distortion, but I listen to mostly spoken material on my commute.

Wireless charging.  With a cheap Qi charger, this simply works, which is great because the USB/charging port is normally behind a waterproof door you have to keep opening and closing when connecting via that port.

Battery life.  The battery life on this phone is simply amazing given what the phone is capable of.  I can go two days on a single charge, and that includes 3-4 hours of screen-on time per day.  If that’s not good enough for you, there are two classes of power-saving modes with multiple levels of customization, the latter of which shuts down everything except calling and texting and can stretch a single charge up to seven days.  Geezus.

Sounds too good to be true?  Well…


The 20MP mode is disappointing.  The camera normally shoots everything at 8MP.  If you want the true resolution of the sensor, you can enable 20MP in “manual mode”.  It works, and you have some customization over the shot, but it’s disappointing because the sensor and lens are small enough that there is no appreciable extra detail captured in the 20MP shot.  I’ve done comparisons with the phone on a tripod in a lot of light and there was just no advantage: I scaled the 20MP shot down to 8MP in Photoshop and it didn’t look any better; I did a 100% crop of a few locations in both images and the 20MP shot didn’t have any more detail, mostly just fuzzier, larger sections.  So it’s sort-of useless, and I don’t use it.

The phone is slippery.  The front and back are glass, and the edges are a rougher plastic material.  The edges aren’t enough for me to keep a good grip on the phone at all times.

The native camera apps offer little customization.  If you want to shoot video under odd circumstances, or use the camera on a tripod to take nice stills, the native camera app — even in manual mode —  lacks a few things that make it difficult.  There’s no true manual focus or manual white balance.  You can pick focus “types” and white balance “types” but the focus and exposure are always auto-adjusting.  And the 4K video mode offers no customization whatsoever; it’s 100% auto-adjust.

60p isn’t really 60p.  For some inexplicable reason, the camera shoots video slightly slower than 59.94 or 60 fps, which are the only framerates considered 60p broadcast standard.  Out of several tests, the resulting videos had variable framerates, all nonstandard, like 59.32 and 58.86.  This leads to slightly jerky video over longer lengths of time, and can cause issues editing in post.  One video I shot and uploaded directly to YouTube without editing shows up as “1080p59”.  (The 30p video modes were all 29.97 like they’re supposed to be, so that’s good at least.)

4k video mode overheats the phone.  Seriously.  The first time you use it, you’ll get a pop-up saying that if the camera overheats, your video will be stopped, saved, and then the camera will shut down to cool off.  Sure enough, it does all that after about 5-7 minutes of 4K video shooting.  This, coupled with the 60p framerate issue noted above, seems very bubblegum-and-shoestring to me.  But, good luck getting those fixed, because:

Frankenphone = orphan.  The Z3v was the result of a partnership between Verizon and Sony; it is a hybrid of the Z2 and Z3.  As a result, neither company will fully support the phone.  I’ve tried to report the firmware bugs noted above to both companies, and both companies tell me to contact the other company.  Sony tells me that Verizon supports the phone exclusively, and Verizon tells me that any firmware bugs in the camera are the responsibility of the manufacturer.  Which really sucks, because:

Playing with the alarm volume can lock up the phone.  If you adjust the volume of an individual alarm a few times, then this hilarious thing happens: When the alarm time comes, it does not make noise but instead locks up the phone.  You have to mash and/or hold down the power button to get out of this state until the phone eventually reboots.  I was late to work one day because of this.  It would be nice to be able to report this bug to someone, but oh well.

The front-facing speakers aren’t as loud or clear as they could be.  My son used to have an HTC One M7, and his audio was louder and clearer than on the Z3v despite the hardware being almost 2 years older.  It’s not bad; just don’t assume it’s a replacement for good headphones.

The stock YouTube app doesn’t allow pre-downloading.  This feature was removed by YouTube at some point, angering hundreds of thousands of commuters, myself included.  I used the stock YouTube app on my Galaxy S4 for a full year precisely so I could predownload videos from my “Watch Later” list to view on the train; the Z3v’s app is fully updated and doesn’t allow caching of videos.

These were initially very big disappointments and I almost returned the phone because of them.  After some research, here’s how I mitigated them:


Slippery: The Verizon store had a cheap $20 flexible plastic case that I put on it just to make it less slippery until I found something else.  I haven’t found anything else, so it’s still on there.  I tried carbon fiber stickers; while they looked nice, all they did was make it more slippery.  Trying to search Amazon or other stores for “Xperia Z3v case” doesn’t work well because you keep getting results for the Z2 or Z3, both of which have different dimensions than the Z3v.

Lack of manual camera options:  I found that OpenCamera works with my phone and supports locking focus, white balance, and exposure.  This allows me to shoot videos in very odd conditions, such as a reflective glass computer CRT that emits colored light.  It doesn’t support the 60p or 4k modes of the phone because those are manufacturer-specific and have no exposed API.

Odd 60p videos:  The following AviSynth script (using the FFMS2 source filters) will “fix” 60p videos so that they can be edited in post-production without causing problems:

A = FFAudioSource("MOV_0001.mp4")
V = FFVideoSource("MOV_0001.mp4")
AudioDub(V, A)

# Force compliant framerate (will adjust audio rate to match)
AssumeFPS("ntsc_double", true)

# Resample the adjusted audio rate back to 48 kHz
SSRC(48000) # if SSRC crashes, use ResampleAudio(48000) instead

Inability to pre-download YouTube videos:  TubeMate now provides that function.  It’s clunky and buggy, but it works well enough to keep my commutes from becoming too boring.

Alarm volume adjustments lock up phone:  Adjust the alarm volume using the Settings->Alarm path instead.  Whatever you set there is inherited by every new alarm, and you can adjust that setting as much as you like without consequences.


I think it’s a great phone if the above Cons don’t affect you and you’re looking to join Verizon and get a new phone before April 2015.  (After April, I believe the new Samsung is coming out, and it remains to be seen how that compares.)

Most people will use the phone on full auto, and it is very, very good at that.  Just don’t expect manual fine-tuning of a few things.

Posted in Digital Video, Technology | 1 Comment »


Posted by Trixter on October 5, 2014

October is “National Cyber Security Awareness Month”, whatever the hell that means.  In recognition of this dubious designation, I’ve made an HD remaster of the 1990 documentary Cyberpunk available.  Consisting of interviews with William Gibson, Jaron Lanier, Timothy Leary, Vernon Reid (from Living Colour), and Michael Synergy, and briefly featuring a few industrial bands such as Front 242, Manufacture, and Front Line Assembly, the documentary provides a look at what the cyberpunk movement was circa 1990.  Subjects such as cyber-terrorism, cybernetic implants/enhancement, virtual reality/telepresence, and general counterculture rebellion against “The System” are touched upon.  Inevitable comparisons with Akira are made.

Here Be Dragons

While the producer and director did an admirable job making the source material interesting and presentable to the public, there are a lot of flaws with the documentary.  Some are minor and can be overlooked, such as the 1990s trend of inserting faux computer graphic overlays (to try to make the material more similar to the world Gibson painted in Neuromancer).  Many of the problems are with pacing; there are entire sections that focus on a particular subject for too long, sometimes without impact.  One section in particular goes so long that different digital effects start to fade in and out after a few minutes, almost as if the editor was bored and resorted to doing something with the image to keep the viewer’s interest.

There are also some very misrepresented facts and predictions, but it’s not really fair to criticize a documentary for failing to predict the future correctly.  That being said, there are some real howlers in here, from the supposed power hackers wield(ed) against governments, to the silly, amateur computer graphics that obscure hackers’ identities, to the heavily hinted-at concept that Neuromancer itself was responsible for shaping technology and history.  The most egregious is equating hacker with cracker (although, to be fair, that’s happened multiple times before and since).

A special mention must be given to Michael Synergy, who perfectly embodies the huckster who started believing his own bullshit.  Some of his claims in the documentary are so utterly, patently ridiculous, so incredibly pretentious, that it takes a great deal of willpower not to scream at him when he’s talking (especially when he mispronounces the word “genre”).  Were I him, I would have wanted this stage in my life to disappear, and it seems as if that wish has come true: His moniker disappeared with the 1990s.  My personal wild speculation is that once the real, actual revolution of the web occurred and it was able to finally call him out, he quietly exited stage left.  (Last I heard, he worked for Autodesk in the mid-1990s, was going by his birth name again, living in Hawaii, working in IT; if anyone has a real update, I would love to know what actually happened to him.)

Most depressingly, there is a real missed opportunity with how Jaron Lanier’s involvement was portrayed.  In the documentary, he comes across as a stoner who only mentions VR, which is a shame because — then and now — he’s the most relevant and accurate representation of a hacker that the documentary includes.  Of everybody interviewed, Jaron is the only person who is still exploring these concepts and ideas, and more importantly their unintended fallout, which you can read about in his most recent book Who Owns The Future?.  (Even if you don’t buy the book, follow that link and read the Q&A to get a feeling for his concerns.)

Worth watching?

While it may be hard to sit through, the documentary retains glimpses of the innocent, wildly-optimistic, techno-hippie idealism that grew with the rise of personal computing and networking.  For that nostalgia factor alone — the time when the Internet existed but the World-Wide Web did not — it’s worth an hour of your time.  It’s also worth watching to catch which ideas were especially prescient, such as:

  • Whoever holds the most information holds the most power
  • Every device will be interconnected
  • Physical boundaries will not impede meaningful communication
  • People will be individual, mobile, uncensored “broadcast stations” (considering I can post to youtube from my phone, I’d call this a reality)
  • The “matrix” as a concept and/or allegory for God (later realized almost to the letter in The Matrix movie trilogy)

…and so on.  You could make an interesting drinking game out of catching which ideas succeeded (although you’d get more drunk, quickly, by catching all of the stupid and inaccurate comments).

Cyberpunk: The Documentary is now available at archive.org.  Grab the MPEG-TS file if able; it’s 60p, Blu-ray compliant, and won’t take up too much space in your memory implant.

Posted in Digital Video, Entertainment, Technology | 1 Comment »

8088 Domination Source and Encoder Released

Posted by Trixter on August 11, 2014

I’ve formally released the source and binaries for the 8088 Domination encoding system under its original working title: XDC (stands for X86 Delta Compiler).  Head on over to x86dc.wordpress.com to browse the github source, grab some example videos, browse the documentation, or watch a screencast where I encode a video from farm to table in under 30 minutes.

Now you too can impress your friends with your own custom videos that run on a 4.77 MHz CPU with 16K of video memory!

Posted in Demoscene, Digital Video, Programming, Vintage Computing | Leave a Comment »

Out, damned bug! out, I say!

Posted by Trixter on July 29, 2014

The response to 8088 Domination was warm, wonderful, and widespread. To everyone who dropped me a note via twitter, email, or youtube — and there were thousands of you — I want to thank you for the kind and encouraging words.

Even before I finished the design, I knew that I was going to release all of the source, so that others could make their own videos for their own vintage systems. I was careful to design the system to be easy to understand, so that it could be easy to port to other languages or extend with new features. I have a lot of comments in the code, some fairly verbose, so that there is no confusion why something is designed a particular way, or why one operation happens before another. I want this to be representative of the quality of code I usually write.

So, why am I overdue in releasing the code? Bugs! Or, more accurately, edge cases. To ensure that the encoder works properly in the real world, I’ve been testing it with vastly different sources: Animations, music videos, cartoons, even a full-length movie. And almost every time, I encounter a new edge case that needs fixing. Oh, don’t worry — The code isn’t full of special cases or bubblegum-and-shoestring workarounds. It just takes time to address each issue that crops up, and determine if it’s a true bug that needs fixing, or an issue that can be safely ignored.

“Ignore issues in code? Impossabru!” Actually, here’s an example of what I mean: I discovered a few weeks ago that I could improve the efficiency of the output a few percent by re-running some optimization phases before final compilation. However, doing this will sometimes create a small “empty” 1-byte delta that actually isn’t a delta (i.e. the location contains the same data in the previous and next video frames). It’s a bug, but is it worth fixing? I could spend days rewriting the optimization phase into a gigantic, monolithic procedure where all parts coordinate… or, I can throw these 1-byte non-changes away at the end of the existing optimization phase. You can guess which path I chose.

Some bugs are indeed bugs, and they must be fixed before I put my name on the code. For example, the bug that forced the encoding loop into a deadlock, or the bug that randomly produces black flashes in the output (still working on this one), or the bug whose generated code forgot to set a single register which prevented videos from being played without a soundcard present.

So, I hope everyone understands why the code release is late. Well… one of the reasons it is late. The other reason is that making your own videos will require some documentation (some user-directed preprocessing of the source video is necessary — sorry!), and a video showing the steps involved couldn’t hurt either, so that will require a few days by itself.

While you’re waiting, why not help me decide what movie to convert and release with the final distribution? In keeping with the spirit of the time period, I’m going to convert an entire full-length movie using the system, and ensure that it will fit onto a single CD-ROM so that users without homebrew XTIDE controllers can hook up a SCSI CD-ROM drive and enjoy the flick (ironically). The de facto example for this kind of thing is Star Wars, although I’m partial to TRON, as it was released after the IBM PC itself and has its own share of iconic sequences. But, I’ve already done TRON to death, so what would you like to see? Vote in this handy poll, and if the movie you want to see isn’t there, please write your choice in the comments.

Posted in Demoscene, Digital Video, Entertainment, Programming, Vintage Computing | 6 Comments »

8088 Domination Post-Mortem, Conclusion

Posted by Trixter on June 20, 2014

This is the second (and last) part of my write-up on how 8088 Domination was achieved; the first part is here. I reuse some terminology introduced in the first part, so before we continue, it’s worth reviewing some key definitions to avoid confusion:

Delta: An area of bytes that needs to change in screen memory to update the displayed image
Slice: A delta where not all bytes are the same value
Run: A delta where all bytes are the same value

On to the nitty-gritty!

Breaking With Tradition

If you’ve coded video or animation systems in the past, you may have correctly identified what I’m doing as less of a video codec and more of an animation system. Animation systems from the 1980s such as Autodesk Animator or DeluxePaint Animation store and play back deltas by iterating through data that describe what areas of screen memory to change, using codes and chunk types for things like “skip N pixels forward, then change M pixels”, “fill entire screen to solid color N”, and so on. This reduces the size of the file, but requires some decision-making and branching while iterating through the data.

I initially did the same thing, and wrote a fast routine that would iterate through a list of deltas to replay, handling runs using the efficient REP STOSB sequence, and the rest with REP MOVSB. It looked something like this:

Delta format:
0-1: start offset
2:   fill value (if run; unused if slice)
3:   length in upper 7 bits, run/slice flag in LSB (if set, it's a run)
4-N: data (if slice)

Decompressed via:
; Prior setup:
; DS:SI = source data
; ES = destination (screen RAM)
; DX = number of deltas to iterate through
; CX = 0 (CH stays 0 throughout)

@loopit:
    lodsw       ;load offset
    xchg di,ax  ;get destination ready
    lodsw       ;AL = fill value, AH = length + run/slice flag
    shr ah,1    ;length into AH, run/slice flag into carry
    mov cl,ah   ;move length into place
    jc @run     ;if flag set, it's a run
                ;runs are the exception; slices should fall through first
    rep movsb   ;copy slice to screen
                ;okay to fall through here since cx=0, rep stosb will do nothing
@run:
    rep stosb   ;replay run to screen (AL already has the fill value)
    dec dx
    jnz @loopit
This is optimal 8088 code for this operation, but the idea has two problems. First is a minor annoyance; a byte is wasted storing a fill value even if we aren’t handling a run. But the real problem is that there are two branches (JC and JNZ) for every delta we iterate over in the list. Branches are costly on almost all CPUs, even those as old as the 8088. This was a huge concern for me, as the average new image in my test material was made up of roughly 600 deltas, most of them slices. Some quick math to illustrate why this concerned me:

# of cycles available to us to paint a frame: About 40,000
# of cycles taken up by one untaken (JC) and one taken (JNZ) branch: About 24
# of cycles used by branches to replay 600 deltas: 14,400 (36% of our total)

So, in a typical frame full of changes, more than a third of our available CPU time is wasted handling branches. In a system where we have the same time quantum as 8088 Corruption but are trying to change more data than it did, this was a big step in the wrong direction!

I thought of a few ways to mitigate this cost, such as unrolling the loop, rearranging deltas so that slices and runs are grouped together, and so on. This went on for about an hour before inspiration struck: Why not eliminate the branches altogether?

And just how the hell do you do that? By generating code instead of data. Instead of having the encoder spit out a description of what changes need to be made each frame, we switch to generating x86 opcodes that, when executed, directly implement the changes themselves.

This is the same strategy used to accelerate sprite plotting on many architectures, but when I realized I’d be doing the same thing for the entire screen, I started laughing out loud. What a ludicrous idea! And yet, in practice, you can see that it works.

It’s A Compiler!

The basic structure of a video “code” frame looks like this:

Startup code: Sets ES to point to the screen and DS:SI to point somewhere below its own instruction stream to where the data stream starts
Instruction stream: Instructions that re-point DI to new screen memory destinations and then issue combinations of MOV, MOVSB, STOSB, REP MOVSB, or REP STOSB to change screen memory contents
Cleanup code: A single RETF instruction to return to the caller
Data stream: For (REP) MOVSB, data that gets moved to areas of screen memory
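To make this concrete, here is a toy sketch (in Python, not part of the original tool chain) of a generator that emits real 8086 opcode bytes for a list of deltas. It always uses the REP forms for simplicity; the real compiler chooses among many encodings per delta:

```python
def compile_frame(deltas):
    """Emit an instruction stream plus trailing data stream for one frame.
    Each delta is (offset, kind, payload): for a "run", payload is
    (length, fill_byte); for a "slice", payload is the bytes to copy."""
    code = bytearray()
    data = bytearray()
    for offset, kind, payload in deltas:
        code += bytes([0xBF, offset & 0xFF, offset >> 8])      # MOV DI,imm16
        if kind == "run":
            length, fill = payload
            code += bytes([0xB0, fill])                        # MOV AL,imm8
            code += bytes([0xB9, length & 0xFF, length >> 8])  # MOV CX,imm16
            code += bytes([0xF3, 0xAA])                        # REP STOSB
        else:
            n = len(payload)
            code += bytes([0xB9, n & 0xFF, n >> 8])            # MOV CX,imm16
            code += bytes([0xF3, 0xA4])                        # REP MOVSB
            data += payload     # consumed in order from DS:SI at playback
    code.append(0xCB)                                          # RETF to player
    return bytes(code), bytes(data)
```

Concatenate the two streams (with the startup code in front) and CALL the result, and the screen updates itself with no per-delta branching.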

As long as the code is aligned to a DOS 16-bit paragraph boundary, it will execute properly, so the player code enforces alignment of the frame data to paragraph boundaries. Not doing so results in hilarity, as the correct screen memory locations will be changed properly, but with data from the wrong place:

This is supposed to be an anime girl, not digital vomit

(It is, of course, quite possible to rewrite a few values in the code to get it to execute properly wherever it is located, but I didn’t want to perform code fixups realtime at 60hz — the system is already slow, let’s not make it any slower.)

Because the instruction stream adds size and processing time to the video data, it’s important for us to generate optimal code that is as fast as possible without being too large. For example, if you want to set a single byte in an arbitrary location pointed to by ES:, most x86 asm programmers would do it like this:

ES: MOV BYTE PTR [nnnn],val

This is fast and is 5 bytes in size. But if you have your value pointed to by DS:SI, you can also do it like this:

MOV DI,nnnn
MOVSB

This is also 5 bytes (4 opcode, 1 data) and is slightly slower, but because MOVSB advances DI automatically, it can save you from having to do the same thing manually. For a single byte it’s not a win, but what if we have three bytes to change? We can continue to set them directly:

ES: MOV WORD PTR [nnnn],mmmm
ES: MOV BYTE PTR [nnnn],mm

…or do this instead:

MOV DI,nnnn
MOVSB
MOVSB
MOVSB

The latter method is much smaller and slightly faster. (This can go on for a while, but eventually there is a break-even point where switching to REP MOVSB is faster than all other encodings.)
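The trade-off can be tabulated. Here is an illustrative Python sketch using the article’s own byte counts (segment overrides not counted, and assuming the REP variant loads CX with a 3-byte MOV CX,imm16):

```python
def encoding_sizes(n):
    """Byte cost of three ways to change n contiguous screen bytes whose
    new values travel in the data stream.  Counts follow the article's
    convention (the ES: segment override is not counted)."""
    # Direct stores: MOV WORD PTR [nnnn],imm16 is 6 bytes per word,
    # plus MOV BYTE PTR [nnnn],imm8 (5 bytes) for a leftover byte.
    direct = (n // 2) * 6 + (n % 2) * 5
    # MOVSB chain: MOV DI,nnnn (3 bytes) + n * MOVSB (1 byte each),
    # plus n bytes read from the data stream at DS:SI.
    movsb_chain = 3 + n + n
    # REP MOVSB: MOV DI,nnnn (3) + MOV CX,imm16 (3) + REP MOVSB (2) + n data.
    rep_movsb = 3 + 3 + 2 + n
    return direct, movsb_chain, rep_movsb
```

For one byte the direct store and the MOVSB chain tie at 5 bytes; at three bytes the chain wins (9 vs. 11); beyond five bytes REP MOVSB becomes the smallest encoding.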

Although I had worked out most optimal combinations for various slice and run durations, in the end I felt it was better to just have the compiler generate every code variation, calculate how many cycles each one took to execute, and pick the fastest one. (I figured it was safer and more future-proof than me trying to hand-optimize generator output.) Calculating cycle counts for the 8088 is almost as easy as it is for 8-bit CPUs; the 8088 has only one core, no cache, no threads, no pipelines, no out-of-order execution… it does have a prefetch queue, but it is only 4 bytes long so it isn’t very effective. The major factor in optimizing 8088 code for speed is minimizing memory accesses, because the CPU takes 4 cycles to read (or write) a byte — any byte, even instruction stream opcodes. So, in most cases, the smallest code usually wins. The only exceptions to this rule are instructions that take an extremely long time, such as MUL/DIV, which can run for over 100 cycles depending on the operands.

Andrew Jenner, a good friend and a better coder than I am, has an excellent rule of thumb for determining 8088 code execution speed: Multiply the size of the opcode and the size of any data touched by that opcode by 4 for an informal cycle count; then, also determine the sum of each instruction’s published cycle count. Whichever number is larger is the more accurate execution time estimate.
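As a sketch, the rule of thumb reduces to a one-liner:

```python
def estimate_cycles(opcode_bytes, data_bytes, published_cycles):
    """Andrew Jenner's rule of thumb as described above: every byte the
    8088 moves across the bus (opcode fetches included) costs about 4
    cycles, so take the larger of the bus estimate and the documented
    execution time."""
    return max(4 * (opcode_bytes + data_bytes), published_cycles)
```

For example, a 3-byte MOV DI,imm16 with a 4-cycle published time estimates at 12 cycles, because the prefetch of the instruction bytes dominates.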

I won’t go over the code generator itself in this write-up because it is very mundane and not terribly exciting; refer to the source code when I release it in a few weeks.

Delta Optimizations

Once I had an idea of the code generation cost, I came up with a couple of ways to reduce that cost by manipulating the delta list before it was processed by the encoder. Less work for the compiler to do meant smaller code/data and faster execution. Delta optimization consists of four phases:

  1. Run Identification and Splitting. Because runs process faster and encode much smaller than slices, it is a huge win to identify any runs hiding inside of slices and split them out into their own delta. This phase also marks any runs it finds as excluded from further processing (“frozen”), as runs are already optimal.
  2. Pixel “Shaving”. Changing only a single byte in screen memory has a very high cost (5 opcode bytes, plus the time they take to execute) so pixel “shaving” looks at each single-byte delta to determine how many pixels are actually changed by the byte. If a particular threshold is not met (ie. “more than one pixel”), the delta is dropped completely. This is a user-configurable option and is off by default.
  3. Delta “Shaving”. Identical to pixel shaving, except entire deltas are dropped if they aren’t large enough. The default threshold is “more than two bytes”; anything smaller is dropped. This is also user-configurable, and also off by default.
  4. Delta Combination. This phase looks for deltas that are spatially close to each other in linear memory and combines them together if the end result would encode as less bytes. For example, assume we have three 1-byte deltas all spaced one byte apart. Also assume that replaying these three deltas costs 5 bytes each, for a total of 15. Now consider what happens if we combine all three deltas into a single delta spanning the three changed bytes: The number of bytes changed onscreen will grow by 2, but we shed 10 bytes because we only have one delta to set up and replay. It is a net win, so it is always worth it to try to find combination opportunities. (This is technically an NP-hard problem, and implementing it quickly and stupidly as an exhaustive search greatly slowed down the code. I optimized it by re-sorting the deltas by start offset, so that the search space is localized around the delta(s) being examined. After all the combinations are found, the deltas are re-sorted back into the order that assists the encoding phase, as described earlier in part 1.)
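Phase 4’s net-win test can be sketched like this (Python; setup_cost is a hypothetical flat per-delta overhead in bytes, whereas the real encoder asks the code generator for exact costs):

```python
def should_combine(d1, d2, setup_cost=5):
    """Decide whether merging two deltas (start, length) that are close
    together in linear memory encodes smaller than keeping them separate."""
    start = min(d1[0], d2[0])
    end = max(d1[0] + d1[1], d2[0] + d2[1])
    # Separate: per-delta setup twice, plus the changed bytes themselves.
    separate = 2 * setup_cost + d1[1] + d2[1]
    # Combined: one setup, but the whole span (including the gap) is rewritten.
    combined = setup_cost + (end - start)
    return combined < separate, combined, separate
```

With the text’s example of 1-byte deltas spaced one byte apart and a 5-byte setup cost, merging two of them encodes as 8 bytes instead of 12, so the merge wins despite rewriting one unchanged byte.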

All of these phases reduce size and execution cost. The pixel shaving and delta shaving phases have the added benefit of cleaning up the source video a little; if a pixel is “shimmering” over time due to being right at the error threshold of the dithering process, it will be “stable” with pixel or delta shaving turned on. The drawback to the shaving phases, however, is that the output can leave “trails” onscreen as smaller changes are never fully cleaned up or overwritten. Use with caution.

(While not benefiting optimization, there is also a prep phase that executes before the other phases and performs oversize delta splitting, which takes deltas that are too large to execute within our available cycle pool and breaks them up into smaller deltas. This is always necessary when the entire screen changes, as this creates a delta list that consists of only one single delta with a start offset of zero and an end offset at the edge of screen memory. A delta that big is way over both the available byte and cycle limits, so it has to be split into smaller chunks to be replayed over multiple passes.)

Playing With Variable Packets

The player for the 8088 Domination FMV data is very similar to 8088 Corruption: By controlling the size of the audio data the soundcard interrupt handles, we can get the interrupt to fire at our desired video framerate and use the opportunity to update the video as well. The interrupt handler pulls data out of a queue and updates the screen at the same time it updates the audio. While the interrupt is firing in the background, a foreground loop is constantly reading from disk and putting data into a queue. I cover this in more detail in 8088 Corruption Explained, so if you have a half hour to kill, I highly recommend snagging the MPEG-2 file (best quality) and watching it.

Where the players differ, however, is in two areas:

  1. Instead of moving video data to screen RAM, the Domination player CALLs the video frame code, which executes and then returns
  2. The read-and-fill-memory loop, as well as the interrupt handler pointer management code, deals with variably-sized video+audio packets; this is because the output of the encoder varies in size based on how many changes are present from frame to frame

Two changes were made to the muxed video+audio stream for Domination that not only enabled handling variably-sized packets, but also sped up disk reads. The first change was to align each packet of video+audio data to disk sector boundaries, which sped up disk reads due to the way DOS handles buffering: DOS will normally transfer disk requests into its own internal buffers (if you’ve ever wondered what the BUFFERS= line in CONFIG.SYS was for, now you know) and then copy to the caller’s buffer. However, if the caller requests reading a sector-aligned offset (and amount) into a normalized paragraph-aligned pointer, DOS is smart enough to instruct the BIOS to transfer the data directly to the caller’s buffer. This made disk reads return a little quicker, as DOS’s usual double-buffering step was avoided.

The second change to the player was to keep track of how large each video+audio packet was. Rather than put size headers before each chunk, or scan the entire file before starting to determine sizes, I chose to write an index to the end of the file stream. The index consists of one byte per video+audio packet, where each byte indicates the size of the packet in sectors; this is possible because each packet is guaranteed to be aligned to sectors. (This limits the size of each packet to (255*512) = 127.5KB, but we will never see a single frame that large in practice; in fact, we will never see a packet larger than 64KB because that is the 16-bit real-mode segment size limit.)
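The layout can be sketched as follows (Python; illustrative only, not the actual 8088 Domination file format):

```python
SECTOR = 512

def mux_packets(packets):
    """Pad each variably-sized video+audio packet to a sector boundary and
    build the one-byte-per-packet index of sizes in sectors described
    above.  The index would be appended to the end of the stream."""
    stream = bytearray()
    index = bytearray()
    for p in packets:
        sectors = -(-len(p) // SECTOR)   # round up to whole sectors
        assert sectors <= 255            # one index byte per packet
        stream += p.ljust(sectors * SECTOR, b"\0")
        index.append(sectors)
    return bytes(stream), bytes(index)
```

Because every packet starts and ends on a sector boundary, the player can read each one with a single sector-aligned DOS read straight into its paragraph-aligned buffer.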

The most amount of time I spent enhancing the player for the Domination format was, to my surprise, the memory handling. The original player used a circular buffer (a FIFO queue) that fed disk data into the “head” while the interrupt handler grabbed data from the “tail”. Typical circular buffers are divided into fixed-size blocks, but I had just switched to variably-sized packets. I needed a FIFO that could:

  • Accept variably-sized allocations/deallocations
  • Manage a linear area of memory with wraparound without requiring that area to be a power-of-two length
  • Be interrupt-safe (ie. no locks/mutexes/semaphores required to put data in or pull data out)
  • Always return paragraph-aligned normalized pointers
  • Perform all of the above while dealing with 16-bit real-mode segment limitations

In the end, I wrote code that was not so much a typical circular buffer, but more of a heap manager where the caller “reserves” an amount of memory, receives a pointer to an area they can use, fills it with data, and then “commits” the area. Prior commits can be retrieved in FIFO order using another function. The design is interrupt-safe because reserves and commits don’t touch the data structures that are used by retrievals, and vice versa. I know it sounds stupid to be proud of a data structure, but I was pretty pleased with myself when I finished implementing it. (I’ve since learned there is a similar construct out there called a bip buffer, but a bip buffer wastes more memory and time than what I came up with.)

In Search Of Test Material

With two fully-functioning video modes supported by the encoder, I now had to choose some test material to show it off. For the color portion, I decided to use exactly the same footage that I’d used 10 years earlier with Corruption, so that people could directly compare them and see for themselves that the newer method was better overall. For the B&W portion, I had difficulty picking material; I was about to go with one of the old Apple silhouette-style iPod/iTunes commercials until I saw a Game Sack episode where a homebrew Genesis cart was shown playing the Bad Apple animation. I was hooked — it was perfect test material. High-contrast shadow puppetry lent itself very well to my “animation compiler” because, most of the time, very little is actually changing per frame, and what is changing has very clean residuals.

Finding a clean source of the Bad Apple animation proved more difficult than I thought it would be. Every YouTube version had resizing or transcoding artifacts, so after some research I found the original Japanese video-sharing site it originated from and grabbed it from there, which resulted in an extremely clean 30fps source to work with.


8088 Domination may be my best work; I’m very proud of the result. I had to think creatively and unconventionally to solve problems. If people are considered artists based on the uniqueness and viewpoint of their output — paintings, novels, musical works — then I’d like to think programmers can be artists too, and judged by the same conventions.

I want to fix a few bugs in the source and tidy it up, and once I’ve done that I will release the source, executables, and documentation so that you can create your own videos using the system. Until then, enjoy a video of the competition I showed it at, complete with audience reaction:

Posted in Demoscene, Digital Video, Programming, Vintage Computing | 15 Comments »

8088 Domination Post-Mortem, Part 1

Posted by Trixter on June 19, 2014

These next few posts are a technical write-up of how I created 8088 Domination, which is a program that displays fairly decent full-motion video on a 1981 IBM PC. Before reading further, it might be worthwhile to familiarize yourself with the production itself if you haven’t seen it. Here is video of what it looks like running on a 1981 IBM PC 5150 connected to a TV via the composite output jack:

…and if you doubt the above video’s authenticity, you can download the production and run it on your own vintage hardware as proof.

I would like to apologize in advance for switching between past tense (ie. my thought process during the design phase) and present tense (when describing how to achieve certain objectives) throughout this write-up.


I’ve been fascinated by desktop digital video for decades. Starting with the first Video For Windows tools in the early 1990s, using early capture hardware (like the Media Vision Pro Movie Spectrum, which captured directly into MSV1/CRAM format), I’ve toyed with desktop video and its possibilities.

Of course, I’ve also been fascinated with demos since the early 1990s, and when you love two things that much, you find ways to mash them together. Many of my demoscene productions have had video as a part or whole of them in some way. Grind (1993) was a modplayer that displayed dancing Spring Break girls in perfect time to the music; the end-part of Explicit (1996) has dancing clubbers embedded in cubes, also dancing in perfect time to the music; and of course 8088 Corruption (2004) which displayed FMV on a 1981 IBM PC and took a chunk out of my 15 minutes.

8088 Corruption was not recognized for amazing video quality, but rather for its novel solution to the problem of trying to display FMV on such a limited platform. That solution was to use 40×25 text mode, since that was the only mode whose video memory was small enough that it could be completely changed every frame without distracting artifacts or overtaxing the system.

I gave a talk in 2007 that explained 8088 Corruption in detail, and in that talk I explained that displaying FMV using CGA in graphics mode would be impossible. This is because CGA graphics mode uses 8x the amount of video memory that 8088 Corruption was handling. Even a simple calculation assuming 24fps video reveals that the amount of data needing to be updated per second (24fps * 16KB = 384KB/s) is outside of the IBM PC’s capability: CGA RAM can only be changed at a rate of 240KB/s, and most hard drives of the era operate at roughly 90KB/s. It sure felt impossible, so that’s why I said it.

Then I thought about the problem for 7 years.

Adjusting Expectations

I came to several realizations over the past few years thinking about this “impossible” problem. The most basic realization is that it isn’t always necessary to change every single pixel on the screen to show convincing FMV. If only a portion of the frame is in motion, such as in traditional animation, or a “talking heads” newscast, not every pixel is changing onscreen. But even if all pixels in a frame are changing, they might only be changing just a little bit… and if they are changing that little, the viewer might not notice if we cheat and don’t change them at all. So, we need to let go of the idea that we must change the entire screen every frame.  With the right picture content, we can get away with changing only a portion of screen memory, and it will look like we’re changing the entire screen.


Left: Black and White=differences, Grey=unchanged. Right: What the user sees after applying only the changes

Another realization came from re-evaluating what kind of bandwidth is available on the system. Previously, I was using the “40×25 text mode 2K of RAM” definition as the limit of what can be changed onscreen. But “2K of data changed” isn’t the real limitation; rather, it’s “not exceeding the time it takes to alter 2K”. In other words, the real limitation is not exceeding a specific time quantum per frame updating screen memory, no matter how much screen memory you’re changing.

This led to a similar thought about disk I/O. In Corruption, I treated disk bandwidth as a series of fixed-length reads at a frequency of the framerate I needed to produce. As a result, Corruption uses fixed-length video+audio packets, so that (framerate * packetsize) is guaranteed to be less than the available disk bandwidth. But there’s no reason to adhere to that; the real limitation is the disk I/O itself, and I could read any amount of disk I wanted in a given time period as long as I didn’t exceed what the hardware could provide. This led to the realization that video+audio packets could be of a variable length.

The combination of these and other ideas led to the design of a system that could fulfill the original requirement of “FMV in graphics mode”, which means we need a system that:

  • Finds differences between frames and updates screen memory with only those differences
  • Keeps track of how much time it takes to change screen memory and doesn’t exceed it
  • Keeps track of how much disk space is used and doesn’t exceed the transfer rates of the disk subsystem
  • Most importantly, degrades gracefully if we run out of CPU time or disk bandwidth before the source image is finished displaying

Because the IBM PC is not fast enough to make these decisions realtime (nor has the bandwidth to do so), all of these constraints needed to be handled during the encoding phase, when we have nearly unlimited time. That also means that most (if not all) of these constraints are really just data management problems, which I’ll get to later. But before we write even a single line of code, we need to back up a little and see how to convert 24-bit color pictures into something that doesn’t look like total crap on a CGA card.

CGA FMV Graphics Conversion

A quick primer on CGA: CGA was the first color graphics standard for the IBM PC, and contains 16KB of memory to hold screen contents. That memory can be presented as a few different graphics modes that appear different, but are actually all the same under the hood: 80 “byte columns” wide by 200 lines. Those 80 byte-columns can be sliced up a few different ways:

  • 640×200 with 2 colors (1 bit per pixel)
  • 320×200 with 4 colors (2 bits per pixel)
  • 160×200 with 16 colors (4 bits per pixel)

Let’s see what those look like in practice. Here’s a pretty hummingbird:


…and here’s that hummingbird converted to our three basic choices without any fancy processing:


160×200, 16 colors


320×200, 4 colors, palette 1

640×200, 2 colors


Already the graphics programmers in the audience are screaming DITHER THAT SHIT, and they would normally be correct. However, most dithers that people are familiar with are error-diffusing dithers, which means that error differences are propagated throughout the entire picture. This is not a good idea for a system that is going to be recording changes between frames, because if a single pixel changes in the source material, every single pixel in the dither algorithm’s row scanning order after the change is going to change as well. What we need is a so-called stable or localized dither, where the value of each output pixel does not depend on its neighbors. That way, when only a local area of the source material changes, only that area of the destination dither will change as well.

While there are some modified, alternate dithers that keep error propagation to a spatial minimum (ie. Riemersma dither, which dithers along a Hilbert curve), the only truly localized dithering algorithm I know of is ordered dithering, sometimes called Bayer dithering. Some of the best publicly-available research on color ordered dithering is by Bisqwit (“Yliluoma’s arbitrary-palette positional dithering algorithm“) so I won’t try to explain the concepts here; consult that instead.
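For reference, a minimal greyscale ordered dither (the classic 4×4 Bayer matrix, not Bisqwit’s full arbitrary-palette algorithm) looks like this:

```python
# Classic 4x4 Bayer threshold matrix.
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def ordered_dither_1bit(pixels):
    """Threshold each grey pixel (0-255) against a position-dependent
    Bayer value.  The decision at (x, y) depends only on that pixel and
    its coordinates, so changing one source pixel changes at most that
    one output pixel -- the "localized" property described above."""
    out = []
    for y, row in enumerate(pixels):
        out.append([1 if p > (BAYER4[y % 4][x % 4] + 0.5) * 16 else 0
                    for x, p in enumerate(row)])
    return out
```

Feed it a flat 50% grey and you get the expected checkerboard; perturb one source pixel and only that output pixel can flip.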

Applying even a rudimentary ordered dither to our sources now shows this for our hummingbird converted to our three graphics modes:

160×200, 16 colors, ordered dither

320×200, 4 colors, palette 1, ordered dither

640×200, 2 colors, ordered dither

Much nicer, and we don’t have to worry about the dither introducing massive differences between frames if only one pixel changes in the source.

While all three of those modes look viable in their own way, for Domination I had to make the decision to discard the middle one (320×200, 4 colors). This is because the 16-color and 2-color modes display properly via the composite output jack on the CGA card, and I knew I’d have to present at the compo with the native hardware hooked up to a projector.  So my choices were limited to 160x200x16 and 640x200x2.

Managing Ch-Ch-Changes

The very fastest way to change screen memory on the 8088 is to either copy a linear sequence of bytes using REP MOVSB, or set a linear sequence of bytes all to the same value using REP STOSB. With this in mind, I chose to keep track of each change as a starting and ending offset of changed bytes, because that was closest to the exact method I would be using to “replay” those changes. I adopted a terminology to reduce confusion throughout the source code: I coined the term “delta” to refer to an area of changed bytes, and further classified deltas into “slices” (a delta where not all bytes are the same) and “runs” (deltas where all bytes are the same). This was important because “runs” can fill screen memory extremely quickly using REP STOSB, so it was worth keeping track of them. Here is the actual data structure I used:

  PDelta = ^TDelta;
  TDelta = object(TObject)
    startOfs:word;         {starting offset of delta in buffer (first changed byte)}
    endOfs:word;           {ending offset of delta in buffer (last changed byte)}
                           {rest are calculated during insert or other operations:}
    dtype:deltaType;       {slice or run}
    fillvalue:byte;        {for informational purposes only, not used for anything}
    blength:word;          {length of delta in bytes}
    numPixelsChanged:word; {# of pixels changed in the entire delta}
    REPcycleCost:real;     {different whether slice or run}
    frozen:boolean;        {flag used to control various optimization phases}

    Constructor Init(starto,endo:word);
    Destructor Done; virtual;
  end;
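The differencing pass that produces these deltas can be sketched in a few lines of Python (simplified: classification only, with no later run-splitting inside slices):

```python
def find_deltas(old, new):
    """Scan two equal-length byte buffers and emit (start, end, kind)
    spans of changed bytes, classifying each span as a 'run' (all new
    bytes identical, a REP STOSB candidate) or a 'slice' (REP MOVSB)."""
    deltas = []
    i, n = 0, len(old)
    while i < n:
        if old[i] == new[i]:
            i += 1
            continue
        start = i
        while i < n and old[i] != new[i]:   # extend across changed bytes
            i += 1
        span = new[start:i]
        kind = "run" if len(set(span)) == 1 else "slice"
        deltas.append((start, i - 1, kind))
    return deltas
```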

All of these deltas needed a good structure to collect and manage them, so I decided to use a Turbo Pascal 7 data structure called a TSortedCollection, which is analogous to a sorted linked list. Deltas were added to this list and maintained in sorted order using a variety of criteria in the following priority:

  1. Larger/Longer Deltas placed before smaller/shorter ones
  2. Given the same length, “runs” placed before “slices”
  3. All runs sorted by their fill value
  4. Given the same length, all deltas sorted by their interlaced start offset in memory

The sort order seems like an odd thing to mention here, doesn’t it? I’ve done so because it was a contributing factor to the success of the project, as it solved many data processing problems later on. It was so helpful that I think it’s worth going over in detail:

  1. Larger/Longer Deltas placed before smaller/shorter ones: As we process deltas from the top of the list to the bottom, the largest areas of the screen will change first until we run out of bandwidth. If we decide to completely jettison the list entirely before it is finished due to bandwidth starvation, only the smallest changes at the bottom of the list — the ones that don’t matter very much visually, as they only change a few pixels — are discarded.
  2. Given the same length, “runs” placed before “slices”: This gave priority to runs, because setting an area of screen memory with REP STOSB is both faster and smaller than trying to copy the same area with REP MOVSB.
  3. All runs sorted by their fill value: This came in handy during the encoding phase, where I was able to set a fill value once and then cache that value for subsequent runs if applicable. I would not have been able to do that if all the similar fill values weren’t right next to each other.
  4. Given the same length, all deltas sorted by their interlaced start offset in memory: This was an aesthetic choice; if I had to change an area where all deltas were exactly the same length (like a rectangular area of the screen), this allowed the changes to “paint” in somewhat linear top-to-bottom order.  This is necessary because CGA’s framebuffer is not organized in a linear fashion, but is interlaced.

For the curious, this is what the sort operator looked like:

Function TDeltas.Compare;
{
We want to maximize motion fidelity, so we are going to make our lives
easier and sort deltas by size.
Note that the comparisons are reversed -- that's because we want the largest
deltas at the beginning of the list.  (-1 means beginning of collection.)
- We add a second comparison so that deltas of the same length are sorted
by cycle execution time; this keeps runs prioritized over slices in terms
of what to prioritize if our bandwidth gets starved.
- We add a third comparison so that runs of the same run value
are kept together, so that we can cache the run value.
- We can optionally add a fourth comparison so that, all things being equal,
deltas are sorted by start offset compensated for CGA interlaced memory layout
(this is what I'm colloquially calling "Dave Murry interlace handling"
based on how Dave handled it in the PC booter Sierra Championship Boxing).
}
var
  k1so,k2so:word;
begin
  k1so:=PDelta(Key1)^.startOfs;
  k2so:=PDelta(Key2)^.startOfs;
  if MurryHandling then begin
    {if k1so > screenRAMsize div 2 then k1so:=k1so-(screenRAMsize div 2);
    if k2so > screenRAMsize div 2 then k2so:=k2so-(screenRAMsize div 2);}
    k1so:=k1so AND $1fff;
    k2so:=k2so AND $1fff;
  end;
  {sort by delta length}
  if PDelta(Key1)^.blength > PDelta(Key2)^.blength
    then Compare := -1
    else if PDelta(Key1)^.blength < PDelta(Key2)^.blength
      then Compare := 1
      {sort runs at a higher priority than slices}
      else if PDelta(Key1)^.REPcycleCost < PDelta(Key2)^.REPcycleCost
        then Compare := -1
        else if PDelta(Key1)^.REPcycleCost > PDelta(Key2)^.REPcycleCost
          then Compare := 1
          {sort runs by fill value}
          else if PDelta(Key1)^.fillvalue > PDelta(Key2)^.fillvalue
            then Compare := -1
            else if PDelta(Key1)^.fillvalue < PDelta(Key2)^.fillvalue
              then Compare := 1
              {sort deltas by start offset}
              else if k1so < k2so
                then Compare := -1
                else if k1so > k2so
                  then Compare := 1
                  else Compare := 0;
end;

With the data structures finalized, encoding one video frame’s worth of changes becomes a simple matter of doing the following:

  1. Scan the preexisting frame and new source image for differences, and insert each found difference into the delta list data structure
  2. Determine the amount of CPU time and disk bandwidth we have available to paint our changes (a “byte pool” and a “cycle pool”)
  3. For every delta in the list, starting with the first and iterating to the last:
    1. Estimate the cycle and byte cost to paint this delta
    2. If our cycle or byte pools won’t be exhausted by these costs, encode the delta to disk and then remove it from the list, then update the pools
    3. If one of the pools will be exhausted, skip this delta and move to the next one, which will likely be smaller or faster (or both) and have a better chance of not exhausting a pool
  4. If we empty the list, we’re done with the frame
  5. If we reach the end of the list, we are out of “time” for this frame and need to either:
    1. Spend the next frame going through the list again to finish up so that the existing source image has a chance to fully display, or:
    2. Throw away whatever is left and start working on a new source image

That’s pretty much it. When I release the source in a few weeks, you’ll see that the code implements the above logic very closely. However, there’s one very important detail that I haven’t explained yet, and it’s critical to playback performance: What does “encode the delta to disk” mean exactly, and how do we play an encoded delta back to screen memory as quickly as possible? Head on over to the conclusion of  our thrilling tale, where I’ll explain how I came up with a unique solution to that problem.
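Steps 2 and 3 above boil down to a greedy loop over the sorted delta list. A Python sketch, where cost_fn stands in for the code generator’s byte/cycle estimates:

```python
def encode_frame(deltas, byte_pool, cycle_pool, cost_fn):
    """One pass of the budgeted encoding loop described above.  deltas is
    already sorted largest-first; cost_fn(delta) returns (bytes, cycles).
    Returns the deltas encoded this frame and the leftovers to retry."""
    encoded, leftover = [], []
    for d in deltas:
        b, c = cost_fn(d)
        if b <= byte_pool and c <= cycle_pool:
            byte_pool -= b          # "encode" the delta and update the pools
            cycle_pool -= c
            encoded.append(d)
        else:
            leftover.append(d)      # skip; a smaller delta may still fit
    return encoded, leftover
```

Note that a delta too big for the remaining pools doesn’t end the frame; smaller deltas later in the list still get their chance, which is exactly the graceful degradation the design calls for.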

Posted in Demoscene, Digital Video, Programming, Vintage Computing | Tagged: , | 48 Comments »

8088 Domination

Posted by Trixter on June 17, 2014

A few days ago, I debuted 8088 Domination at @party 2014, which is an official sequel to 8088 Corruption that I made 10 years earlier. Like Corruption, 8088 Domination displays full-motion color video with audio on a 1981 IBM PC with CGA, a Sound Blaster, and any hard drive — but unlike Corruption, Domination runs in full graphics mode rather than text mode only. This is significant because graphics mode requires 8x more memory and processing, and I had to combine a lot of creative and technical disciplines in order to pull it off.

Here is a capture of 8088 Domination running directly off of my IBM PC 5160; video is the composite CGA output signal, and audio is from a Sound Blaster Pro 2.0.

I am working on a postmortem write-up so you can all learn how I did it, but until then, download the party version at https://www.scene.org/file.php?file=%2Fparties%2F2014%2Fatparty14%2Fdemo_oldschool%2F8088_domination_party_version.zip&fileinfo if you’d like to run it on your own vintage hardware.

PS: A second round of thanks to Great Hierophant, without whom I wouldn’t have been able to show off my production. He locally provided the hardware that I was unwilling to ship across the country.

Posted in Demoscene, Digital Video, Programming, Vintage Computing | 15 Comments »

I grow tired of the technologically ignorant

Posted by Trixter on February 29, 2012

(This post is overly subjective, more opinionated than my usual efforts, and contains some cussing.  Consider yourself warned.)

I am sick and tired of people who shun technology and progress under the guise of “I’m an old tech veteran, I’ve been working with technology for 30 years, and the new stuff is crap compared to the old stuff.”  People who defend this viewpoint are idiots.  I’m not talking about audiophiles or other self-delusional “prosumers”; I’m talking about people who have worked a tech trade or had hands-on access to technology for many years and think that their perceptions trump reality.  It’s a perverse combination of technology and anti-intellectualism — a form of hipsterism for the over-40 set.

I was prompted to cover this by a recent post on why widescreen monitors are a rip-off (which I will not link to because I truly enjoy the other 99% of this person’s blog, and linking to it would imply that I don’t like him or his site), but the underlying irritation of the entire mindset has been percolating for many years.  Viewpoints that drive me crazy include:

Widescreen monitors don’t make any sense

People think that widescreen monitors are stupid on laptops because most people use laptops for text work, and since text is more comfortable to read in columns, wide columns are harder to read.  This mindset has had the doubly idiotic result of making people think that websites need to be column-limited.  I just love going to a website and having the text squished into a 640-pixel-wide column with 75% of the screen unused.  Don’t like how narrow columns look on a widescreen monitor?  Use the extra space however you want — put up two web pages side by side, or simply don’t look at the unused space.  It’s people like these that also complain that 4:3 video has black bars on either side of it when viewed on a widescreen TV.  It’s called pillarboxing, you idiot, and it’s there to prevent your movie from looking like a funhouse mirror.

Widescreen monitors have made modern laptops better.  A widescreen display allows the keyboard to be wider without making the laptop deeper (to accommodate the height of a 4:3 screen).  Having a decent keyboard on a laptop used to be impossible without clever engineering tricks; now it’s routine.  Widescreen monitors also made ultra-small netbooks possible, so if you’re reading this on a netbook but somehow still disagree with me, you’re a hypocrite.

Analog audio is better than digital

There are entire websites (and Wikipedia pages) dedicated to this, usually under the guise of “vinyl is better than CD”.  Most opinions on this subject were formed when analog audio had several decades of mature mastering and production processes behind it, and digital was brand-new (for example, vinyl vs. CD in 1983).  Early efforts to put things on CD resulted in some less-than-stellar A/D conversion, which created a type of distortion that most people weren’t used to hearing.  People formed opinions then that have persisted more than 25 years later, even though the technology has gotten much better and all of the early mastering problems have long since been corrected.

People who think vinyl sounds better than CD have nostalgia blinders on.  They bought an album in their youth, played it endlessly, loved it.  Then they buy the same album on CD decades later and condemn the entire format as inferior because it sounds different.  Want to know why it sounds different?  It has a wider frequency range, lacks rumble, lacks hiss, sounds exactly the same after 10+ playbacks, and was remastered with better technology and mixing conditions under the guidance and approval of the original artist when he wasn’t coked or drunk or stoned out of his mind.  People like Pete Townshend, Neil Young and Geddy Lee not only approve of the latest digital technology but are actively utilizing it and taking great pains to remaster their classic albums with it.  People are missing the point that it is the mastering and digital compression that cause issues, not the technology itself.  Neil Young recently spoke at a conference where he damned digital music, but not because it is digital — rather, because it is delivered differently than the artists intended.  Neil Young would like nothing better than for everyone to be able to listen to his music at 24/192.  Can’t do that on vinyl, bitches.

Even people who write about the loudness war get it wrong, even though it’s an easy concept to understand.  Massive dynamic compression drowns out subtle details and can add distortion, which is horrible — but it is not exclusive to digital audio, nor caused by it.  One author correctly notes that massive dynamic compression butchers mixes, but then subtly implies that all CDs that “clip” have distorted audio.  Digital audio “clips” only if you drive the signal beyond its digital limits.  If you take an audio waveform and normalize it such that the highest peak reaches exactly the highest value, it is “positioned at maximum volume”, not clipped.  Nothing is lost (to be fair, nothing is gained either).
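
The difference between normalizing and clipping takes only a few lines to demonstrate.  Here is a minimal sketch, assuming 16-bit signed samples; the variable names are illustrative, not from any real mastering tool:

```python
# Sketch: peak normalization vs. clipping, with 16-bit sample limits.
FULL_SCALE = 32767  # maximum positive value of a 16-bit signed sample

samples = [1000, -8000, 12000, -4000]

# Normalizing scales every sample so the loudest peak lands exactly at
# full scale -- nothing exceeds the limit, so nothing is clipped.
gain = FULL_SCALE / max(abs(s) for s in samples)
normalized = [round(s * gain) for s in samples]

# Clipping only happens when the signal is driven PAST the limit and
# the excess is chopped off, discarding waveform detail.
overdriven = [s * 4 for s in samples]  # 12000 * 4 = 48000 > 32767
clipped = [max(-FULL_SCALE - 1, min(FULL_SCALE, s)) for s in overdriven]
```

After normalization the loudest peak sits exactly at full scale and every other sample keeps its shape; in the overdriven version, the 48000 sample is flattened to 32767 and that part of the waveform is gone for good.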

The problem is the mastering and production process, not the technology.  Which segues nicely into:

“I will never buy Blu-ray”

The only valid argument against Blu-ray is that it is harder to make a backup copy of the content.  It is indeed harder than it is for DVD, or laserdisc, or videotape.  That is it.  All other arguments are beyond moronic.  Even the cheapest possible 1080p HDTV viewing setup has five times the resolution of DVD and lacks signal degradation in the output path.  If you view a Blu-ray and can’t tell the difference between it and DVD, you have either a shitty viewing setup, a shitty Blu-ray, or a shitty visual cortex.

Someone recently tried to argue with me that DVDs have the same or better picture than Blu-ray and used Robocop as an example.  The comparison was weighted, as they were comparing the $9 Blu-ray that MGM belched out when Blu-ray was only a year old to the Criterion DVD treatment.  I own both, so I checked them out and I agree that the DVD has better color tonality throughout the film.  However, the Blu-ray thoroughly stomped the DVD in every single other area, most obviously resolution.  So much picture detail is added by the increase in resolution that I actually prefer it despite the lack of Criterion oversight.

The real problem, as previously stated, is how the mastering and production process was handled.  Even with new 2012 DVD releases, you can still see the video equivalent of the loudness war: digital ringing, which used to be an accident but is now introduced on purpose as part of a misguided “sharpening” step.  Listen up:  A sharpening filter added to a signal doesn’t make it sharper; it makes it appear sharper by overlaying a high-frequency signal on top of the original content, which increases the acutance (perceived edge contrast).  Quality is actually lost when you do this, as the added high-frequency info obscures actual picture detail.

This is another example of perception vs. reality, which not coincidentally also segues into:

“Computing was better in the old days”

I love retrocomputing as a hobby.  I think about it nearly every day; this blog was partially created to talk about vintage computing.  But even I wouldn’t say that things were better in the old days.  People who say this don’t realize they are really trying to say something else.  For example, people who say that “BBSes were better than web forums are today” are actually referring to the sociological fact that, when you communicated with people on a BBS, you were communicating with people who met a minimum level of technical competence — because, if they hadn’t, they would have been too stupid to access a BBS, let alone be proficient with a computer.  The overall technological quality level of everyone you met on a BBS in the 1980s was higher than in other places, like a laundromat or a bar.  What such people fail to consider is that modern web boards, while having a higher quotient of trolls and B1FFs, are open to the entire world.  The massive scale of humanity you can encounter on even a tiny niche topic is orders of magnitude higher than it used to be.  The sheer scale of information and interaction you can now achieve is staggering, and completely outweighs the minor niggle of having to deal with 3 or 4 more asshats per day.

Here’s another example:  “Computer games were better back in the old days.”  This is wrong.  The proper thing to say is that “Some computer game genres were better back in the old days.”  I can get behind that.  For example, graphics were so terrible (or non-existent!) at the birth of computer gaming that entire industries sprang up focusing on narrative.  For such genres (mainly adventure games), several times more effort was put into the story than in other genres.  As technology and audiences changed over time, such genres morphed and combined until they no longer resembled their origins.  That doesn’t mean modern games are terrible; it just means that you need to shop around to get what you’re looking for in your entertainment.  Don’t play Uncharted 2 expecting a fantastic story with engaging narrative.  (Dialog, maybe, but not narrative.)  Heck, some genres are genuinely awesome today compared to 30 years ago.  For example, Portal and Portal 2 are technically puzzle games, but the storytelling in them — despite the fact that you never interact directly with another human — is among the very best I’ve ever encountered.

About the only argument that does work involves the complexity of older computers — they were simpler, and you could study them intensely until you could very nearly understand every single circuit of the board, nuance of the video hardware, and opcode of the CPU.  Today, a complete understanding of a computer is no longer possible, which probably explains why Arduino kits and the Raspberry Pi are getting so much attention.


I have no conclusion.  Stop being an old-fogey anti-intellectual technophobe, you ignorant hipster fuck.

Posted in Digital Video, Entertainment, Sociology, Technology, Vintage Computing | 10 Comments »