Oldskooler Ramblings

the unlikely child born of the home computer wars

Archive for the ‘Digital Video’ Category

How to transcode UHD 4K HDR rips for lower bandwidth to an LG C8 OLED via Plex

Posted by Trixter on April 29, 2022

(This post has been edited with new information)

The LG Cx series of OLED TVs have terrible network chipsets in them: they can do more than 100 Mbit/s over 5 GHz Wi-Fi, but only 100 Mbit/s over Ethernet (my experiments with a USB-to-Ethernet adapter were mixed). So what happens if you want to stream to your TV over Ethernet because you can’t use Wi-Fi?

Included below is an ffmpeg script I’ve used to transcode UHD 4K Blu-ray rips down to a bitrate that my LG C8’s Ethernet connection can handle without trouble, which I relied on during a period when I was unable to use the TV over 5 GHz Wi-Fi (since corrected, thankfully). It leverages a modern NVIDIA card to do the transcode without any CPU usage, and preserves the HDR10 information. The end result is worse than the source if you pixel-peep, but from 10 feet away it’s perfectly fine — and it’s certainly better than a 1080p Blu-ray rip of the same material. Here’s the script:

REM This creates a Plex Versions proxy that preserves as much quality
REM as possible without exceeding an LG C8's ethernet 100mbit/s capabilities.
REM Call this batch file from the plex directory containing your main movie.
REM
REM The 120M bufsize represents a 3-second window at the 40M average bitrate,
REM within which a maximum of 60M can be sustained.
REM

mkdir "Plex Versions\LG C8"

for %%a in ("*.*") do ffmpeg -find_stream_info -hwaccel auto -i "%%a" -map 0 -c copy -c:v hevc_nvenc -pix_fmt p010le -tune:v hq -preset:v p7 -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc -spatial_aq:v 1 -temporal_aq:v 1 -b_ref_mode middle -profile:v main10 -tier:v high -b:v 40M -maxrate:v 60M -bufsize:v 120M "Plex Versions\LG C8\%%~na.mkv"
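For anyone adapting the script to a different bandwidth ceiling, the relationship between the three rate-control numbers is simple: the average bitrate is what you want to sustain long-term, maxrate is the short-term cap, and bufsize is a few seconds’ worth of the average bitrate. Here is a minimal sketch of that arithmetic in Python; the 40M/60M/3-second figures are the ones from the batch file above, and the function is just for illustration:

# Minimal sketch of the rate-control arithmetic used in the batch file above.
# avg_mbit/max_mbit/window_seconds are the values from the script; substitute
# your own if your bandwidth ceiling is different.
def rate_control(avg_mbit=40, max_mbit=60, window_seconds=3):
    bufsize = avg_mbit * window_seconds   # VBV buffer: an N-second window at the average rate
    return f"-b:v {avg_mbit}M -maxrate:v {max_mbit}M -bufsize:v {bufsize}M"

print(rate_control())   # -b:v 40M -maxrate:v 60M -bufsize:v 120M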

Posted in Digital Video, Entertainment, Technology | Leave a Comment »

It’s been fun

Posted by Trixter on December 26, 2018

My vote for the most subtle-yet-cruel entry in the category of brain mis-wirings is Impostor Syndrome. (It is also a first-world problem; apologies if this offends you.) I have it fairly bad, and it’s gotten worse as I age. It has prevented me from starting projects, finishing projects, and having normal levels of self-confidence. I am, thankfully, able to praise and mentor other people as necessary, but I cannot provide that for myself. Combine this with yearly instances of FOMO and you end up with someone who goes through waves of trying to join the party, only to feel sheepish once they get there.

This must end. Not in the manner in which my friends and fans might have hoped, but it simply has to end if I’m going to be available for the people who matter (including, most importantly, myself). So, it ends today. It ends with a list of things I’ve achieved in my life by my own skill, intelligence, and volition. I’m granting myself permission to be proud of everything on this list.

And, most importantly:

  • Met my wife, became a better person for her, and started a family

That is truly my greatest accomplishment. It has directly touched hundreds of lives, contributed something positive to the world, and will outlive me by many decades, if not centuries.

But this person is going away, and this is the third-last post I will be writing on this blog.

Posted in Demoscene, Digital Video, Gaming, MindCandy, MobyGames, Programming, Vintage Computing | 3 Comments »

How to make better YouTube videos

Posted by Trixter on July 4, 2017

(updated 6/22/2020)

I’ve been dabbling with making content-specific videos for YouTube for many years.  Some of my videos have gotten a respectable amount of views and are full of nice comments, while other well-meaning content has forever fallen into the unwatched abyss.  It’s been fun, but I have been giving thought to producing slightly more polished and professional work to see if I can attract a larger audience and get them interested in retrocomputing and personal computing history.

It is easy to get carried away with investing a lot of time and money into making better videos, so I felt some research was warranted before barreling ahead.  I reached out to a few friends and acquaintances, some with podcasting or YouTube careers (ie. they earn their living making media) for some advice.  I also scoured several “How I make my YouTube content” videos from a few channels that make content similar to what I want to produce.  The advice and research was so helpful that I felt it could benefit anyone else in a similar position, so I’ve reproduced the best tips here.

Before reading on, be aware that these are all tips for you to make better videos that you are happier with.  This is not the same thing as making videos that get more views or are more easily monetized (which usually involves pressure to make videos you are unhappy with).

Don’t be afraid to write a script

It may seem daunting to watch some of your favorite YouTube presenters and see them rattling off perfect copy directly to the camera, and wonder how you could ever get that polished.  Some of them really are good at improvising in front of a camera, but it may surprise you to learn that some of them are actually reading from a teleprompter.  This isn’t a sign of weakness, but rather a way to speed everything up while ensuring accuracy: You can put notes in a script for any cutaways or graphics you want to use, which can help during video editing.  It also ensures you won’t forget anything you want to say or stumble over your words.

On a related note:

Don’t be afraid to put yourself in front of the camera

The only drawback to using a script is that you may find yourself with something you really want to say, but have no graphics or cutaways to illustrate what you’re presenting.  David Murray (aka The 8-Bit Guy) gave me the helpful advice that he uses himself talking on-camera as a fallback when he needs to fill video with something he lacks assets for.  (What a brilliant idea!)

There is another benefit to putting yourself on camera:  It may surprise you to learn that some of your potential audience is actually more interested in you than the material you’re presenting.  Content is king, but what gives a YouTube channel a personal feel is you, a unique person, with a unique viewpoint.  There is a portion of your audience that wants community more than information; they are already familiar with your video’s basic content, but they want to hear other opinions and viewpoints.  It reminds me of Douglas Adams’ quote about nerds:

“…a nerd is a person who uses the telephone to talk to other people about telephones. And a computer nerd therefore is somebody who uses a computer in order to use a computer.”

Nerds want to commune with other nerds, so no matter how awkward you feel, see if putting yourself in front of the camera feels right.  If you can show off who you are at the same time you present content, you win twice.

If you are worried about remembering what you want to say, or being awkward on camera, a cheap teleprompter can help reduce performance anxiety.

Background music is a quick way to add polish

If you think your videos feel “stale”, a quick way to add some production value is adding background music, as long as it isn’t distracting and doesn’t make the narration/talent hard to hear.  That’s obvious, but you can’t use just any music you like, because it might be copyrighted and/or not licensed for YouTube videos.  Luckily, there are repositories and catalogs of music you can use that are either public domain, Creative Commons-licensed, or otherwise free for non-commercial use:

YouTube Audio Library: The most obvious choice, and a good starting point.  Everything there can be used even if you monetize your videos, although a short credit/text in your video’s description may be required.  I like starting here because they allow filtering by duration.

Epidemic Sound: A very generous license (as of this writing, $15 a month) and even if you cancel the service your previous videos are still licensed.

Free Music Archive: A searchable catalog of tons of music with various licenses.  However, be sure to read the fine print, as not all music on the site is licensed for video.

Artlist.io: A very cheap option; a yearly subscription gets you access to their entire library, and you can download as many tracks as you can use. If your subscription lapses, the music you’ve already used is still licensed.

uppbeat.io is the free/freemium version of Music Vine, and is worth checking out if you don’t mind getting free music in exchange for putting the music credits in your video’s description.

Bensound: Filterable, and free for YouTube use as long as credit is given.

There are many others, as well as multiple resources for finding them, but the above should get you started.

Poor audio recording is WAY worse than poor video

Most people who watch YouTube are forgiving of poorly-created video because of cell phones:  Almost everyone has shot a shaky or poorly-lit cell phone video, and for most people, that kind of quality is “good enough” for getting the point across.  Humans are pretty good at filtering the signal from the noise when it comes to visual content.  Poor audio, however, is more jarring and requires more concentration to listen to.  Unlike bad video, poor audio is much more difficult (or impossible) to fix later in post-production.  If you recorded outside and wind noise completely drowned your voice out, or if you recorded inside in an empty or harsh room and your voice is echoing, there’s no way to fix that.  The only solution is to record decent audio in the first place.  That means:

  • Use the right tool for the right job:  Don’t use an omnidirectional microphone for recording just your voice, but instead use a cardioid mic for close vocal sources, or a shotgun mic for farther-away vocal sources.
  • Don’t record in a harsh room (bare walls, no carpet, etc.) no matter what the situation is.  If you have no choice, hang heavy blankets on the walls to try to stop sound from echoing.

If recording voiceover and podcasts, Jeremy Williams of tested.com swears by this combination:

  • Audio-Technica AT2020USB Cardioid Condenser USB Microphone: You don’t need a functional sound card to use this; it is powered by and transmits data over a single USB cable.  (Any cardioid condenser mic is acceptable, if you don’t want to use USB, but this specific model comes highly recommended.)
  • An AT2020 Foam Windscreen Pop Filter: To protect against plosives from ruining your audio.  Most windscreens and pop filters are cheap, so there’s no reason to skimp on this.
  • RODE PSA1 Swivel Mount Studio Microphone Boom Arm: Good voiceover means being able to record comfortably and consistently while reading copy, and a boom arm facilitates this.  It also makes it easy to store the mic out of the way when not in use.

Some people are wary of getting a dedicated microphone that is only good for picking up voice, and want something more versatile.  A lavaliere microphone of sufficient quality and placement can make a big difference in recording your voice if you are in front of the camera (or even off-camera) and lets you move around while you present your content.  You may also be able to use a field recorder, such as the Zoom H4n, for multiple scenarios.  Corey from My Life In Gaming uses a Zoom H4n for his narration, and it sounds very good.  And because it’s a field recorder, he can record his voice as well as two other audio tracks and mix them in the recorder to save time, or store them individually for flexibility in the video editor.

Good lighting can drastically improve any shooting situation

All cameras record light bouncing off of objects.  The more light that hits a digital camera sensor, the less work the camera has to do.  To illustrate this for yourself, shoot some video outside on a sunny day, then shoot some video inside at night with all the lights on.  Even though your electric lights are bright enough for you to see comfortably at night, you’ll notice that the inside footage will have more noise and grain than the footage shot outdoors.  This isn’t limited to high-end cameras; it’s a general principle that works for any camera, including the camera on your cell phone.  More light — as long as it isn’t a harsh spotlight or casting visible shadows — makes all video footage better.

The best compromise to getting better light in your shots is to use at least two bright photography/videography lights with some softboxes to diffuse the light so it doesn’t cast harsh shadows.  Clint of Lazy Game Reviews was kind enough to let me know what he currently uses, and also used previously when his budget was tighter:

  • For those on a budget, Cowboy Studio makes some affordable lighting kits appropriate for video production.  They’re on the cheap side, but they perform admirably.
  • Clint currently uses LED panels.  These are more expensive, but give much more flexibility in how you can use them:  Some panels can run off of batteries, some have adjustable color temperature, and overall they generate less heat.

He concluded with the following:

I typically use at least three lights when I’m filming. Usually 2 LED panels in front/side and then something overhead, like another LED light or even just the light in my room.

Practical advice.

Use manual focus and tripods wherever possible (even cell phones)

Autofocus isn’t perfect; it can have a hard time tracking a moving subject, or “refocus” randomly for seemingly no reason when both the camera and subject are completely still.  If you are shooting a stationary (or mostly-stationary) subject, use a tripod, and use manual focus on your camera.  You’ll get consistent results every time.  Amazon makes a cheap, perfectly functional tripod that is great bang for the buck.

“But I shoot everything with my cell phone!”, you cry?  Not to worry, there are cheap cell-phone tripod adapters available from just about everywhere.  And any smartphone made in the last 3 years allows you to fix the focus while shooting, usually by tapping or long-pressing the area of the shot you want to keep focus-locked.

If you have more money than skill, shoot in 4k

If you have a camera or cell phone capable of 4k recording, try shooting everything in 4k even if you are making a 1080p video.  Why?  Because the extra resolution of 4k allows you to fix all sorts of errors in post-production.  Didn’t zoom in enough?  Is your subject framed a little left of center?  Was the camera rotated slightly on the tripod?  Forgot to film a quick close-up shot?  You can fix all of this in the video editor, and even though the 4k footage is manipulated, it still has more resolution than your 1080p target as long as you don’t zoom past a 1:1 ratio (ie. 100%) of the original footage.
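To put a number on that headroom: UHD (3840x2160, which is what most cameras and phones record as “4k”) is exactly twice the linear resolution of 1080p, so you can reframe up to 2:1 before dropping below a 1:1 pixel ratio.  A quick sanity check in Python, with a hypothetical 1.3x punch-in as the example:

# How far you can crop/zoom UHD footage on a 1080p timeline before dropping
# below a 1:1 pixel ratio.  The 1.3x "punch-in" is a hypothetical example.
source = (3840, 2160)   # UHD acquisition resolution
target = (1920, 1080)   # delivery timeline resolution
max_zoom = min(source[0] / target[0], source[1] / target[1])   # 2.0

zoom = 1.3                         # example reframe applied in the editor
print(max_zoom, zoom <= max_zoom)  # 2.0 True -- still more than one source pixel per delivered pixel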

Putting it all together

Armed with the above advice, I decided to revisit making videos, and came up with this on a whim:

While the result is far from optimal, it represents a quality baseline that I am happy with and that I feel I can steadily improve on.  Here’s how the video was produced, from complete start to finish:

  1. Thought about what I was going to do and how I could make sure I could get video and photos while doing it: 20 minutes
  2. Performed the activity, gathering media along the way: 6 hours (much of it spent waiting for the chemical process to happen)
  3. Wrote a script that contained everything I wanted to say in the narration: 90 minutes
  4. Recorded the narration: 60 minutes (I needed several takes before I felt I sounded like I wanted to, which was a balance between formal, informal, and natural)
  5. Imported all assets into a video editor, laid down the audio track, cut video to the narration, researched license-free images from the web to fill in the gaps for areas I didn’t have original footage for, and researched license-free music to use as a background audio bed: 4 hours
  6. Export + encode + upload the result: 1 hour

So yes, it took ~14 hours to produce a 5-minute video, but that was because I was inexperienced, and I also could have cut the time down by doing some production tasks while I was waiting for the chemical reaction to finish, like researching the background audio beds, writing parts of the narration, or tracking down graphics I needed.  Before you think that’s crazy, a lot of top-notch YouTube content does indeed take 10+ hours to produce a 10-minute video, even when people are experienced.  It all depends on whether or not you require a script to stay focused, or whether you are capable of winging it.

The most important advice multiple people gave me was this:  Make sure you’re having fun.  If it stops being fun, then you’ve turned your hobby into a depressing chore and ruined it, so back off or change direction if you start hating what you’re doing.

For more tips and inspiration, here’s some leisure viewing:

Posted in Digital Video | Leave a Comment »

An informal comparison of intermediate editing codecs for Windows platforms

Posted by Trixter on April 2, 2017

Last year I volunteered to record an entire weekend’s worth of vintage computer talks.  Unsurprisingly, this also involved editing an entire weekend’s worth of vintage computer talks.  All of the footage was 1080p60, and was shot or delivered compressed (a mixture of AVCHD and H.264 MP4).  While this saves space, it is not always fluid to edit with, as compressed codecs arrange frames into groups that are highly dependent on each other.  The end result is that seeking around and cutting highly-compressed footage can feel sluggish even on extremely powerful systems.  My system was beefy for 2012, and can still surprisingly hold its own in 2017:  A 6-core/12-thread Core i7-980X with 24G of RAM, and 4TB of spinning-disk storage made up of 3x2TB disks in a RAID-5 array (capable of delivering up to 700MB/s sustained read speeds at the outer edges of the platter) with a 4th hot spare.  Despite the high specs, editing 1080p60 H.264 footage last year was sluggish to the point where I considered using an intermediate codec.

A quick primer on intermediate codecs

Intermediate codecs are used when you want to transcode your footage to something that is much easier for the computer to decompress, making it much faster to edit with.  There are two basic types, lossless and lossy:

Lossless codecs exactly represent each original pixel in the source, and can be used interchangeably with the original footage through any number of processing or compression passes.  (Think of them as running the footage through PKZIP or 7-Zip, but with lighter-weight algorithms geared toward decompression speed.)

Lossy codecs throw more information away, but do so in a manner that looks the same to human eyes.  They don’t match up with the original source pixel for pixel, but most people wouldn’t be able to tell the difference.  The resulting file sizes are much smaller than lossless codecs, but because some information is thrown away during the compression phase, you have to be careful running such footage through multiple processing or compression generations as the image could degrade unacceptably.  (For the old-timers: This is like what happens when you make a copy of a copy of a copy of a videotape: The end result is noisy, washed out, and barely watchable.)

Compressing into an intermediate codec takes time, so it is usually done during the ingestion process (such as with DaVinci Resolve or Adobe Prelude), or in a batch process overnight, or both.
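As a concrete illustration of the “batch process overnight” idea (a sketch only, not the actual commands or folder layout I used): ffmpeg can encode several of the codecs tested below, UT Video for example, so a directory of camera files can be queued up and transcoded unattended:

# Sketch of an overnight batch transcode to a lossless 4:2:2 intermediate
# (UT Video) with ffmpeg.  Folder names and the *.mp4 pattern are examples only.
import pathlib, subprocess

SRC = pathlib.Path("ingest")        # compressed camera files (AVCHD/H.264)
DST = pathlib.Path("intermediate")  # edit-friendly output
DST.mkdir(exist_ok=True)

for clip in sorted(SRC.glob("*.mp4")):
    out = DST / (clip.stem + ".avi")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "utvideo", "-pix_fmt", "yuv422p",   # lossless YUV 4:2:2
        "-c:a", "pcm_s16le",                        # uncompressed PCM audio
        str(out),
    ], check=True)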

Our test case

I decided at the time to pick a 10-minute piece of 1080p60 footage and transcode it to every intermediate codec that met the following criteria:

  1. Worked in Windows without Quicktime (Quicktime for Windows was discontinued in 2016 due to security flaws).  This eliminated ProRes and Avid’s DNxHD.
  2. Over 5 years old, compatible with Windows 7 or above, and relatively battle-tested
  3. Evidence of recommendation online by video content producers (ie. something more professional than, say, people doing anime music videos)
  4. Worked in Premiere Pro without crashing or odd behavior
  5. Didn’t demand more than 200MB/s out of the I/O subsystem to maintain 60fps playback (this was more to keep filesizes down than anything else)

Additionally, since my source footage was not high enough quality to require HDR-level processing, I was realistic and considered codecs that had a YUV 4:2:2 configuration (and eliminated codecs that could only do 4:2:0).  This led to the following codec list:

To provide some interesting comparisons, I also included the following:

  • YUV 4:2:2 8-bit uncompressed, to see what was possible given a fast I/O subsystem
  • Cinepak (1991), possibly the fastest useful single-core decompression codec ever made
  • HuffYUV (2000), the first popular free lossless codec for Windows

After my 10-minute 1080p60 sample was compressed into these codecs, I then defragmented my RAID array using mydefrag with a script that put all of the codec comparison files at the fastest area of the array (the beginning).  I then measured decompression speed using VirtualDub’s File->Run Video Analysis Pass feature, then measured I/O and CPU usage using Windows 7’s Resource Monitor.

Before reading on, I want to stress that the above criteria were the only factors considered.  There are many other things to consider if you have different production targets or workflow needs, such as 10-bit or 12-bit color depth, or iterative recompression stability.  These vary from codec to codec and are not discussed here.

The results

Here are the raw results (you might need a non-mobile browser to see this table correctly):

| Codec | Size in bytes | Percent size of uncompressed | Data loss | Visual quality loss | Decompression frames per second | MB read per second | Decompression %CPU usage (6-core 12-thread i7-980X) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| UTCodec 4:2:2 8-bit | 35,859,227,004 | 24% | lossless | 1 | 164 | 367 | 39 |
| Cinepak | 8,951,457,910 | 6% | lossy | 3 | 129 | 57 | 9 |
| Cineform (Medium HD) | 11,872,682,448 | 8% | lossy | 2 | 64 | 51 | 12 |
| Cineform (High HD) | 13,293,520,304 | 9% | lossy | 2 | 62 | 57 | 12 |
| Newtek SpeedHQ 4:2:2 | 9,763,299,366 | 7% | lossy | 2 | 55 | 24 | 8 |
| Blackmagic Design MJPEG | 10,146,513,708 | 7% | lossy | 2 | 38 | 28 | 16 |
| YUV 4:2:2 8-bit uncompressed | 149,270,431,900 | 100% | lossless | 1 | 85 | 704 | 4 |
| Lagarith | 28,375,743,340 | 19% | lossless | 1 | 29 | 41 | 11 |
| Grass Valley HQX Lossless | 35,888,010,830 | 24% | lossless | 1 | 33 | 75 | 9 |
| Grass Valley HQX Offline | 3,988,818,666 | 3% | lossy | 3 | 90 | 19 | 8 |
| Grass Valley HQX Online Standard | 7,854,558,164 | 5% | lossy | 2 | 74 | 38 | 8 |
| Grass Valley HQX Online Fine | 9,833,655,952 | 7% | lossy | 2 | 69 | 45 | 8 |
| Grass Valley HQX Online Superfine | 27,810,744,334 | 19% | lossy | 2 | 48 | 86 | 9 |
| HuffYUV (FFmpeg variant, left, adaptive huffman) | 35,918,114,616 | 24% | lossless | 1 | 44 | 99 | 12 |

From this data, I eliminated codecs that were not capable of playing back 1080p60 footage at the bare minimum requirement of 60 frames per second.  I also eliminated Cinepak (which was only included for teh lolz) and Grass Valley HQX Offline, since their visual quality was unacceptable and I did not want to edit with proxies (I was not in the field on a laptop, but was on my actual editing desktop system).  I also eliminated YUV 4:2:2 uncompressed, because the storage requirements of transcoding everything to an uncompressed format were not practical.

From the remaining data, I concluded the following:

Playback performance (best to worst)

  1. UTCodec 4:2:2 8-bit
  2. Grass Valley HQX Online Standard
  3. Grass Valley HQX Online Fine
  4. Cineform (Medium HD)
  5. Cineform (High HD)

File size (best to worst)

  1. Grass Valley HQX Online Standard
  2. Grass Valley HQX Online Fine
  3. Cineform (Medium HD)
  4. Cineform (High HD)
  5. UTCodec 4:2:2 8-bit

Conclusion

For over a decade, I assumed that Cineform was the gold standard of intermediates, and was indeed the codec I used whenever I needed an intermediate workflow.  Recently in an online discussion, a colleague who is a professional cameraman and editor extolled the virtues of Grass Valley HQX.  HQX was not in my original comparison, so I added it and re-ran all of these tests in the same hardware configuration.  I was surprised to see that Grass Valley HQX was a hair better than Cineform in all areas.  I will definitely give it a spin for my next project.

To be absolutely fair, Cineform is (still) no slouch, and has served me very well over the years.  You could do much worse than either Cineform or Grass Valley HQX; either of them should suit your needs for an intermediate codec on Windows.

If you have a multi-core processor and lots of free disk storage, UT Codec remains the fastest lossless codec available on modern hardware.  However, evaluate your project needs before committing to it, as not every project needs 100% lossless compression.

Finally, if you only work with 24p/30p material, just about any of these codecs will serve you well.

Update: I’ve recently found that Cineform performs much faster if you can handle the quality the “Medium HD” preset provides, which can exceed the playback rate of GV HQX.  But they’re both good to use.

Update 2:  There is a wonderful, filterable list of intermediate codecs made available by David Kong.

Update 3: Cineform was open-sourced (woohoo!) and Virtualdub-filtermod already supports creating it using the official SDK.

Posted in Digital Video | Tagged: | Leave a Comment »

BASIC Memories

Posted by Trixter on March 15, 2017

I contributed a few segments to The 8-Bit Guy’s retrospective on BASIC that covered my thoughts and memories on growing up with BASIC during the home computer revolution.

Making quality video is difficult!  I had to perform several different takes before I was comfortable, and I screwed up the focus, and I needed better lighting.  And I’m fat.  But overall it went ok.  Except for being fat.

I envy those who can do YouTube videos full time, since it seems like a lot of fun.  I’ve collected over a hundred topic ideas for videos I’d like to do someday, but if I’m being realistic with myself, I won’t have the time.

Posted in Digital Video, Entertainment, Programming, Vintage Computing | Leave a Comment »

Sony Xperia Z3v impressions and workarounds

Posted by Trixter on January 10, 2015

The Xperia Z3v is a very odd hybrid of a phone that is being marketed as a flagship for the current generation of smartphones; it was released in October 2014 and is a Verizon exclusive (other carriers have the older Z2 or Z3).  There is a nearly criminal lack of coverage in the media for this phone, so I thought I’d rectify that with my thoughts on the phone after two months of use.  First, some background:

We switched the entire family over from Sprint to Verizon (more expensive, but you get what you pay for) and part of the terms of the switch was that we all get new phones.  As I was a long-time Samsung customer (Epic 4G, Galaxy S4) I was planning on going with the S5, but wanted a few things the S5 couldn’t give me, like stereo front-facing speakers.  After reviewing everything Verizon offered that met my requirements, I decided to try the Xperia Z3v under the agreement that I could return it after 14 days if I wanted to switch to another phone.  Because the phone is best-in-class in a few areas, I’ve decided to keep it, accepting that a few aspects of the phone need workarounds.

The Z3v is a combination of the Z2 (larger, thicker body; slower CPU) and the Z3 (camera, screen).  It’s a frankenphone that only Verizon offers.  Let’s start by describing the basic features of the phone that drew me to it:

  • 20 megapixel camera sensor
  • 1080@60p and 4K@30 video recording
  • IP65/68 rating (dustproof, waterproof up to 5 feet for up to 30 minutes)
  • Front-facing stereo speakers
  • Dedicated physical camera shutter button
  • Wireless charging

(It has more features than these obviously, like remote PS4 playing, but these are the only ones that interested me.)  Sounds awesome right?  Well, it mostly is.  Based on my experience, here’s what “mostly awesome” means:

Pros

The camera.  As a point’n’shoot, the Z3v is one of the best I’ve ever had.  The 20MP sensor, coupled with firmware borrowed from the Sony Cyber-shot line of cameras, means that it shoots great automatic pictures.  In default “auto” mode, which is what you get if you press the camera shutter button to wake up the phone and go straight to the camera app, it uses the 20MP sensor to oversample the scene and provide both free HDR shots and stabilization.  It is smart enough to turn off stabilization if it notices the camera is on a tripod, and tells you it is doing so with a small status indicator.  Actually, it’s smart enough to do all sorts of things that I won’t bother mentioning here — just know that the Z3v is good enough that I don’t carry a dedicated camera any more.  Is it a replacement for a DSLR?  No, of course not.  But it is definitely a replacement for a sub-$300 point’n’shoot.  The shutter button even performs a press-half-to-focus-lock, press-full-to-shoot function.

4k video.  Being able to shoot this is not terribly practical, but it does work, and you do see some additional fine detail that you don’t see in 1080p mode.  4K mode is useful if you can’t decide on the framing of a 1080p shot and you want the ability to crop without losing detail.  It works best outdoors; there’s no point in using it in low light.

It’s waterproof.  Will I be shooting underwater?  No.  Will I be secretly grateful that my accidental fumble of the phone into the toilet won’t completely destroy it?  Absolutely.

Active noise-canceling for free.  This feature isn’t advertised heavily, but if you purchase the custom “Sony Digital Noise Cancelling Headset (MDR-NC31EM)” earbuds for $45 and stick them in the phone, the Z3v will 1. recognize they are in, and 2. enable active noise-cancelling.  This works because the earbuds have little microphones on them that sample the environment, which the phone then generates the inverse waveform for in certain bands and mixes that into the output.  While the earbuds aren’t the most comfortable things to have in for more than an hour, the feature does work well — better than the noise-isolating earbuds I’ve used for a decade — and I’m thankful to have them on my commute.  I haven’t noticed any distortion, but I listen to mostly spoken material.

Wireless charging.  With a cheap Qi charger, this simply works, which is great because the USB/charging port is normally behind a waterproof door you have to keep opening and closing when connecting via that port.

Battery life.  The battery life on this phone is simply amazing given what the phone is capable of.  I can go two days on a single charge, and that includes 3-4 hours of screen-on time per day.  If that’s not good enough for you, there are two classes of power-saving modes with multiple levels of customization, the latter of which shuts down everything except calling and texting and can stretch a single charge up to seven days.  Geezus.

Sounds too good to be true?  Well…

Cons

The 20MP mode is disappointing.  The camera normally shoots everything at 8MP.  If you want the true resolution of the sensor, you can enable 20MP in “manual mode”.  It works, and you have some customization over the shot, but it’s disappointing because the sensor and lens are small enough that there is no appreciable extra detail captured in the 20MP shot.  I’ve done comparisons with the phone on a tripod in a lot of light and there was just no advantage: I scaled the 20MP shot down to 8MP in Photoshop and it didn’t look any better; I did a 100% crop of a few locations in both images and the 20MP shot didn’t have any more detail, just larger, fuzzier sections.  So it’s sort-of useless, and I don’t use it.

The phone is slippery.  The front and back are glass, and the edges are a rougher plastic material.  The edges aren’t enough for me to keep a good grip on the phone at all times.

The native camera apps offer little customization.  If you want to shoot video under odd circumstances, or use the camera on a tripod to take nice stills, the native camera app — even in manual mode —  lacks a few things that make it difficult.  There’s no true manual focus or manual white balance.  You can pick focus “types” and white balance “types” but the focus and exposure are always auto-adjusting.  And the 4K video mode offers no customization whatsoever; it’s 100% auto-adjust.

60p isn’t really 60p.  For some inexplicable reason, the camera shoots video slightly slower than 59.94 or 60fps which are the only framerates considered 60p broadcast standard.  Out of several tests, the resulting videos had variable framerates, all nonstandard, like 59.32 and 58.86.  This leads to slightly jerky video over longer lengths of time, and can cause issues editing in post.  One video I shot and uploaded directly to YouTube without editing shows up as “1080p59”.  (The 30p video modes were all 29.97 like they’re supposed to be, so that’s good at least.)

4k video mode overheats the phone.  Seriously.  The first time you use it, you’ll get a pop-up saying that if the camera overheats, your video will be stopped, saved, and then the camera will shut down to cool off.  Sure enough, it does all that after about 5-7 minutes of 4K video shooting.  This, coupled with the 60p framerate issue noted above, seems very bubblegum-and-shoestring to me.  But, good luck getting those fixed, because:

Frankenphone = orphan.  The Z3v was the result of a partnership between Verizon and Sony; it is a hybrid of the Z2 and Z3.  As a result, neither company will fully support the phone.  I’ve tried to report the firmware bugs noted above to both companies, and both companies tell me to contact the other company.  Sony tells me that Verizon supports the phone exclusively, and Verizon tells me that any firmware bugs in the camera are the responsibility of the manufacturer.  Which really sucks, because:

Playing with the alarm volume can lock up the phone.  If you adjust the volume of an individual alarm a few times, then this hilarious thing happens: When the alarm time comes, it does not make noise but instead locks up the phone.  You have to mash and/or hold down the power button to get out of this state until the phone eventually reboots.  I was late to work one day because of this.  It would be nice to be able to report this bug to someone, but oh well.

The front-facing speakers aren’t as loud or clear as they could be.  My son used to have an HTC One M7 and his audio was louder and clearer than on the Z3v despite the hardware being almost 2 years older.  It’s not bad; just don’t assume it’s a replacement for good headphones.

The stock YouTube app doesn’t allow pre-downloading.  This feature was removed by YouTube at some point, angering hundreds of thousands of commuters, myself included.  I used the stock YouTube app on my Galaxy S4 for a full year specifically so I could pre-download videos in my “Watch Later” list to view on the train; the Z3v’s app is fully updated and doesn’t allow caching of videos.

These were initially very big disappointments and I almost returned the phone because of them.  After some research, here’s how I mitigated them:

Workarounds

Slippery: The Verizon store had a $20 cheap flexible plastic case that I put on it just to make it less slippery until I found something else.  I haven’t found anything else, so it’s still on there.  I tried carbon fiber stickers; while they looked nice, all they did was make it more slippery.  Trying to search Amazon or other stores for “Xperia Z3v case” doesn’t work well because you keep getting results for the Z2 or Z3, both of which have different dimensions than the Z3v.

Lack of manual camera options:  I found that OpenCamera works with my phone and supports locking focus, white balance, and exposure.  This allows me to shoot videos in very odd conditions, such as a reflective glass computer CRT that emits colored light.  It doesn’t support the 60p or 4k modes of the phone because those are manufacturer-specific and have no exposed API.

Odd 60p videos:  The following AviSynth script “fixes” 60p videos so that they can be edited in post-production without causing problems:

A = FFAudioSource("MOV_0001.mp4")
V = FFVideoSource("MOV_0001.mp4")
AudioDub(V, A)

# Force compliant framerate (will adjust audio rate to match)
AssumeFPS("ntsc_double", true)

# Resample adjusted audio rate back to 48KHz
SSRC(48000) # if this crashes, use ResampleAudio(48000) instead

Inability to pre-download YouTube videos:  TubeMate now provides that function.  It’s clunky and buggy, but it works well enough to keep my commutes from becoming too boring.

Alarm volume adjustments lock up phone:  Adjust the alarm volume using the Settings->Alarm path instead.  Whatever you set it to, all new alarms will inherit, and you can adjust that all you like without consequences.

Conclusion

I think it’s a great phone if the above Cons don’t affect you and you’re looking to join Verizon and get a new phone before April 2015.  (After April, I believe the new Samsung is coming out, and it remains to be seen how that compares.)

Most people will use the phone on full auto, and it is very, very good at that.  Just don’t expect manual fine-tuning of a few things.

Posted in Digital Video, Technology | 1 Comment »

Cyberpunx

Posted by Trixter on October 5, 2014

October is “National Cyber Security Awareness Month”, whatever the hell that means.  In recognition of this dubious designation, I’ve made an HD remaster of the 1990 documentary Cyberpunk available.  Consisting of interviews with William Gibson, Jaron Lanier, Timothy Leary, Vernon Reid (from Living Colour), and Michael Synergy, and briefly featuring a few industrial bands such as Front 242, Manufacture, and Front Line Assembly, the documentary provides a look at what the cyberpunk movement was circa 1990.  Subjects such as cyber-terrorism, cybernetic implants/enhancement, virtual reality/telepresence, and general counterculture rebellion against “The System” are touched upon.  Inevitable comparisons with Akira are made.

Here Be Dragons

While the producer and director did an admirable job making the source material interesting and presentable to the public, there are a lot of flaws with the documentary.  Some are minor and can be overlooked, such as the 1990s trend of inserting faux computer graphic overlays (to try to make the material more similar to the world Gibson painted in Neuromancer).  Many of the problems are with pacing; there are entire sections that focus on a particular subject for too long, sometimes without impact.  One section in particular goes so long that different digital effects start to fade in and out after a few minutes, almost as if the editor was bored and resorted to doing something with the image to keep the viewer’s interest.

There are also some very misrepresented facts and predictions, but it’s not really fair to criticize a documentary for failing to predict the future correctly.  That being said, there are some real howlers in here, from the supposed power hackers wield(ed) against governments, to the silly, amateur computer graphics that obscure hackers’ identities, to the heavily hinted-at concept that Neuromancer itself was responsible for shaping technology and history.  The most egregious is equating hacker with cracker (although, to be fair, that’s happened multiple times before and since).

A special mention must be given to Michael Synergy, who perfectly embodies the huckster who started believing his own bullshit.  Some of his claims in the documentary are so utterly, patently ridiculous, so incredibly pretentious, that it takes a great deal of willpower not to scream at him when he’s talking (especially when he mispronounces the word “genre”).  Were I him, I would have wanted this stage in my life to disappear, and it seems as if that wish has come true: His moniker disappeared with the 1990s.  My personal wild speculation is that once the real, actual revolution of the web occurred and it was able to finally call him out, he quietly exited stage left.  (Last I heard, he worked for Autodesk in the mid-1990s, was going by his birth name again, living in Hawaii, working in IT; if anyone has a real update, I would love to know what actually happened to him.)

Most depressingly, there is a real missed opportunity with how Jaron Lanier’s involvement was portrayed.  In the documentary, he comes across as a stoner who only mentions VR, which is a shame because — then and now — he’s the most relevant and accurate representation of a hacker that the documentary includes.  Of everybody interviewed, Jaron is the only person who is still exploring these concepts and ideas, and more importantly their unintended fallout, which you can read about in his most recent book Who Owns The Future?.  (Even if you don’t buy the book, follow that link and read the Q&A to get a feeling for his concerns.)

Worth watching?

While it may be hard to sit through, the documentary retains glimpses of the innocent, wildly-optimistic, techno-hippie idealism that grew with the rise of personal computing and networking.  For that nostalgia factor alone — the time when the Internet existed but the World-Wide Web did not — it’s worth an hour of your time.  It’s also worth watching to catch which ideas were especially prescient, such as:

  • Whoever holds the most information holds the most power
  • Every device will be interconnected
  • Physical boundaries will not impede meaningful communication
  • People will be individual, mobile, uncensored “broadcast stations” (considering I can post to YouTube from my phone, I’d call this a reality)
  • The “matrix” as a concept and/or allegory for God (later realized almost to the letter in The Matrix movie trilogy)

…and so on.  You could make an interesting drinking game out of catching which ideas succeeded (although you’d get more drunk, quickly, by catching all of the stupid and inaccurate comments).

Cyberpunk: The Documentary is now available at archive.org.  Grab the MPEG-TS file if able; it’s 60p, Blu-ray compliant, and won’t take up too much space in your memory implant.

Posted in Digital Video, Entertainment, Technology | Tagged: , | 1 Comment »

8088 Domination Source and Encoder Released

Posted by Trixter on August 11, 2014

I’ve formally released the source and binaries for the 8088 Domination encoding system under its original working title: XDC (stands for X86 Delta Compiler).  Head on over to x86dc.wordpress.com to browse the github source, grab some example videos, browse the documentation, or watch a screencast where I encode a video from farm to table in under 30 minutes.

Now you too can impress your friends with your own custom videos that run on a 4.77 MHz CPU with 16K of video memory!

Posted in Demoscene, Digital Video, Programming, Vintage Computing | Leave a Comment »

Out, damned bug! out, I say!

Posted by Trixter on July 29, 2014

The response to 8088 Domination was warm, wonderful, and widespread. To everyone who dropped me a note via twitter, email, or youtube — and there were thousands of you — I want to thank you for the kind and encouraging words.

Even before I finished the design, I knew that I was going to release all of the source, so that others could make their own videos for their own vintage systems. I was careful to design the system to be easy to understand, so that it could be easy to port to other languages or extend with new features. I have a lot of comments in the code, some fairly verbose, so that there is no confusion why something is designed a particular way, or why one operation happens before another. I want this to be representative of the quality of code I usually write.

So, why am I overdue in releasing the code? Bugs! Or, more accurately, edge cases. To ensure that the encoder works properly in the real world, I’ve been testing it with vastly different sources: Animations, music videos, cartoons, even a full-length movie. And almost every time, I encounter a new edge case that needs fixing. Oh, don’t worry — The code isn’t full of special cases or bubblegum-and-shoestring workarounds. It just takes time to address each issue that crops up, and determine if it’s a true bug that needs fixing, or an issue that can be safely ignored.

“Ignore issues in code? Impossabru!” Actually, here’s an example of what I mean: I discovered a few weeks ago that I could improve the efficiency of the output a few percent by re-running some optimization phases before final compilation. However, doing this will sometimes create a small “empty” 1-byte delta that actually isn’t a delta (ie. the location contains the same data in the previous and next video frames). It’s a bug, but is it worth fixing? I could spend days rewriting the optimization phase into a gigantic, monolithic procedure where all parts coordinate… or I could throw these 1-byte non-changes away at the end of the existing optimization phase. You can guess which path I chose.

Some bugs are indeed bugs, and they must be fixed before I put my name on the code. For example, the bug that forced the encoding loop into a deadlock, or the bug that randomly produces black flashes in the output (still working on this one), or the bug whose generated code forgot to set a single register which prevented videos from being played without a soundcard present.

So, I hope everyone understands why the code release is late. Well… one of the reasons it is late. The other reason is that making your own videos will require some documentation (some user-directed preprocessing of the source video is necessary — sorry!), and a video showing the steps involved couldn’t hurt either, so that will require a few days by itself.

While you’re waiting, why not help me decide what movie to convert and release with the final distribution? In keeping with the spirit of the time period, I’m going to convert an entire full-length movie using the system, and ensure that it will fit onto a single CD-ROM so that users without homebrew XTIDE controllers can hook up a SCSI CD-ROM drive and enjoy the flick (ironically). The de facto example for this kind of thing is Star Wars, although I’m partial to TRON, as it was released just after the IBM PC itself and has its own share of iconic sequences. But, I’ve already done TRON to death, so what would you like to see? Vote in this handy poll, and if the movie you want to see isn’t there, please write your choice in the comments.

Posted in Demoscene, Digital Video, Entertainment, Programming, Vintage Computing | 6 Comments »

8088 Domination Post-Mortem, Conclusion

Posted by Trixter on June 20, 2014

This is the second (and last) part of my write-up on how 8088 Domination was achieved; the first part is here. I reuse some terminology introduced in the first part, so before we continue, it’s worth reviewing some key definitions to avoid confusion:

Delta: An area of bytes that needs to change in screen memory to update the displayed image
Slice: A delta where not all bytes are the same value
Run: A delta where all bytes are the same value

On to the nitty-gritty!

Breaking With Tradition

If you’ve coded video or animation systems in the past, you may have correctly identified what I’m doing as less of a video codec and more of an animation system. Animation systems from the 1980s such as Autodesk Animator or DeluxePaint Animation store and play back deltas by iterating through data that describe what areas of screen memory to change, using codes and chunk types for things like “skip N pixels forward, then change M pixels”, “fill entire screen to solid color N”, and so on. This reduces the size of the file, but requires some decision-making and branching while iterating through the data.

I initially did the same thing, and wrote a fast routine that would iterate through a list of deltas to replay, handling runs using the efficient REP STOSB sequence, and the rest with REP MOVSB. It looked something like this:

Delta format:
0-1: start offset
2:   fill value (used if run; wasted if slice)
3:   length in upper 7 bits, run/slice flag in LSB (if set, it's a run)
4-N: data (if slice)

Decompressed via:
; Prior setup:
; DS:SI=source data
; ES = destination (screen RAM)
; DX = number of deltas to iterate through

@loopit:
    lodsw       ;load offset
    xchg di,ax  ;get destination ready
    lodsw       ;AL = fill value, AH = length + run/slice flag
    shr ah,1    ;shift run/slice flag into carry, leaving length in AH
    mov cl,ah   ;move length into place
    jc @run     ;if flag was set, it's a run
                ;runs are the exception; slices should fall through first
@slice:
    rep movsb   ;copy slice to screen
    ;okay to fall through here since cx=0, rep stosb will do nothing
@run:
    rep stosb   ;replay run to screen (AL already has value)
@continue:
    dec dx
    jnz @loopit

This is optimal 8088 code for this operation, but the idea has two problems. First is a minor annoyance; a byte is wasted storing a fill value even if we aren’t handling a run. But the real problem is that there are two branches (JC and JNZ) for every delta we iterate over in the list. Branches are costly on almost all CPUs, even those as old as the 8088. This was a huge concern for me, as the average new image in my test material was made up of roughly 600 deltas, most of them slices. Some quick math to illustrate why this concerned me:

# of cycles available to us to paint a frame: About 40,000
# of cycles taken up by one untaken (JC) and one taken (JNZ) branch: About 24
# of cycles used by branches to replay 600 deltas: 14,400 (36% of our total)

So, in a typical frame full of changes, more than a third of our available CPU time is wasted handling branches. In a system where we have the same time quantum as 8088 Corruption but are trying to change more data than it did, this was a big step in the wrong direction!

I thought of a few ways to mitigate this cost, such as unrolling the loop, rearranging deltas so that slices and runs are grouped together, and so on. This went on for about an hour before inspiration struck: Why not eliminate the branches altogether?

And just how the hell do you do that? By generating code instead of data. Instead of having the encoder spit out a description of what changes need to be made each frame, we switch to generating x86 opcodes that, when executed, directly implement the changes themselves.

This is the same strategy used to accelerate sprite plotting on many architectures, but when I realized I’d be doing the same thing for the entire screen, I started laughing out loud. What a ludicrous idea! And yet, in practice, you can see that it works.

It’s A Compiler!

The basic structure of a video “code” frame looks like this:

Startup code: Sets ES to point to the screen and DS:SI to point past the end of its own instruction stream, to where the data stream starts
Instruction stream: Instructions that re-point DI to new screen memory destinations and then issue combinations of MOV, MOVSB, STOSB, REP MOVSB, or REP STOSB to change screen memory contents
Cleanup code: A single RETF instruction to return to the caller
Data stream: For (REP) MOVSB, data that gets moved to areas of screen memory

As long as the code is aligned to a DOS 16-bit paragraph boundary, it will execute properly, so the player code enforces alignment of the frame data to paragraph boundaries. Not doing so results in hilarity, as the correct screen memory locations will be changed properly, but with data from the wrong place:

This is supposed to be an anime girl, not digital vomit

(It is, of course, quite possible to rewrite a few values in the code to get it to execute properly wherever it is located, but I didn’t want to perform code fixups realtime at 60hz — the system is already slow, let’s not make it any slower.)

Because the instruction stream adds size and processing time to the video data, it’s important for us to generate optimal code that is as fast as possible without being too large. For example, if you want to set a single byte in an arbitrary location pointed to by ES:, most x86 asm programmers would do it like this:

ES: MOV BYTE PTR [nnnn],val

This is fast and is 5 bytes in size. But if you have your value pointed to by DS:SI, you can also do it like this:

MOV DI,nnnn
MOVSB

This is also 5 bytes (4 opcode, 1 data) but is slightly slower… but because MOVSB advances DI automatically, it can save you from having to do the same thing manually. For a single byte it’s not a win, but what if we have three bytes to change? We can continue to set them directly:

ES: MOV WORD PTR [nnnn],mmmm
ES: MOV BYTE PTR [nnnn],mm

…or do this instead:

MOV DI,nnnn
MOVSW
MOVSB

The latter method is much smaller and slightly faster. (This can go on for a while, but eventually there is a break-even point where switching to REP MOVSB is faster than all other encodings.)

Although I had worked out most optimal combinations for various slice and run durations, in the end I felt it was better to just have the compiler generate every code variation, calculate how many cycles each one took to execute, and pick the fastest one. (I figured it was safer and more future-proof than me trying to hand-optimize generator output.) Calculating cycle counts for the 8088 is almost as easy as it is for 8-bit CPUs; the 8088 has only one core, no cache, no threads, no pipelines, no out-of-order execution… it does have a prefetch queue, but it is only 4 bytes long so it isn’t very effective. The major factor in optimizing 8088 code for speed is minimizing memory accesses, because the CPU takes 4 cycles to read (or write) a byte — any byte, even instruction stream opcodes. So, in most cases, the smallest code usually wins. The only exceptions to this rule are instructions that take an extremely long time, such as MUL/DIV, which can run for over 100 cycles depending on the operands.

Andrew Jenner, a good friend and a better coder than I am, has an excellent rule of thumb for determining 8088 code execution speed: Multiply the size of the opcode and the size of any data touched by that opcode by 4 for an informal cycle count; then, also determine the sum of each instruction’s published cycle count. Whichever number is larger is the more accurate execution time estimate.
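That rule of thumb is easy to mechanize. Here is a rough sketch in Python (purely illustrative; the instruction tuples below are hand-picked examples, not the encoder's actual data structures):

# Rough sketch of the timing rule of thumb described above: take the larger of
# (4 cycles per byte fetched or touched) and the sum of the published cycle counts.
def estimate_cycles(instructions):
    # instructions: list of (opcode_bytes, data_bytes_touched, published_cycles)
    bus_bound = 4 * sum(op + data for op, data, _ in instructions)   # 8088 bus: ~4 cycles/byte
    table_bound = sum(cycles for _, _, cycles in instructions)       # datasheet cycle counts
    return max(bus_bound, table_bound)

# Example: "MOV DI,nnnn" (3 opcode bytes, roughly 4 cycles) followed by "MOVSB"
# (1 opcode byte, 2 memory bytes touched, roughly 18 cycles).
print(estimate_cycles([(3, 0, 4), (1, 2, 18)]))   # 24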

I won’t go over the code generator itself in this write-up because it is very mundane and not terribly exciting; refer to the source code when I release it in a few weeks.

Delta Optimizations

Once I had an idea of the code generation cost, I came up with a couple of ways to reduce that cost by manipulating the delta list before it was processed by the encoder. Less work for the compiler to do meant smaller code/data and faster execution. Delta optimization consists of four phases:

  1. Run Identification and Splitting. Because runs process faster and encode much smaller than slices, it is a huge win to identify any runs hiding inside of slices and split them out into their own delta. This phase also marks any runs it finds as excluded from further processing (“frozen”), as runs are already optimal.
  2. Pixel “Shaving”. Changing only a single byte in screen memory has a very high cost (5 opcode bytes, plus the time they take to execute) so pixel “shaving” looks at each single-byte delta to determine how many pixels are actually changed by the byte. If a particular threshold is not met (ie. “more than one pixel”), the delta is dropped completely. This is a user-configurable option and is off by default.
  3. Delta “Shaving”. Identical to pixel shaving, except entire deltas are dropped if they aren’t large enough. The default threshold is “more than two bytes”; anything smaller is dropped. This is also user-configurable, and also off by default.
  4. Delta Combination. This phase looks for deltas that are spatially close to each other in linear memory and combines them if the end result would encode in fewer bytes. For example, assume we have three 1-byte deltas all spaced one byte apart. Also assume that replaying these three deltas costs 5 bytes each, for a total of 15. Now consider what happens if we combine all three deltas into a single delta spanning the three changed bytes: The number of bytes changed onscreen will grow by 2, but we shed 10 bytes because we only have one delta to set up and replay. It is a net win, so it is always worth it to try to find combination opportunities; a small sketch of this cost check appears after this list. (This is technically an NP-hard problem, and implementing it quickly and stupidly as an exhaustive search greatly slowed down the code. I optimized it by re-sorting the deltas by start offset, so that the search space is localized around the delta(s) being examined. After all the combinations are found, the deltas are re-sorted back into the order that assists the encoding phase, as described earlier in part 1.)
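Here is that cost check in miniature, a sketch only (in Python rather than the encoder's own code), with a flat per-delta overhead standing in for whatever instruction sequence the compiler would actually emit:

# Sketch of the delta-combination decision from phase 4.  A delta is
# (start_offset, length_in_bytes); PER_DELTA_OVERHEAD is an assumed fixed
# setup cost in generated code bytes (the real cost depends on the encoding chosen).
PER_DELTA_OVERHEAD = 4

def replay_cost(deltas):
    # setup overhead per delta, plus one data byte per changed screen byte
    return sum(PER_DELTA_OVERHEAD + length for _, length in deltas)

def maybe_combine(a, b):
    # Return one delta spanning both if that replays in fewer bytes, else None.
    start = min(a[0], b[0])
    length = max(a[0] + a[1], b[0] + b[1]) - start
    merged = (start, length)
    return merged if replay_cost([merged]) < replay_cost([a, b]) else None

# Two 1-byte deltas spaced one byte apart: merging writes one extra screen byte
# but sheds one delta's worth of setup overhead.
print(maybe_combine((0, 1), (2, 1)))   # (0, 3)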

All of these phases reduce size and execution cost. The pixel shaving and delta shaving phases have the added benefit of cleaning up the source video a little; if a pixel is “shimmering” over time due to being right at the error threshold of the dithering process, it will be “stable” with pixel or delta shaving turned on. The drawback to the shaving phases, however, is that the output can leave “trails” onscreen as smaller changes are never fully cleaned up or overwritten. Use with caution.

(While it does not reduce cost by itself, there is also a prep phase that executes before the other phases and performs oversize delta splitting: it takes deltas that are too large to execute within our available cycle pool and breaks them up into smaller deltas. This is always necessary when the entire screen changes, as that produces a delta list consisting of a single delta with a start offset of zero and an end offset at the edge of screen memory. A delta that big is way over both the available byte and cycle limits, so it has to be split into smaller chunks to be replayed over multiple passes.)
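The splitting step itself is simple. A sketch, reusing the illustrative Delta type from the sketch above, with max_bytes standing in for whatever the real byte and cycle budgets allow:

/* Chop one oversized delta into pieces no larger than max_bytes each.
   'out' must have room for all of the resulting pieces; returns how many
   pieces were produced. */
size_t split_delta(Delta big, int max_bytes, Delta *out)
{
    size_t n = 0;
    for (int s = big.start; s <= big.end; s += max_bytes) {
        int e = s + max_bytes - 1;
        out[n].start = s;
        out[n].end   = (e < big.end) ? e : big.end;
        n++;
    }
    return n;
}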

Playing With Variable Packets

The player for the 8088 Domination FMV data is very similar to 8088 Corruption: By controlling the size of the audio data the soundcard interrupt handles, we can get the interrupt to fire at our desired video framerate and use the opportunity to update the video as well. The interrupt handler pulls data out of a queue and updates the screen at the same time it updates the audio. While the interrupt is firing in the background, a foreground loop is constantly reading from disk and putting data into a queue. I cover this in more detail in 8088 Corruption Explained, so if you have a half hour to kill, I highly recommend snagging the MPEG-2 file (best quality) and watching it.

Where the players differ, however, is in two areas:

  1. Instead of moving video data to screen RAM, the Domination player CALLs the video frame code, which executes and then returns.
  2. The read-and-fill-memory loop, as well as the interrupt handler pointer management code, deals with variably-sized video+audio packets; this is because the output of the encoder varies in size based on how many changes are present from frame to frame.

Two changes were made to the muxed video+audio stream for Domination that not only enabled handling variably-sized packets, but also sped up disk reads. The first change was to align each packet of video+audio data to disk sector boundaries, which sped up disk reads due to the way DOS handles buffering: DOS will normally transfer disk requests into its own internal buffers (if you’ve ever wondered what the BUFFERS= line in CONFIG.SYS was for, now you know) and then copy to the caller’s buffer. However, if the caller requests reading a sector-aligned offset (and amount) into a normalized paragraph-aligned pointer, DOS is smart enough to instruct the BIOS to transfer the data directly to the caller’s buffer. This made disk reads return a little quicker, as DOS’s usual double-buffering step was avoided.
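On the muxer side, keeping packets sector-aligned just means padding each one out to a 512-byte boundary before writing the next. A minimal sketch in C, with assumed names (this is not the actual tool):

#include <stdio.h>

#define SECTOR 512

/* Write one video+audio packet padded to a sector boundary.  The return
   value (the packet's size in sectors) is what the end-of-file index
   described below records. */
unsigned write_packet(FILE *f, const unsigned char *pkt, size_t len)
{
    static const unsigned char pad[SECTOR];              /* zero padding    */
    unsigned sectors = (unsigned)((len + SECTOR - 1) / SECTOR);
    fwrite(pkt, 1, len, f);
    fwrite(pad, 1, (size_t)sectors * SECTOR - len, f);   /* round up to 512 */
    return sectors;
}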

The second change to the player was to keep track of how large each video+audio packet was. Rather than put size headers before each chunk, or scan the entire file before starting to determine sizes, I chose to write an index to the end of the file stream. The index consists of one byte per video+audio packet, where each byte indicates the size of the packet in sectors; this is possible because each packet is guaranteed to be aligned to sectors. (This limits the size of each packet to (255*512) = 127.5KB, but we will never see a single frame that large in practice; in fact, we will never see a packet larger than 64KB because that is the 16-bit real-mode segment size limit.)
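Continuing the illustrative muxer sketch from above, building that index is just a matter of remembering each packet's sector count and appending those bytes after the last packet. (How the real player locates the index and learns the packet count is not shown here; the names are again my own.)

#define MAX_PACKETS 65536               /* hypothetical upper bound */

static unsigned char packet_index[MAX_PACKETS];
static unsigned      packet_count = 0;

/* Write a packet and remember its size in sectors for the index. */
void mux_packet(FILE *f, const unsigned char *pkt, size_t len)
{
    packet_index[packet_count++] = (unsigned char)write_packet(f, pkt, len);
}

/* After the last packet: the one-byte-per-packet index lives at the end. */
void finish_mux(FILE *f)
{
    fwrite(packet_index, 1, packet_count, f);
}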

The part of the player I spent the most time enhancing for the Domination format was, to my surprise, the memory handling. The original player used a circular buffer (a FIFO queue) that fed disk data into the “head” while the interrupt handler grabbed data from the “tail”. Typical circular buffers are divided into fixed-size blocks, but I had just switched to variably-sized packets. I needed a FIFO that could:

  • Accept variably-sized allocations/deallocations
  • Manage a linear area of memory with wraparound without requiring that area to be a power-of-two length
  • Be interrupt-safe (i.e. no locks/mutexes/semaphores required to put data in or pull data out)
  • Always return paragraph-aligned normalized pointers
  • Perform all of the above while dealing with 16-bit real-mode segment limitations

In the end, I wrote code that was not so much a typical circular buffer, but more of a heap manager where the caller “reserves” an amount of memory, receives a pointer to an area they can use, fills it with data, and then “commits” the area. Prior commits can be retrieved in FIFO order using another function. The design is interrupt-safe because reserves and commits don’t touch the data structures that are used by retrievals, and vice versa. I know it sounds stupid to be proud of a data structure, but I was pretty pleased with myself when I finished implementing it. (I’ve since learned there is a similar construct out there called a bip buffer, but a bip buffer wastes more memory and time than what I came up with.)
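To illustrate the idea (and only the idea: this is a simplified modern-C sketch under my own assumptions, not the actual player code, which also has to deal with paragraph alignment and 16-bit segment limits), here is a reserve/commit FIFO in which the producer owns its own indices, the consumer owns its own, and neither ever writes the other's:

#include <stddef.h>
#include <stdint.h>

#define POOL 49152u                      /* hypothetical buffer size */

static uint8_t buf[POOL];
static volatile size_t wr      = 0;      /* producer-owned: next free byte         */
static volatile size_t wrap_at = POOL;   /* producer-owned: where valid data wraps  */
static volatile size_t rd      = 0;      /* consumer-owned: oldest unread byte      */
static size_t reserved_at;               /* producer-private bookkeeping            */

/* Producer (disk loop): ask for len contiguous bytes; NULL means retry later. */
uint8_t *fifo_reserve(size_t len)
{
    size_t r = rd, w = wr;
    if (w >= r) {                                    /* free space: [w,POOL) and [0,r) */
        if (POOL - w >= len) { reserved_at = w; return &buf[w]; }
        if (r > len)         { reserved_at = 0; return &buf[0]; }   /* wrap early     */
    } else if (r - w > len) {                        /* free space: [w,r)              */
        reserved_at = w; return &buf[w];
    }
    return NULL;
}

/* Producer: publish the bytes just written into the reserved region. */
void fifo_commit(size_t len)
{
    if (reserved_at == 0 && wr != 0)
        wrap_at = wr;           /* tell the consumer where the old lap ends... */
    wr = reserved_at + len;     /* ...before moving the publish index          */
}

/* Consumer (interrupt handler): oldest committed packet, or NULL if none yet.
   Assumes retrievals use the same sizes, in the same order, as the commits
   (true for the player, since packet sizes come from the file's index). */
uint8_t *fifo_retrieve(size_t len)
{
    size_t r = rd, w = wr;
    if (r == wrap_at && w < r) r = 0;    /* producer wrapped; follow it */
    if (r == w) return NULL;             /* empty */
    rd = r + len;
    return &buf[r];
}

The early wrap in fifo_reserve is what allows variably-sized regions without a power-of-two buffer, and the strict inequalities keep the publish index from ever colliding with the read index.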

In Search Of Test Material

With two fully-functioning video modes supported by the encoder, I now had to choose some test material to show it off. For the color portion, I decided to use exactly the same footage that I’d used 10 years earlier with Corruption, so that people could directly compare them and see for themselves that the newer method was better overall. For the B&W portion, I had difficulty picking material; I was about to go with one of the old Apple silhouette-style iPod/iTunes commercials until I saw a Game Sack episode where a homebrew Genesis cart was shown playing the Bad Apple animation. I was hooked — it was perfect test material. High-contrast shadow puppetry lent itself very well to my “animation compiler” because, most of the time, very little is actually changing per frame, and what is changing has very clean residuals.

Finding a clean source of the Bad Apple animation proved more difficult than I thought it would be. Every YouTube version had resizing or transcoding artifacts, so after some research I found the Japanese video-sharing site it originated from and grabbed it from there, which resulted in an extremely clean 30fps source to work with.

Conclusion

8088 Domination may be my best work; I’m very proud of the result. I had to think creatively and unconventionally to solve problems. If people are considered artists based on the uniqueness and viewpoint of their output — paintings, novels, musical works — then I’d like to think programmers can be artists too, and be judged by the same conventions.

I want to fix a few bugs in the source and tidy it up, and once I’ve done that I will release the source, executables, and documentation so that you can create your own videos using the system. Until then, enjoy a video of the competition I showed it at, complete with audience reaction:

Posted in Demoscene, Digital Video, Programming, Vintage Computing | 17 Comments »