Oldskooler Ramblings

the unlikely child born of the home computer wars

Vintage DOS Memory Conventions Explained

Posted by Trixter on August 20, 2018

Newcomers to the vintage PC/DOS computing hobby routinely confuse expanded (EMS) memory with extended (XMS) memory.  Which is which?  What does my hardware support?  Do I need them?  This article attempts to explain, as succinctly as possible, what each “type” of PC memory is and how it is used.  It is not an extensive technical dive, but an explanation of DOS memory conventions for the novice DOS user.

Protected-mode memory (DPMI, VCPI, DOS extenders, etc.) is not covered in this article because it typically doesn’t require any user intervention to work (for example, DOOM comes with DOS4GW.EXE, which loads and works automatically).

Terminology used in this article

PC: Any IBM PC or compatible system from the 1980s and 1990s running DOS.
8088: Shorthand for the entire 8086 CPU class, which includes the 8086, the 8088, the NEC V20, and the NEC V30.
640KB: The typical limit of memory DOS can access.  (Exceptions for DOS to access higher than 640KB exist, but are not mentioned to keep this article understandable for the layman.)

Quick history of memory on the PC

The 8088 CPU in the first IBM PC could access up to 1MB of memory.
Later, PCs built with the 80286 CPU could access up to 16MB of memory.
Later still, PCs built with the 80386 CPU could access up to 4GB of memory.

All PCs can directly access the first 1MB of memory.  Accessing memory beyond the first 1MB requires functionality only 80286 and later CPUs have.  Memory that extends past the first 1MB is called Extended Memory.
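The 1MB limit comes from the 8088’s segmented addressing: a physical address is the 16-bit segment times 16 plus the 16-bit offset, yielding a 20-bit address.  Here’s a quick sketch of the arithmetic (illustrative Python, not anything from the original article):

```python
def physical_address(segment: int, offset: int) -> int:
    """8088 physical address: segment * 16 + offset (a 20-bit quantity)."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

# The last byte of the 1MB space, reached as FFFF:000F:
print(hex(physical_address(0xFFFF, 0x000F)))  # 0xfffff (1,048,575 = 1MB - 1)

# FFFF:FFFF sums to 0x10FFEF; an 8088 wraps this back into the first 1MB,
# while an 80286 or later can actually reach it (the basis of the HMA).
```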

Because 8088 CPUs can’t access Extended Memory, special memory boards were created for them that fit into their system expansion slots.  These boards contain extra memory visible somewhere in the first 1MB, where the 8088 can see it.  The memory provided by these special expansion boards is called Expanded Memory.

Memory types in detail

EMS, XMS, conventional memory, UMBs… what does it all mean?  Here are some definitions:

Conventional Memory

What is it?  The first 1MB of memory visible to all 8088 and higher CPUs.  DOS is loaded into the area from 0-640KB and manages it.  The area above that, from 640KB to 1024KB (1MB), is typically used by the system hardware and not generally available to DOS programs.
How do DOS programs use it?  DOS provides function calls for allocating, resizing, and deallocating memory blocks for program use.  (Alternatively, programs can directly access any portion of the first 1MB of memory, although doing so can be risky if the programmer is inexperienced.)
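As a rough illustration of that block-based allocation, here is a toy first-fit model in Python.  It is purely a sketch: real DOS programs invoke INT 21h functions 48h/49h/4Ah and receive segment addresses, none of which appears literally below.

```python
PARAGRAPH = 16  # DOS hands out memory in 16-byte "paragraphs"

class ToyArena:
    """Toy first-fit allocator loosely modeled on DOS's 0-640KB arena."""
    def __init__(self, paragraphs=40960):          # 40960 * 16 bytes = 640KB
        self.holes = [(0, paragraphs)]             # free (start, size) spans
        self.blocks = {}                           # start -> allocated size

    def alloc(self, paragraphs):
        for i, (start, size) in enumerate(self.holes):
            if size >= paragraphs:                 # first hole big enough wins
                self.holes[i] = (start + paragraphs, size - paragraphs)
                self.blocks[start] = paragraphs
                return start                       # stands in for a segment
        raise MemoryError("insufficient memory")   # DOS would return an error

    def release(self, start):
        self.holes.append((start, self.blocks.pop(start)))

arena = ToyArena()
a = arena.alloc(4096)    # request 64KB (4096 paragraphs)
b = arena.alloc(4096)
print(a, b)              # 0 4096 -- blocks handed out back to back
arena.release(a)         # freed space becomes a hole for later requests
```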

Extended Memory

What is it?  Memory visible beyond the first 1MB, usually physically located directly on the motherboard, but sometimes provided by adding memory cards.
How do DOS programs use it?  Via an API called the eXtended Memory Specification, also called XMS.
What provides XMS?  XMS is provided by a driver called HIMEM.SYS, loaded by CONFIG.SYS at boot time.
How does XMS work?  XMS provides function calls that DOS programs can use to copy data between conventional memory locations and extended memory locations.
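The copy-in/copy-out model can be sketched like this (a toy Python simulation; a real DOS program would fill in a move structure and call XMS function 0Bh, “Move Extended Memory Block”, through the HIMEM.SYS entry point):

```python
# Extended memory is not directly addressable by a real-mode program, so this
# toy model only ever touches it through explicit copies.
extended = bytearray(64 * 1024)          # stands in for an extended memory block
conventional = bytearray(b"level 2 map data")

# Push data out of precious conventional memory into extended memory...
extended[0:len(conventional)] = conventional

# ...and later copy it back into a buffer the CPU can address directly.
workbuf = bytearray(len(conventional))
workbuf[:] = extended[0:len(workbuf)]
print(workbuf.decode())                  # level 2 map data
```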

Expanded Memory

What is it?  Memory provided by special expansion cards.
How do DOS programs use it?  Via an API called the Expanded Memory Specification, also called EMS.
How is EMS provided?  Via a driver provided by the expansion card manufacturer.  Each card has a different driver and you must use the correct one for your card.  The driver configures the card for use, and provides EMS function calls that DOS or programs can use to access the memory on the card.
How does EMS work?  EMS boards provide a small “window” into the memory they contain, and this window is located somewhere in the first 1MB of RAM where 8088 CPUs can access it.  To access more memory, programs issue EMS function calls that “move” the window to a different area on the card, changing what portion of the larger memory shows up in the small window.
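The window-moving mechanism can be sketched as a toy simulation (illustrative Python; in real EMS code, a program asks the driver to map a page via INT 67h and then reads the page frame directly, often at a segment like E000):

```python
# Toy model of EMS bank switching: a 16KB "page frame" window in the first 1MB
# maps onto 16KB pages of a much larger memory board. Remapping the window
# changes which page is visible -- no data is copied.
PAGE = 16 * 1024
board = bytearray(2 * 1024 * 1024)          # a 2MB expanded-memory board
board[0 * PAGE:0 * PAGE + 5] = b"page0"
board[5 * PAGE:5 * PAGE + 5] = b"page5"

mapped_page = 0                             # which board page the window shows

def window(offset, length):
    """Read through the page-frame window at the currently mapped page."""
    base = mapped_page * PAGE
    return bytes(board[base + offset : base + offset + length])

print(window(0, 5).decode())   # page0
mapped_page = 5                # an EMS "map page" call moves the window
print(window(0, 5).decode())   # page5
```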

A quick note about EMS:  If you want to run a program that uses EMS, but you don’t have an EMS board in your computer, don’t fret: You can emulate EMS on any 80386 or higher.  This is done by loading a memory manager such as EMM386 or QEMM.  One of the services provided by a memory manager is to section off a portion of Extended Memory and present it in response to EMS function calls, like a real EMS board would.

Upper Memory

What is it?  The upper portion of Conventional Memory located between 640KB and 1024KB (1MB).  There is typically no user-accessible memory in this area, but 80286 and higher systems can relocate portions of Extended Memory into this area for use by DOS programs.
What is it used for?  The total amount of upper memory available is typically very small, between 64KB and 128KB, but this amount can still be useful for loading small memory-resident programs or drivers outside of Conventional Memory.
How do DOS programs use it?  XMS provides functions for allocating and deallocating memory in this area in units of “blocks”, called Upper Memory Blocks, or UMBs.
How are UMBs provided?  For 80386s and higher, DOS comes with HIMEM.SYS and EMM386 which, loaded together, provide UMB functionality.  For 80286s and lower, there are programs (such as QRAM or USEUMBS) that use “shadow ram” functionality provided by the 80286 system’s chipset to perform the mapping.
How does upper memory access work?  Once a program is provided an upper memory block via a UMB function call, it can be accessed the same way conventional memory is accessed, i.e. directly.

Guidelines to running DOS programs

  1. First off, simply try to run a DOS program.  Most run without needing any special memory configuration.
  2. If you have an 80286 or higher, always load HIMEM.SYS in your CONFIG.SYS file.  There’s no harm in installing it even if programs don’t use it.
  3. If you want to load a single program on a 386 or higher to manage all of this for you, install QEMM.  QEMM will manage all of your memory and provide your programs with either XMS or emulated EMS based on what each program asks for.
  4. Finally, if a program claims to support both EMS and XMS, choose EMS.  EMS is generally faster than XMS, because remapping a page into view is quicker than copying blocks of data back and forth.

Posted in Vintage Computing | 3 Comments »

Using a Sound Blaster in a PC/XT-class system

Posted by Trixter on August 3, 2018

I’m asked this a few times every year: Can you put a Sound Blaster in an IBM PC, PC/XT, or otherwise 8088/8086-class system?  If you can, is there anything that can use it on that class of hardware?

The quick answer is yes and yes:  As long as there are no hardware conflicts, you can put any of these:

  • Sound Blaster 1.0
  • Sound Blaster 1.5
  • Sound Blaster 2.0
  • Sound Blaster Pro
  • Sound Blaster Pro 2.0
  • Pro Audio Spectrum 8
  • Pro Audio Spectrum 8+
  • Pro Audio Spectrum 16
  • Thunderboard

…into any system and it should work fine.  You’ll be able to use the onboard joystick port, and have Adlib/FM sound effects and music.

The longer answer has a caveat: Fewer than a hundred sound-enhanced games will run on this class of hardware, and fewer than 20 of those will use the digital sound channel.  The Sound Blaster became commercially available to consumers in 1989, right as the 8088 era was coming to a close.  Only a handful of games from that time period supported the Sound Blaster, could still run on 8088-class hardware, and supported the CGA graphics typical of 8088-class systems.

But, if you have a souped-up NEC V20/NEC V30 XT, you’re feeling adventurous, and you really want to hear some sampled (“digitized”) sound come out of your system, you can try running these:

Commercial games:

1) Used the Activision OmniMusic driver. There might be more games compatible with 8088+CGA that use this driver.
2) Need the rare “16-color” version of this game which supports CGA 640×200 2-color mode

Shareware games:



There might be more than the above, but this is all I can remember personally testing.

For possibly much more comprehensive information on this subject, you can do no better than to check out Great Hierophant’s Nerdly Pleasures blog, which performs deep technical dives into these and other subjects.

Posted in Demoscene, Gaming, Vintage Computing | 13 Comments »

Public Service Announcement: Do Not Use The eBay Global Shipping Program

Posted by Trixter on July 31, 2018

If you sell rare, unique, or otherwise irreplaceable items on ebay, do not use eBay’s Global Shipping Program.  Doing so grants eBay’s partners the right to effectively take your item and resell it without your knowledge.  What follows is an explanation of what the Global Shipping Program is, why it exists, and why you shouldn’t use it for items that are difficult or impossible to replace.

Shipping internationally is a hassle in the USA: It requires multiple forms, a declaration of value, and for anything over a certain size, interaction with a shipping service employee.  However, if you’re selling rare items with global interest, such as vintage computers or software, dealing with international shipping is a necessary evil.  You could always not ship internationally, but that cuts out a large section of your audience and potential profits.

To try to ease the pain of international shipping for USA sellers, eBay introduced the Global Shipping Program (“GSP”).  The GSP allows a USA seller to market to international buyers, but then ship to a central USA domestic address.  Once received, eBay then handles delivery of the item to the international buyer.  The benefit to both parties is straightforward: The buyer gets access to more sellers that will ship to their country, and the seller only has to pay for a domestic delivery and avoids international shipping hassle.

The central USA address facility is subcontracted out to Pitney Bowes (“PB”).  PB is a business, and they need to make money too, so they collect many packages to a single destination country and then ship them all off at once in a single freight shipment.  (This is usually facilitated by repacking items into smaller boxes so that PB can fit more items per shipping container.)  Large freight shipments are much cheaper than shipping packages individually, so the difference between what the international buyer paid, and what the eventual shipping cost is, becomes PB’s profit.

What most people don’t realize is that eBay’s agreement with PB allows PB to steal your items and resell them.  Worse, they get caught doing it all the time.

The Global Shipping FAQ contains this language:

What happens to lost, damaged, or undeliverable items?

GSP items purchased by your buyer may be covered by an eBay Money Back Guarantee or PayPal Purchase Protection program. eBay and Pitney Bowes shall have no liability and shall have, in their discretion and in any manner that they prefer, the right to dispose of or liquidate parcels (and their contents) that eBay or Pitney Bowes conclude are undeliverable.

At first glance, this seems like protection for PB such that they won’t get in trouble for shipping things that are dangerous or prohibited by a certain country.  In those cases, the parcels are likely disposed of.  But this wording gives PB the right to “conclude” that any item is “undeliverable”.  Remember, PB is a business, and they need to make money, so what happens when they have only a few huge boxes (for example, vintage computers) to ship to an international destination, and/or the contents of those boxes are fragile and cannot be repacked and made smaller to fit into a shipping container?  PB would lose money passing them on to the buyer, and they’d also lose money returning them back to the seller.  So, rather than lose money in either case, they classify the item as “undeliverable” and it goes nowhere.  Their agreement with eBay (wording above) grants them the right to do this.

The reason this doesn’t turn into instant outrage on an hourly basis is that eBay compensates both the seller and buyer when this happens.  The seller gets to keep the money they received for the item, and since the item can’t be delivered, the buyer gets a refund for what they paid.  If the item is something common, then the buyer can just start their shopping again from a different seller or store.

But what happens to the original item if PB doesn’t want to deliver it?  The wording in the agreement grants PB the right to “liquidate” parcels.  While most people think liquidated means destroyed, it doesn’t:  In this context, it means apportioned, which is another way of saying reallocated and redistributed:

What happens to items that can’t be delivered to my buyer?

If eBay and/or Pitney Bowes determines that a GSP item is undeliverable eBay and/or Pitney Bowes may elect to dispose of, destroy or liquidate the undeliverable parcel, at which time title to the GSP item shall transfer automatically from you to eBay and/or any third party designated by either eBay or Pitney Bowes

In other words, the original item is given to a third party who can do whatever they want with it.  And what the third party does is resell it… back on eBay… using your original listing!

Need proof?  Here’s a listing from someone who sold an IBM PC to an international buyer:


They were then informed by eBay that their package was undeliverable, and both parties were compensated.  Then, a few weeks later, this listing showed up on eBay:


Not only is the original item for sale (with “No international shipping” predictably part of the shipping conditions), but the pictures were lifted wholesale from the original listing.

This is surely cause for outrage.  However, by merely using the GSP, you transfer all of your rights and have no recourse.  eBay’s official wording makes it clear that once the item is received by PB, not only do you lose all title to the item, you also lose any intellectual property rights your listing may have had:

Will content from my original listing be used if the item is liquidated?

Yes. In the event eBay or Pitney Bowes elects to dispose of or liquidate a GSP item, you grant to eBay, Pitney Bowes and/or any third party designated by either eBay or Pitney Bowes (as eBay and Pitney Bowes may elect) a royalty-free, perpetual, irrevocable, non-exclusive, transferable license to any intellectual property rights in the text or images presented in the original listing related to the GSP item, which exists or ever existed, including, but not limited to, the right to reproduce, prepare derivative works based on, or display, any copyrightable elements, for the limited purpose of disposal or liquidation of the GSP item. You acknowledge that eBay or Pitney Bowes’s election to dispose of or liquidate the GSP item and allow you to retain your buyer’s payment for the GSP item is sufficient consideration for the transfer of title to eBay and/or any third party designated by either eBay or Pitney Bowes (as eBay or Pitney Bowes may elect in their sole discretion) and the grant of the license.

Always read the fine print.

Bottom Line

If you are selling something irreplaceable, such as a vintage computer or highly rare vintage software, do not, under any circumstances, use the GSP to sell internationally.  Doing so grants eBay the right to effectively destroy your item on a whim instead of delivering it.

Losing a vintage item is much more painful than simply being out the money you paid (or received) for it — the true cost is the loss of something that cannot be replaced.  Any seller of such items cannot, in good conscience, allow rare items to be put in this position.

I’d like to thank my colleagues at the VCF Forum for investigating and bringing this to light.

Posted in Vintage Computing | Leave a Comment »

What they don’t tell you about getting older

Posted by Trixter on July 16, 2018

I’m nearing 50.  I’m developing the usual number of physical issues for someone who doesn’t take care of themselves, but nobody told me about the mental issues that follow.

The human brain is an organ, just like every other organ in your body.  It’s highly specialized, but then again so are most major organs.  As we age, our organs don’t perform as well: We are slower to perform, slower to adapt, slower to heal.  Sometimes organs that performed well in our youth start losing the ability to perform their primary function, such as the pancreas, whose decline can lead to adult-onset diabetes.  And, I’m now finding out, the brain suffers from this as well.

It’s no secret that the elderly have easily-identifiable mental issues, mostly slower processing and difficulty forming short-term memories.  What isn’t as well communicated is how less-than-peak-performance brain function affects you long before you become that old.  In the last few years, I find myself:

  • Sensitive to emotion and empathy.  I guess this comparison is inevitable, given my nerd pedigree, but it’s very much like Bendii Syndrome, where you feel emotion more strongly.  There have been times when I was expected to be impartial in a situation, only to find myself quite subjective and borderline irrational based on how I personally felt.
  • Feeling a pervasive sense of loss.  When I first started out in my career (and hobbies), I had an experience and intellectual advantage in my field.  Someone much older than me described me as “the smartest kid in the room”, and I definitely felt that way up until about 8 years ago.  You can see a definite correlation between how much I felt I was losing that and my demoscene productions from 2013 through 2015 — almost as if I was desperately trying to cling to that feeling of being the smartest kid in the room.
  • Being resistant to change.  As emotional response increases, logical reasoning has to fight harder to win.  There are many changes in the last few years I’ve resisted because I felt a certain way about them, even when logically they made perfect sense to me.
  • Tiring after periods of concentration.  What happens when you work a muscle too much?  It gets tired and hurts.  What happens when I have to learn something new, or concentrate on a difficult problem?  I feel fatigued.

There are ways to mitigate the above, but the cruel irony is that your brain is the organ that has to fix itself, and it’s malfunctioning.  I should get more sleep, exercise, eat better — but my brain wants everything to just go away.

Posted in Uncategorized | 5 Comments »

You cannot violate the laws of physics

Posted by Trixter on May 4, 2018

It’s technology refresh time at casa del Trixter.  I’m dabbling in 4K videography, and upgrading my 9-year-old i7-980X system to an i7-8700K to keep up.  Another activity to support this is upgrading the drives in my home-built ZFS-based NAS, where I back up my data before it is additionally backed up to cloud storage.  The NAS’ 4x2TB drives were replaced with 2x8TB and 2x3TB (cost reasons) in a RAID-10 config, and it mostly went well until I started to see disconnection errors during periods of heavy activity (i.e. a zpool scrub):

Apr 30 19:32:07 FORTKNOX kernel: sd 0:0:2:0: [sdc] Device not ready
Apr 30 19:32:07 FORTKNOX kernel: sd 0:0:2:0: [sdc] 
Apr 30 19:32:07 FORTKNOX kernel: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Apr 30 19:32:07 FORTKNOX kernel: sd 0:0:2:0: [sdc] 
Apr 30 19:32:07 FORTKNOX kernel: Sense Key : Not Ready [current] 
Apr 30 19:32:07 FORTKNOX kernel: sd 0:0:2:0: [sdc] 
Apr 30 19:32:07 FORTKNOX kernel: Add. Sense: Logical unit not ready, cause not reportable
Apr 30 19:32:07 FORTKNOX kernel: sd 0:0:2:0: [sdc] CDB: 
Apr 30 19:32:07 FORTKNOX kernel: Read(16): 88 00 00 00 00 00 08 32 11 70 00 00 01 00 00 00
Apr 30 19:32:07 FORTKNOX kernel: end_request: I/O error, dev sdc, sector 137498992
Apr 30 19:32:07 FORTKNOX kernel: sd 0:0:2:0: [sdc] Device not ready
Apr 30 19:32:07 FORTKNOX kernel: sd 0:0:2:0: [sdc]

At first I thought the drive was bad, so I replaced it.  I then saw exactly the same types of errors on the replacement drive, so to make sure I wasn’t sent a bad replacement, I tested the drive in another system and it passed with flying colors.  So now the troubleshooting began:  Switch SATA ports on the motherboard:  No change.  Switch SATA cables: No change.  Switch SATA power cables: No change.  Switch SATA cables and ports with one of the drives that was working:  No change; that specific drive kept reporting “Device not ready”.  I even moved the drive to a different bay to see if the case was crimping the cables to the drive when I put the lid back on:  No change.

It was really starting to confuse me as to why this drive wouldn’t work installed as the 4th drive in my NAS.  I started to doubt the aging Xeon NAS motherboard, so I bought a SAS controller and a SAS-to-SATA forward breakout cable so that the card could handle all of the traffic.  This seemed to work at first, but eventually the errors came back.  I then started swapping SATA breakout ports, then entire SAS cables, then eventually a replacement SAS controller.  In all instances, the errors eventually came back on just that single drive, a drive that worked perfectly in any other system!

The solution didn’t present itself until I started building my replacement desktop system based on the i7-8700K.  In that system, I opted for a modular power supply to keep the cable mess at a minimum (highly recommended; I’ll never go back to non-modular PSUs).  When I was putting my video editing RAID5 drives into the new desktop, I noticed with irritation that each of the modular SATA power cables had only three headers on it instead of four.  This sucked because I was hoping to use one SATA power breakout cable for all four drives, and now I’d have to use two cables, which added to the cable clutter inside the case.  This power supply was Gold-rated and high-wattage — why put only three SATA power headers on a breakout cable?  In thinking about the problem, I came to the conclusion that the makers of the power supply were likely being conservative, to avoid exceeding the limits of what that rail was designed to provide.

And that’s when I remembered that I was putting four drives on a single rail back on the NAS, and not three like the new power supply was enforcing.  When I moved the misbehaving NAS drive to a SATA power header on another rail, all of the drive disconnection problems went away.  Whoops.

How did this work before?  The power draw of 2x8TB + 2x3TB drives was just high enough to be dodgy, whereas the previous configuration of older 4x2TB drives was not.  The newer drives draw more power than the older drives did.

Lesson learned, and now I have spare controllers and cables in case there’s a real failure.

Posted in Technology | 5 Comments »

A Reason to Disregard Copyright Law

Posted by Trixter on April 25, 2018

(This is a short rant about morals and obligations, not the ability to make money off of your work.  If you rely on copyright law to earn your living, you are not the audience for this article.  It is also not a call to action to break the law, which you do at your own risk without holding me responsible.)

Everyone knows that anything created more than 90 years ago is no longer eligible for copyright and is in the public domain, right?  Not so fast: In 2011, copyright law was amended in a way that affects audio recordings:  The composition might be public domain, but any recordings of it are not.  From https://www.copyright.gov/docs/sound/pre-72-report.pdf :

As a consequence of this legal construct, there is virtually no public domain in the United States for sound recordings and a 55 year wait before this will change. To put this in perspective, one need only compare the rules of copyright term for other works. For example, a musical composition published in 1922 would have entered the public domain at the end of 1997, but a sound recording of that same musical composition that was fixed the same year will remain protected for another 70 years, until 2067.

That means this recording from 1887 will somehow have a copyright term of 180 years.  That’s seven generations.  But hey, the recording artist’s great-great-great-great-great-grandchildren will still have the right to earn money off of the work or remove it from public consumption, and that’s the important thing, right?

It’s laws like this that turn people like Jason Scott into crazed archival psychopaths.  While I don’t agree with his methods, we definitely share the same concerns:  Copyright law in the US is not merely unrealistic, but works to actively and permanently destroy original works every year by preventing their archival.

I had this conversation with my father recently, who is a numismatist internationally recognized as an expert in some areas in his field. He has archives of the articles he’s written over the past 50 years, but wants none of it archived in a public forum because the work is technically under copyright under various publications’ names, and he holds a firm moral stance on playing by the rules. In my opinion, this dooms his work: None of those publications have publicly-accessible archives. His articles are already inaccessible (unless some library somewhere has numismatic publications from 40 years ago, which is not a realistic expectation as libraries these days are migrating towards digital as a way to save money and offer more services), so unless somebody does something, his work over the last half century will likely be lost forever.

Nobody cares about your work enough to preserve it after you’re gone. If you don’t do it, it won’t get done. The changes in information technology over the last 20 years have enabled the human race to produce so much content per second, over so many different topics and channels, that anything you produce has hardly any audience.  Worse, any tiny audience you might have rarely has the free time to consume it.  At the end of the day, that leaves a handful of people who consume your content and care about preserving it for future consumers, but the existing state of copyright law enforces penalties for that.  The days of printing a book and expecting it to be available in public libraries for 50+ years are long gone.

For those who desire copyright law so that they retain control over their work, realize this:  Our contribution to the world — our only purpose in life — is what we do, and what we leave behind.  If you write a book, it’s ostensibly because you have something to say, and want people to read what you have to say.  You create a work that you’re proud of, and release it into the world.  Copyright law grants you the right to request it destroyed at a later date, but it is selfish, petty, and immature to do so.

Posted in Software Piracy, Uncategorized | 1 Comment »

Color #23 (CGA Update)

Posted by Trixter on February 25, 2018

There was a time when conversion of real-world images to computer screens was considered a black art. Those days are gone, but it’s still fun to dabble.

The Mann Cave

My friend Jim ‘Trixter’ Leonard is a bit of a CGA aficionado.  You know, that computer graphics mode from the 1980s?  He did a nostalgic conversion of my photo into one of the two available 4-color palettes (I think that’s correct?) and the results are dithertastic.  Take a look.


Pretty remarkable how well the photo holds up with such a severe restriction on colors.  Thanks for the fun conversion, Jim!

View original post

Posted in Uncategorized | Leave a Comment »

The first 256-color game on the IBM PC

Posted by Trixter on October 1, 2017

In April of 1987, IBM announced the PS/2 line of systems. This was their advancement of the PC standard, but also an attempt to come up with a new architecture that could not be cloned without paying a license fee. Of the new features announced, one of the most significant was the addition of MCGA, which created the 256-color mode 13h that we are so familiar with today. VGA was announced at the same time, and emulated all previous color standards including MCGA’s mode 13h.

This background is necessary to answer the following question: What was the first 256-color game on the PC? This was posed recently on a vintage gaming group on Facebook, but it’s a loaded question because it isn’t specific enough. It could mean any of the following:

  • What was the first PC game to use MCGA’s 256-color video mode (i.e. mode 13h)?
  • What was the first PC game to use mode 13h to display more than 16 colors onscreen simultaneously? (Many early MCGA games used mode 13h only to reproduce the existing EGA/Tandy 16-color graphics the game shipped with, instead of taking full advantage of the extra colors mode 13h provided.)
  • What was the first PC game to fully take advantage of mode 13h to show (nearly) 256 colors onscreen simultaneously?

The aforementioned Facebook group conversation devolved into relying on memories and friendly arguments. This is always dangerous, because our minds are not static records. To get an answer out of memories, you need a very large group of people coming to a consensus; even then, that’s not guaranteed to be the correct answer, just an agreed-upon answer. (Some historians would argue that collective memory is the only record that matters, but a discussion of archiving history is out of scope for this article.) In that Facebook conversation, and as previously reported in various places like Wikipedia, the answer was 688 Attack Sub. The latest executable file date in that game is 1989-03-04. (For reference, all dates in this article are in the format YYYY-MM-DD.) That seemed a little late to me; tech adoption moved a little more slowly back then, but even so, it seemed odd that it would take two years for the first game to come out that took advantage of a new graphics standard. Was 688 Attack Sub really the first to support the 256-color mode provided by MCGA and VGA?


Thanks to MobyGames and various archives floating around the internet, we can perform actual research to answer these questions, instead of relying on memory and friendly arguments. I co-founded MobyGames for many reasons, but the biggest reason was to answer questions like this, so I immediately took up the task. Here was my methodology:

  1. I started at MobyGames by looking up all games that support VGA and MCGA in the years 1987, 1988, and 1989. Because MobyGames contributors are awesome, all of the search results provided screenshots, so it was easy to weed out any game that claimed VGA/MCGA support but was just displaying the same EGA 16-color graphics the game already had. In other words, for a game to be considered truly taking advantage of mode 13h, it had to display something other than the stock 16-color EGA/Tandy palette.
  2. I then analyzed screenshots to determine the number of unique colors in the screenshot. This was mostly used to determine if a game was actually displaying more than 16 colors, or just showing 16 and changing the palette via VGA color registers.
  3. Once I had a list of candidates, I then grabbed all revisions of the games in question and compared file dates of the executables, and used the latest file date as a “supported here first” date. (This is not the same as the game’s release date, i.e. the date that the game was distributed by the publisher — those are much harder to verify due to lack of archived software publishing trade information, and besides, that’s not the question I’m answering. I’m answering “who was first to support the tech”, not “who got their game into customers’ hands the quickest”.)
  4. Finally, I discarded everything after 1989-03-04, the executable file date of 688 Attack Sub, our known control point.
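Step 2 of the methodology, counting unique colors in a screenshot, boils down to taking the set of distinct pixel values.  A minimal sketch (plain Python over RGB tuples; a real workflow would first decode the screenshot with an imaging library, which is assumed rather than shown here):

```python
def count_unique_colors(pixels):
    """Count distinct colors in an iterable of (r, g, b) pixel tuples."""
    return len(set(pixels))

# A fabricated 4-pixel "screenshot": three distinct colors, one repeated.
screenshot = [(0, 0, 0), (255, 85, 85), (0, 0, 0), (85, 255, 255)]
print(count_unique_colors(screenshot))  # 3
```

A count above 16 is the signal that a game is genuinely using mode 13h’s extra colors rather than redisplaying a 16-color image with a changed palette.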

Here’s what I found, in chronological file date order:

1987-06-08: Thexder‘s earliest shipped revision shows a date of 1987-06-08 on a file named MAINPS, which is the executable overlay that provides “PS/2 graphics support” (i.e. MCGA, i.e. mode 13h). This was surprising to learn, as I assumed Sierra’s AGI games would have supported it first, but the first AGI games to use mode 13h to display graphics of any sort have VG_GRAF.OVL file dates of 1987-11-16 or later. Thexder uses mode 13h to display graphics using 16 custom colors chosen to approximate how the game looks when running in its “best” mode, EGA 640×200.


1987-12-16: Rockford: The Arcade Game displays 32 colors onscreen at once. (32 is significant; see Conclusions at the end of this article for an explanation.)


1988-02-04: DeluxePaint added support for MCGA 256-color graphics in a version with an executable file date of 1988-02-04. While this isn’t a game, it played a large part in mode 13h support in games afterwards.


1988-10-05: Dream Zone uses digitized 16-gray-shade graphics as part of gameplay, then manages to eke out one additional color when drawing the interface, for a total of 17 colors onscreen. This game is included here not because it is a serious contender in our research, but rather because it was the earliest PC game I could find that used digitized graphics that also used the MCGA palette registers to create a custom color palette — or, at the very least, used any of the default 256 colors that are created when mode 13h is initialized. (This beats out Moebius on the PC, which has a later executable date.) Dream Zone is also notable in that it was created by the founders of Naughty Dog when they were 20 years old.


1988-10-15: F-19 Stealth Fighter rasterizes its 3-D graphics using mostly the same 16 colors as EGA during action gameplay, but when playing the game using MCGA, it uses the extra colors to draw the cockpit graphics using many color shades. This brings the number of colors up to 52 onscreen at once in typical gameplay.


1989-03-04: 688 Attack Sub starts off with a 210-color title screen and then uses many colors throughout gameplay: The Soviet control room (also pictured below) uses 73 colors; other screens produce a 16-color interface while an additional 16-element grayscale digitized picture is also shown; all elements of gameplay use a custom palette.



Based on the above, let’s answer the questions we originally posed:

  • What was the first PC game to use MCGA’s 256-color video mode (ie. mode 13h)? Answer: Thexder.
  • What was the first PC game to use mode 13h to display more than 16 colors onscreen simultaneously? Answer: Rockford: The Arcade Game.
  • What was the first PC game to fully take advantage of mode 13h to show (nearly) 256 colors onscreen simultaneously? Answer: 688 Attack Sub.

Historical Context

Based on the above research, and my personal experience during this time period (if my own memory can be trusted), I can expand on these results with a little historical context. Consider this the “trivia” section of the article.

The “VGA” support in Thexder is truly MCGA support; this is reflected in the name of the graphics overlay, MAINPS, the PS part referring to mode 13h support as “PS/2 graphics” before the rest of the industry just started calling mode 13h “VGA graphics”.

But if VGA also supports mode 13h, why don’t you see MCGA graphics when running Thexder on VGA systems? This is because the game is programmed to prefer EGA graphics over other display standards. Thexder’s native graphics are 640×200 in 8 colors as ported over from the original NEC PC-8801 version, which the developer emulated by running the game in EGA 640x200x16 mode. The developer felt the game looked best in its original graphics, so it shows the game that way on any system that can support EGA — and early PS/2 systems didn’t support VGA, only MCGA. So, the only way you can see the MCGA code/colors kick in is if you run it on an IBM PS/2 Model 25 or Model 30-8086, the only systems with MCGA graphics. If you don’t have one of these systems, you can force MCGA graphics to kick in by making a copy of your disk and then copying MAINPS to MAINEG, which will replace the EGA graphics code with the MCGA graphics code.

Rockford: The Arcade Game uses exactly 32 colors. Why is this interesting? Because it shows that the graphics were composed on an Amiga (most Amiga games were 32 colors), and instead of downconverting them to 16 colors like most PC ports did in the 1980s (see Airball for a typical example), the developer decided to support mode 13h so that all 32 colors could be displayed exactly. (Interestingly, the Amiga version of Rockford was released after the PC version.)

F-19 only barely supported mode 13h. It was a nice touch to draw the instrument panel using 32 shades of gray, but this is the only place in the game where additional colors are used. It serves the letter of the law, but not the spirit of the law.

688 Attack Sub makes use of DeluxePaint fonts on all versions of its title screen, which means it used DeluxePaint during development. Also, an earlier demo of the game while it was still being developed (date is 1988) shows that it originally did not fully support 256-color graphics (in the linked screenshot, the background uses multiple colors, but the foreground elements were definitely drawn to the default 16-color palette). Was support of mode 13h in 688 Attack Sub considered only after a paint program existed they could use to create such graphics? Hopefully John Ratcliff, Michael Kosaka, or Wilfredo Aguilar can clarify the development timeline.


I referred to 256-color mode as “mode 13h” because that’s the exact MCGA video mode in use for all games listed above. Later games used VGA’s flexibility to create 256-color modes in higher resolutions, or with multiple video pages, or split-screen, etc., but these were truly limited to VGA, and didn’t show up until 1990 or later. So I chose the exact technical term because it defined a consistent scope to use for comparisons. (If you want to do your own research down the rabbit hole of “What was the first game to tweak VGA into new video modes?” then you can use MobyGames to start that journey.)

All of these answers are to the letter of the law, but not necessarily to the spirit of the law. In my completely subjective and unscientific opinion, the first PC game to truly show off MCGA with 200+ colors on every single screen is Mean Streets (because most of the screens were digitized in some way). Coupled with crude video sequences, and the ability to run at a decent speed on an IBM PS/2 Model 25, it’s a great early game to show off what MCGA was capable of.

MobyGames information may not be complete for the time periods researched. Given the nature of recording history, it may never be complete. If you feel I missed something, please contribute the missing game(s) to MobyGames so that future research can be more accurate.

Posted in Gaming, MobyGames, Vintage Computing | 8 Comments »

How to make better YouTube videos

Posted by Trixter on July 4, 2017

I’ve been dabbling with making content-specific videos for YouTube for many years.  Some of my videos have gotten a respectable amount of views and are full of nice comments, while other well-meaning content has forever fallen into the unwatched abyss.  It’s been fun, but I have been giving thought to producing slightly more polished and professional work to see if I can attract a larger audience and get them interested in retrocomputing and personal computing history.

It is easy to get carried away with investing a lot of time and money into making better videos, so I felt some research was warranted before barreling ahead.  I reached out to a few friends and acquaintances, some with podcasting or YouTube careers (ie. they earn their living making media) for some advice.  I also scoured several “How I make my YouTube content” videos from a few channels that make content similar to what I want to produce.  The advice and research was so helpful that I felt it could benefit anyone else in a similar position, so I’ve reproduced the best tips here.

Before reading on, be aware that these are all tips for you to make better videos that you are happier with.  This is not the same thing as making videos that get more views or are more easily monetized (which usually involves pressure to make videos you are unhappy with).

Don’t be afraid to write a script

It may seem daunting to watch some of your favorite YouTube presenters and see them rattling off perfect copy directly to the camera, and wonder how you could ever get that polished.  Some of them really are good at improvising in front of a camera, but it may surprise you to learn that some of them are actually reading from a teleprompter.  This isn’t a sign of weakness, but rather a way to speed everything up while ensuring accuracy: You can put notes in a script for any cutaways or graphics you want to use, which can help during video editing.  It also ensures you won’t forget anything you want to say or stumble over your words.

On a related note:

Don’t be afraid to put yourself in front of the camera

The only drawback to using a script is that you may find yourself with something you really want to say, but have no graphics or cutaways to illustrate what you’re presenting.  David Murray (aka The 8-Bit Guy) gave me the helpful advice that he uses himself talking on-camera as a fallback when he needs to fill video with something he lacks assets for.  (What a brilliant idea!)

There is another benefit to putting yourself on camera:  It may surprise you to learn that some of your potential audience is actually more interested in you than the material you’re presenting.  Content is king, but what gives a YouTube channel a personal feel is you, a unique person, with a unique viewpoint.  There is a portion of your audience that wants community more than information; they are already familiar with your video’s basic content, but they want to hear other opinions and viewpoints.  It reminds me of Douglas Adams’ quote about nerds:

“…a nerd is a person who uses the telephone to talk to other people about telephones. And a computer nerd therefore is somebody who uses a computer in order to use a computer.”

Nerds want to commune with other nerds, so no matter how awkward you feel, see if putting yourself in front of the camera feels right.  If you can show off who you are at the same time you present content, you win twice.

If you are worried about remembering what you want to say, or being awkward on camera, a cheap teleprompter can help reduce performance anxiety.

Background music is a quick way to add polish

If you think your videos feel “stale”, a quick way to add some production value is adding background music, as long as it isn’t distracting and doesn’t make the narration/talent hard to hear.  That may be obvious, but less obvious is that you can’t use just any music you like, because it might be copyrighted and/or not licensed for YouTube videos.  Luckily, there are repositories and catalogs of music you can use that are either public domain, Creative Commons-licensed, or otherwise free for non-commercial use:

YouTube Audio Library: The most obvious choice, and a good starting point.  Everything there can be used even if you monetize your videos, although a short credit/text in your video’s description may be required.  I like starting here because they allow filtering by duration.

Epidemic Sound: A very generous license (as of this writing, $15 a month) and even if you cancel the service your previous videos are still licensed.

Free Music Archive: A searchable catalog of tons of music with various licenses.  However, be sure to read the fine print, as not all music on the site is licensed for video.

Freeplay Music:  If you’re not monetizing your videos, all music on the site is available for you to use in any video for any length of time.  If you decide you want to go commercial with your content on YouTube, they offer paid licenses for you to use clips of music forever (ie. something like $250 to use one music track for unlimited numbers of videos or URLs).  Check individual licenses for details.

There are many others, as well as multiple resources for finding them, but the above should get you started.

Poor audio recording is WAY worse than poor video

Most people who watch YouTube are forgiving of poorly-created video because of cell phones:  Almost everyone has shot a shaky or poorly-lit cell phone video, and for most people, that kind of quality is “good enough” for getting the point across.  Humans are pretty good at filtering the signal from the noise when it comes to visual content.  Poor audio, however, is more jarring and requires more concentration to listen to.  Unlike bad video, poor audio is much more difficult (or impossible) to fix later in post-production.  If you recorded outside and wind noise completely drowned your voice out, or if you recorded inside in an empty or harsh room and your voice is echoing, there’s no way to fix that.  The only solution is to record decent audio in the first place.  That means:

  • Use the right tool for the right job:  Don’t use an omnidirectional microphone for recording just your voice, but instead use a cardioid mic for close vocal sources, or a shotgun mic for farther-away vocal sources.
  • Don’t record in a harsh room (bare walls, no carpet, etc.) no matter what the situation is.  If you have no choice, hang heavy blankets on the walls to try to stop sound from echoing.

If recording voiceover and podcasts, Jeremy Williams of tested.com swears by this combination:

  • Audio-Technica AT2020USB Cardioid Condenser USB Microphone: You don’t need a functional sound card to use this; it is powered by and transmits data over a single USB cable.  (Any cardioid condenser mic is acceptable, if you don’t want to use USB, but this specific model comes highly recommended.)
  • An AT2020 Foam Windscreen Pop Filter: To protect against plosives from ruining your audio.  Most windscreens and pop filters are cheap, so there’s no reason to skimp on this.
  • RODE PSA1 Swivel Mount Studio Microphone Boom Arm: Good voiceover means being able to record comfortably and consistently while reading copy, and a boom arm facilitates this.  It also makes it easy to store the mic out of the way when not in use.

Some people are wary of getting a dedicated microphone that is only good for picking up voice, and want something more versatile.  A lavaliere microphone of sufficient quality and placement can make a big difference in recording your voice if you are in front of the camera (or even off-camera) and lets you move around while you present your content.  You may also be able to use a field recorder, such as the Zoom H4n, for multiple scenarios.  Corey from My Life In Gaming uses a Zoom H4n for his narration, and it sounds very good.  And because it’s a field recorder, he can record his voice as well as two other audio tracks and mix them in the recorder to save time, or store them individually for flexibility in the video editor.

Good lighting can drastically improve any shooting situation

All cameras record light bouncing off of objects.  The more light that hits a digital camera sensor, the less work the camera has to do.  To illustrate this for yourself, shoot some video outside on a sunny day, then shoot some video inside at night with all the lights on.  Even though your electric lights are bright enough for you to see comfortably at night, you’ll notice that the inside footage will have more noise and grain than the footage shot outdoors.  This isn’t limited to high-end cameras; it’s a general principle that works for any camera, including the camera on your cell phone.  More light — as long as it isn’t a harsh spotlight or casting visible shadows — makes all video footage better.

The best compromise to getting better light in your shots is to use at least two bright photography/videography lights with some softboxes to diffuse the light so it doesn’t cast harsh shadows.  Clint of Lazy Game Reviews was kind enough to let me know what he currently uses, and also used previously when his budget was tighter:

  • For those on a budget, Cowboy Studio makes some affordable lighting kits appropriate for video production.  They’re on the cheap side, but they perform admirably.
  • Clint currently uses LED Panels.  These are more expensive, but give much more flexibility in how you can use them:  Some panels can run off of batteries, some have adjustable color temperature, and overall they generate less heat.

He concluded with the following:

I typically use at least three lights when I’m filming. Usually 2 LED panels in front/side and then something overhead, like another LED light or even just the light in my room.

Practical advice.

Use manual focus and tripods wherever possible (even cell phones)

Autofocus isn’t perfect; it can have a hard time tracking a moving subject, or “refocus” randomly for seemingly no reason when both the camera and subject are completely still.  If you are shooting a stationary (or mostly-stationary) subject, use a tripod, and use manual focus on your camera.  You’ll get consistent results every time.  Amazon makes a cheap, perfectly functional tripod that is great bang for the buck.

“But I shoot everything with my cell phone!”, you cry?  Not to worry, there are cheap cell-phone tripod adapters available from just about everywhere.  And any smartphone made in the last 3 years allows you to fix the focus while shooting, usually by tapping or long-pressing the area of the shot you want to keep focus-locked.

If you have more money than skill, shoot in 4k

If you have a camera or cell phone capable of 4k recording, try shooting everything in 4k even if you are making a 1080p video.  Why?  Because the extra resolution of 4k allows you to fix all sorts of errors in post-production.  Didn’t zoom in enough?  Is your subject framed a little left of center?  Was the camera rotated slightly on the tripod?  Forgot to film a quick close-up shot?  You can fix all of this in the video editor, and even though the 4k footage is manipulated, it still has more resolution than your 1080p target as long as you don’t zoom past a 1:1 ratio (ie. 100%) of the original footage.
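The headroom you get is easy to quantify with a little arithmetic (the 3840×2160 and 1920×1080 figures are the standard UHD and 1080p frame sizes):

```python
# How much reframing headroom does 4k footage give a 1080p edit?
# You can place a 1920x1080 crop window anywhere inside the 4k
# frame, or "punch in" until the crop reaches 1:1 pixels.

SRC_W, SRC_H = 3840, 2160   # UHD ("4k") source frame
DST_W, DST_H = 1920, 1080   # 1080p delivery frame

# Maximum punch-in before you pass 1:1 and start upscaling:
max_zoom = min(SRC_W / DST_W, SRC_H / DST_H)   # 2.0x

# Room to slide the crop around for reframing or stabilization:
slack_x = SRC_W - DST_W    # 1920 pixels of horizontal freedom
slack_y = SRC_H - DST_H    # 1080 pixels of vertical freedom

print(max_zoom, slack_x, slack_y)   # 2.0 1920 1080
```

In other words, UHD gives you a full 2x zoom and an entire extra 1080p frame's worth of sliding room before any quality is lost.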

Putting it all together

Armed with the above advice, I decided to revisit making videos, and came up with this on a whim:

While the result is far from optimal, it represents a quality baseline that I am happy with and that I feel I can steadily improve on.  Here’s how the video was produced, from complete start to finish:

  1. Thought about what I was going to do and how I could make sure I could get video and photos while doing it: 20 minutes
  2. Performed the activity, gathering media along the way: 6 hours (much of it spent waiting for the chemical process to happen)
  3. Wrote a script that contained everything I wanted to say in the narration: 90 minutes
  4. Recorded the narration: 60 minutes (I needed several takes before I felt I sounded like I wanted to, which was a balance between formal, informal, and natural)
  5. Imported all assets into a video editor, laid down the audio track, cut video to the narration, researched license-free images from the web to fill in the gaps for areas I didn’t have original footage for, and researched license-free music to use as a background audio bed: 4 hours
  6. Export + encode + upload the result: 1 hour

So yes, it took ~14 hours to produce a 5-minute video, but that was because I was inexperienced, and I also could have cut the time down by doing some production tasks while I was waiting for the chemical reaction to finish, like researching the background audio beds, writing parts of the narration, or tracking down graphics I needed.  Before you think that’s crazy, a lot of top-notch YouTube content does indeed take 10+ hours to produce a 10-minute video, even when people are experienced.  It all depends on whether or not you require a script to stay focused, or whether you are capable of winging it.

The most important advice multiple people gave me was this:  Make sure you’re having fun.  If it stops being fun, then you’ve turned your hobby into a depressing chore and ruined it, so back off or change direction if you start hating what you’re doing.

For more tips and inspiration, here’s some leisure viewing:

Posted in Digital Video | Leave a Comment »

How to explain programming to the total beginner

Posted by Trixter on June 6, 2017

Explaining what programming is in 2017 is likely best done by introducing the student to Scratch:  All logic is graphical, visual, and onscreen, which matches the most common output medium they’ll develop for.  But how do you explain programming like it’s 1979?

This topic came up recently in a vintage computer forum, where a long-time-listener-first-time-caller to the hobby decided they were going to do what they didn’t do in their youth:  Learn to program on their early home computers (which is one of the things they were originally marketed for anyway).  What struck me about this request is that the OP was truly coming from zero knowledge; they couldn’t comprehend how “ones and zeros” eventually turn into pac-man, or a drop-down menu, or a spreadsheet.

To help explain what programming itself is to the total layman, I came up with a metaphor that I thought was simple enough that I’d reproduce it here:  Programming is like cooking.

How is programming like cooking?  Well, how do you cook food?  To cook food, you need raw ingredients, cooking utensils, and recipes.  You read the recipe, which tells you which utensils to use on which ingredients and in which order.  The end result is a meal.  How does this relate to programming?  Like this:

  • Cooking utensils = CPU instructions
  • Raw ingredients = data
  • Recipe = A computer program

A program is a list of steps (recipe) that use different CPU instructions (utensils) to process data (ingredients) into a new format (a meal).  That’s all programming is.
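As a toy illustration of the metaphor (every name and number here is invented for the example), a single “recipe” might look like this in Python:

```python
# A recipe (program): a list of steps that apply utensils
# (operations) to ingredients (data) to produce a meal (result).

ingredients = [3, 1, 4, 1, 5]          # raw data

washed  = sorted(ingredients)          # step 1: sort the data
chopped = [n * 2 for n in washed]      # step 2: double each value
meal    = sum(chopped)                 # step 3: combine into one result

print(meal)   # 28
```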

Okay, but how do you make anything of any complexity, like a game or a drop-down menu?  Let’s extend our cooking metaphor:  Say you had to provide a ton of different, complex meals for a large catered event.  To do this, you have several recipes, each producing a different result.  For such a large event, it’s possible that some recipes exist only to create portions of food that themselves are used in larger recipes for a more complex meal.  The process is no different than making a single meal — you’re just doing it a lot more, with more recipes, for a bigger and more complex result.  How does this relate to programming? Like this:

  • Catered event = A complex thing the user sees on the screen
  • Many recipes = Many different programs
  • Smaller recipes that produce food to be fed into the larger recipes = Subroutines, library calls, API calls
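Extending the same contrived sketch, the “smaller recipes feeding larger recipes” idea maps directly onto functions calling other functions (the dish names and quantities are, again, made up purely for illustration):

```python
# Small recipes (subroutines) produce partial results that a larger
# recipe (the main program) combines into the finished "catered event".

def make_sauce(tomatoes):
    """A small recipe: reduce a list of values to one."""
    return sum(tomatoes)

def make_pasta(flour, eggs):
    """Another small recipe with two ingredients."""
    return flour * eggs

def dinner():
    """The large recipe: calls the smaller ones and plates the result."""
    return make_sauce([2, 3, 5]) + make_pasta(4, 2)

print(dinner())   # 18
```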

I admit, I was eating lunch when I came up with this metaphor.  But maybe it can help you explain programming to your grandparents.

Posted in Programming, Vintage Computing | Leave a Comment »