First off, why TGA files? Well, my game is meant to have the look and feel of a Nintendo DS or Game Boy Advance game. Those systems draw using a 16-bit color format: 5 bits each of red, green, and blue, plus one alpha bit, or 'ARGB1555' for short. The alpha bit indicates either completely transparent or completely opaque. I vaguely remembered that eons ago, TGA also had a 16-bit ARGB1555 format, so it seemed like a good fit.
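For concreteness, here is a minimal sketch in C of packing and unpacking that format. The helper names are my own, and the exact channel order (whether red sits in the high or low five bits) varies by platform, so treat it as illustrative rather than a reference implementation.

```c
#include <stdint.h>

/* Pack 8-bit-per-channel color down to ARGB1555: one alpha bit in the
   top position, then 5 bits each of red, green, and blue.
   (Illustrative layout; some hardware puts red in the low bits.) */
static uint16_t argb1555_pack(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((a ? 1u : 0u) << 15) |
                      ((uint16_t)(r >> 3) << 10) |
                      ((uint16_t)(g >> 3) << 5)  |
                       (uint16_t)(b >> 3));
}

/* Expand a 5-bit channel back to 8 bits by replicating the high bits. */
static uint8_t expand5(uint16_t v)
{
    return (uint8_t)((v << 3) | (v >> 2));
}

static void argb1555_unpack(uint16_t p,
                            uint8_t *a, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *a = (p & 0x8000) ? 255 : 0;
    *r = expand5((p >> 10) & 0x1F);
    *g = expand5((p >> 5)  & 0x1F);
    *b = expand5(p & 0x1F);
}
```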
So I checked the latest spec for TGA v2.0, and sure enough it was a format that TGA supported. This was going to be great, because it would cut the storage of my uncompressed images in half (16 bits per pixel instead of 32).
standards vs implementation

There are a lot of cool features in that spec. Plenty of ways to store your own custom metadata inside each image. Plenty of flexibility over color formats and the like. Color correction tables. Gamma correction. Key colors. It is simple enough for a single programmer to implement in a couple of days. Looks like a slam dunk.
There are dozens, if not hundreds, of implementations of TGA encoding and decoding. I looked through a couple dozen of them, notably libav, ImageMagick, netpbm, FreeImage, SETI@home, GIMP, Quake, Free Space, and UWisc's libtarga.
None of them has implemented that specification in full. As far as I can tell, everyone uses a subset of features that amounts to a 'de facto' standard. All the fun metadata is never used. The many varieties of bit depths are never used. The de facto standard is a stripped-down version that stores RGB888 and ARGB8888 with a minimalistic header.
And since recent drawing programs rarely output any bit depth other than RGB888 or ARGB8888, and rarely emit any of that extra header data or use any of those features, the fact that other features and bit depths are valid is moot.
It isn't about what the standard says, it is about what you can get people to implement.
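For reference, this is roughly what that minimalistic header looks like. The field names below are my own, but the layout is the standard 18-byte TGA header; every multi-byte field is little-endian on disk, so it should be read field by field rather than fread() straight into the struct (padding and endianness will bite).

```c
#include <stdint.h>

/* The 18-byte TGA header as the de facto standard actually uses it. */
struct tga_header {
    uint8_t  id_length;        /* bytes of image ID after the header, usually 0  */
    uint8_t  color_map_type;   /* 0 = no palette, which is the common case       */
    uint8_t  image_type;       /* 2 = uncompressed truecolor, 10 = RLE truecolor */
    uint16_t cmap_first_entry; /* palette fields, ignored when color_map_type=0  */
    uint16_t cmap_length;
    uint8_t  cmap_entry_size;
    uint16_t x_origin;
    uint16_t y_origin;
    uint16_t width;
    uint16_t height;
    uint8_t  pixel_depth;      /* in practice: 24 (RGB888) or 32 (ARGB8888)      */
    uint8_t  image_descriptor; /* alpha bit count, plus top/bottom-left origin   */
};
```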
in C, portability implies reinventing the wheel

One bit of comedy when looking at all these implementations is how each one has defined its own types for unsigned 8-bit and little-endian 16-bit integers, and each one has defined its own nomenclature for the functions that read them from disk. Each one, in trying to be portable, is built upon little more than libc. And in so doing, each one reinvents the wheel.
- Free Space: ubyte, short, cfread_ubyte(), cfread_ushort()
- ImageMagick: unsigned char, unsigned short, ReadBlobByte(), ReadBlobLSBShort()
- libav: uint8_t, int, bytestream2_get_byte, bytestream2_get_le16()
- Blender: uchar, short
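And here, for the pile, is a sketch of what each of those projects ends up writing for itself. The names are mine, and a real implementation would want better error reporting; the point is only how small and how endlessly rewritten this code is.

```c
#include <stdint.h>
#include <stdio.h>

/* Yet another pair of little-endian readers. */
static int read_u8(FILE *f, uint8_t *out)
{
    int c = fgetc(f);
    if (c == EOF)
        return -1;
    *out = (uint8_t)c;
    return 0;
}

static int read_le16(FILE *f, uint16_t *out)
{
    uint8_t lo, hi;
    if (read_u8(f, &lo) != 0 || read_u8(f, &hi) != 0)
        return -1;
    *out = (uint16_t)(lo | ((uint16_t)hi << 8));
    return 0;
}
```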
Why did C never develop a thin layer on top of libc (like glib or gnulib tries to be) that became broadly accepted? Where is C's equivalent of Boost?
too much code

When I look at Savannah, GitHub, Gitorious, Google Code, and SourceForge, I realize that there is an implementation of every existing idea of simple to moderate complexity. (With the obvious exception of a feature-complete TGA 2.0 library.) There is an ocean of code. If it could all be cleaned up, sorted, indexed, documented, and collated, we could create the Grand Unified SDK, and any new project would be trivial or moot. But, more often than not, it is easier to begin again than to wade through it all, try to find the gems, understand their logic, work around their lack of documentation, and fix their bugs.
I envy PHP coders sometimes. Their standard lib is quite amazing.