
Part 46: Back To Work

January 6, 2012

After goofing off for the last three weeks (a book, a movie and a game every day!), my need to work on this project has reasserted itself. To get back into it, I decided to clean up a couple of long-standing nuisance items.

Overlay Graphics

My current 2D graphics support is only implemented under Windows. As I put more UI into the game, I need to have this work on Linux and Mac. I've put too much work into supporting those platforms to just let the code rot again.

The OpenGL code on all three platforms takes a block of memory and creates the overlay texture from it. Then I write that texture over the 3D graphics as the final step in building the scene. That's not a problem. The problem is producing the text and 2D graphics and getting them into the texture.

On Windows, I am using a really slow, ancient way to do this. I have two offscreen bitmaps, one for the RGB planes and one for the alpha plane. I use Windows GDI to write my text and graphics to these bitmaps (everything is written twice, since GDI won't write an alpha channel). Then I extract the three bytes per pixel from the RGB bitmap and the one byte from the alpha bitmap, and build the RGBA memory used to create the texture. This has to be redone whenever the overlay text changes.
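Just for reference, the repacking step itself is simple. Here's a minimal sketch (the names are hypothetical, and it assumes a 24-bit bottom-up DIB for the RGB bitmap and a tightly packed 8-bit alpha bitmap):

#include <cstdint>
#include <vector>

// Pack a 24-bit BGR DIB plus a separate 8-bit alpha bitmap into the RGBA
// memory the overlay texture is built from.
std::vector<uint8_t> buildOverlayRGBA(
  const uint8_t* rgb,      // BGR pixels, bottom row first (typical DIB layout)
  const uint8_t* alpha,    // one byte per pixel, also bottom row first
  int width, int height)
{
  std::vector<uint8_t> out(width * height * 4);
  int rgbStride = (width * 3 + 3) & ~3;   // DIB rows are padded to 4 bytes

  for (int y = 0; y < height; y++)
  {
    // flip vertically so row 0 of the texture is the top of the image
    const uint8_t* src = rgb + (height - 1 - y) * rgbStride;
    const uint8_t* a   = alpha + (height - 1 - y) * width;
    uint8_t* dst = &out[y * width * 4];

    for (int x = 0; x < width; x++)
    {
      dst[4*x+0] = src[3*x+2];  // R (a DIB stores BGR)
      dst[4*x+1] = src[3*x+1];  // G
      dst[4*x+2] = src[3*x+0];  // B
      dst[4*x+3] = a[x];        // A from the separate alpha bitmap
    }
  }
  return out;  // ready for glTexImage2D(..., GL_RGBA, GL_UNSIGNED_BYTE, ...)
}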

To implement this under Linux and Mac, I have three possible approaches:

  1. Following the same model as Windows, on Linux I can render XLib text and graphics to an offscreen bitmap and convert that to a texture. On the Mac, I can use Quartz text and graphics. I'd have to learn what I need out of these two libraries, which is a nuisance, but it would work. I'm not sure if I need a pair of bitmaps for graphics as I do under Windows. It depends on what they support in the way of rendering targets (more below.)

    This has one fundamental problem -- the resulting graphics will not be pixel-for-pixel identical across the platforms. Different fonts, different font rendering techniques and different 2D graphics will create different results. The differences will be minor and might not matter, but having something like a text label that fits on Windows but not on the Mac would be annoying.

  2. I could do all my own text and graphics in memory and create the texture from that. To do text, I could use the FreeType rendering library, which takes TrueType and other font formats. For 2D graphics, I'm currently just using filled rectangles and scaled images, so that's not hard. If I wanted a richer set of graphics primitives, I'm sure I could find code for it somewhere.

    The advantage of this approach is that the same code runs on all three platforms, the results are identical, and I don't need to learn more XLib and Quartz. The disadvantage is poor performance and perhaps poor quality of text. I also have to package the fonts I use with the demos, making for a larger distributable. If I ever get around to supporting the iPad, this would be a pretty bulky, memory-intensive, heavy-handed way of solving the problem.

  3. To improve performance, I could use the FreeType library, but create textures out of the individual letters, make the overlay texture a rendering target and use OpenGL to write the text. I'm not sure how much of a performance improvement this would be, and there are a number of issues.

    I would have to manage a cache of character images, swapping them in and out of a large texture in display memory. I'd have to learn how to use a texture as a rendering target, which I'm not sure even works under OpenGL 2.1. To draw images, I'd have to convert them into textures and draw textured rectangles. More complex shapes would have to be implemented as triangle lists. It's a lot of extra code to write, and only worth it if the performance of doing graphics in the CPU is horrible. There's a sketch of the OpenGL side of this just below.
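For what the OpenGL side might look like: under OpenGL 2.1, rendering into a texture means the EXT_framebuffer_object extension (it only became core in 3.0). A hedged sketch, assuming something like GLEW has loaded the entry points and that overlayTexture is the existing overlay texture:

#include <GL/glew.h>

// Try to make the overlay texture a render target.  Returns the framebuffer
// object id, or 0 if the driver can't do it (fall back to CPU rendering).
GLuint makeOverlayTarget(GLuint overlayTexture, int width, int height)
{
  GLuint fbo = 0;
  glGenFramebuffersEXT(1, &fbo);
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

  // attach the overlay texture as the color buffer
  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
    GL_TEXTURE_2D, overlayTexture, 0);

  // not every 2.1-era driver supports every texture format as a target
  if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
  {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDeleteFramebuffersEXT(1, &fbo);
    return 0;
  }

  // anything drawn while the FBO is bound (textured quads for cached glyphs,
  // filled rectangles, etc.) lands in the overlay texture, not on screen
  glViewport(0, 0, width, height);

  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
  return fbo;
}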

I've downloaded the FreeType library and managed to compile and run it, so that looks usable. I haven't gotten far enough to get performance numbers. There are other issues I'm not sure about.

Figure 1: Anti-aliased text
Figure 2: Anti-aliased in RGB
Figure 3: Anti-aliased using the alpha plane

Nice Text

If I draw a string with Windows (or any of the other platforms), it's going to use multiple levels of grey to anti-alias the text. In fact, it's going to combine the text color with the background color (Figure 1.)

I don't want text to do that. Windows is blending the text in with the RGB image, and I want a transparent overlay image that blends with the background using the alpha plane. If I took the text anti-aliased in RGB and just masked it in alpha to combine it with the screen, I'd get something like Figure 2.

I want an RGBA image, where the text color is constant in RGB, and the alpha plane has varying values to implement anti-aliasing. This will combine correctly over arbitrary background graphics (Figure 3.)
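The blending itself is just ordinary "source over" compositing. In the OpenGL draw of the overlay, it amounts to this (a sketch, assuming the overlay texture holds straight, non-premultiplied alpha and identity matrices are in effect):

#include <GL/gl.h>

// Draw the overlay texture over the finished 3D scene:
//   result = overlay_rgb * overlay_alpha + scene_rgb * (1 - overlay_alpha)
void drawOverlay(GLuint overlayTexture)
{
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, overlayTexture);

  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

  // a full-screen quad in normalized device coordinates
  glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
  glEnd();

  glDisable(GL_BLEND);
  glDisable(GL_TEXTURE_2D);
}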

To get Windows to do this properly, I'm drawing the text into the alpha plane only. There, Windows combines the background (which it thinks is black) with white text and produces the correct alpha values. For the RGB planes, I just draw a solid block of color. When the overlay is sent to the display, the solid color only shows where the alpha values are non-zero, giving the correct appearance.

This works for simple cases, but not when I want to write text over other graphics. To do it right, I need to set the correct RGBA values for each pixel of the text, rather than the solid-block-plus-alpha approach I just described.

I can implement this correctly with FreeType when I copy in the character image it produces (sketched below). I'm not sure I can get what I want with XLib or Quartz.
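With FreeType, that copy is straightforward. A minimal sketch, assuming the face has already been opened with FT_New_Face and sized with FT_Set_Pixel_Sizes, and that "overlay" is the width*height*4 byte buffer the texture is built from:

#include <ft2build.h>
#include FT_FREETYPE_H
#include <cstdint>

// Draw one character into the RGBA overlay: glyph coverage goes into alpha,
// the RGB stays the constant text color.  Error handling is omitted.
void drawChar(uint8_t* overlay, int overlayWidth, int overlayHeight,
              FT_Face face, unsigned long ch, int penX, int penY,
              uint8_t r, uint8_t g, uint8_t b)
{
  // render the glyph to an 8-bit coverage bitmap
  if (FT_Load_Char(face, ch, FT_LOAD_RENDER) != 0)
    return;

  FT_Bitmap* bmp = &face->glyph->bitmap;
  int left = penX + face->glyph->bitmap_left;
  int top  = penY - face->glyph->bitmap_top;    // penY is the baseline

  for (int row = 0; row < (int) bmp->rows; row++)
  {
    int y = top + row;
    if (y < 0 || y >= overlayHeight)
      continue;
    const uint8_t* src = bmp->buffer + row * bmp->pitch;

    for (int col = 0; col < (int) bmp->width; col++)
    {
      uint8_t coverage = src[col];
      int x = left + col;
      if (coverage == 0 || x < 0 || x >= overlayWidth)
        continue;

      uint8_t* dst = overlay + (y * overlayWidth + x) * 4;
      dst[0] = r;                                // constant text color
      dst[1] = g;
      dst[2] = b;
      if (coverage > dst[3])
        dst[3] = coverage;                       // coverage becomes alpha
    }
  }
}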

LCD Anti-Aliasing

Figure 4: LCD text

It turns out though, that this is not all there is to rendering text. If you enlarge a typical Windows screen image, you see Figure 4. This is "ClearType" anti-aliasing, which takes advantage of how an LCD monitor displays color.

Each pixel on the screen is in fact a row of three dots -- red, green and blue. They are small enough that your eye integrates them into a single colored dot. The thing is, if you draw a red pixel, you are only lighting up the left-most dot of the triple. In effect, you have a third of a pixel. ClearType uses this to improve anti-aliasing, which is why Figure 4 shows red edges on the left and blue edges on the right of each black stroke. Surprisingly, your eye will not notice these color fringes, and the text still looks black.

The FreeType web pages say that:

The colour filtering algorithm of Microsoft's ClearType technology for subpixel rendering is covered by patents. Note that subpixel rendering per se is prior art; using a different colour filter thus circumvents Microsoft's patent claims.

I don't know what I'd have to do to implement a different color filter with their package, but Ubuntu Linux text is using this same technique, so I guess I could search for the code.
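For the record, FreeType does have hooks for this, though it's moot for my overlay for the reason below. A hedged sketch -- and note that in FreeType builds of this era, the LCD filtering code is only compiled in if FT_CONFIG_OPTION_SUBPIXEL_RENDERING was defined, precisely because of the patent issue:

#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_LCD_FILTER_H

// Render a glyph with FreeType's subpixel support.  The result has
// pixel_mode FT_PIXEL_MODE_LCD and is three coverage bytes (R, G, B)
// per screen pixel, i.e. three times the pixel width.
void renderLcdGlyph(FT_Library library, FT_Face face, unsigned long ch)
{
  // pick one of the built-in color filters (this is the part that could be
  // swapped to avoid the ClearType filter)
  FT_Library_SetLcdFilter(library, FT_LCD_FILTER_DEFAULT);

  FT_Load_Char(face, ch, FT_LOAD_RENDER | FT_LOAD_TARGET_LCD);

  FT_Bitmap* bmp = &face->glyph->bitmap;
  (void) bmp;  // bmp->width is in subpixels here, 3 * the pixel width
}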

More to the point though, I can't do this with text in my overlay plane. The final color is a combination of the overlay color and the scene color. The sub-pixel anti-aliasing could only be done on the combined pixels, which I don't have except in the display.

Unicode

The other thing I'd like to fix before I go much further on the demo is the handling of non-English characters. Every time I start a new project with Visual C++, it defaults to Unicode, and every time, I turn it off and use old-fashioned ASCII. This is because the last time I did anything with wide characters (probably 15 years ago), it was a disaster.

I was particularly embarrassed though when I created the simple chat client in Part 39 and could not support any other languages. I felt like the "Ugly American" who can't be bothered to deal with the rest of the world. Going forward, this really needs to change.

I Googled around a bit and decided things didn't look too bad under Windows. In my XML files, I can use UTF-8, which encodes Unicode characters as an extension of ASCII. Windows seems to handle Unicode in file names correctly, and the "wide" versions of the standard library file routines worked fine. I created a file with Unicode characters and a Unicode name, saw the file name correctly in the file manager, and edited it with both Notepad and Visual C++.
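As a sanity check, the Windows side of the test looks something like this (a minimal sketch, with most error handling left out):

#include <windows.h>
#include <cstdio>
#include <string>
#include <vector>

// Open a file by its Unicode name, read the UTF-8 contents, and convert
// them to wide characters.
std::wstring readUtf8File(const wchar_t* fileName)
{
  FILE* f = _wfopen(fileName, L"rb");     // wide version of fopen
  if (f == NULL)
    return std::wstring();

  fseek(f, 0, SEEK_END);
  long size = ftell(f);
  fseek(f, 0, SEEK_SET);
  if (size <= 0)
  {
    fclose(f);
    return std::wstring();
  }

  std::vector<char> utf8(size);
  fread(&utf8[0], 1, size, f);
  fclose(f);

  // how many wide characters will the UTF-8 bytes become?
  int wideLen = MultiByteToWideChar(CP_UTF8, 0, &utf8[0], (int) size, NULL, 0);
  std::wstring wide(wideLen, L'\0');
  MultiByteToWideChar(CP_UTF8, 0, &utf8[0], (int) size, &wide[0], wideLen);

  return wide;
}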

This is all under Windows 7, and I have no idea if it's handled as well under Windows XP.

Changing all my code to deal with this is a pain, but shouldn't take too long. I can change my string class and the interfaces to the major classes, and the compiler should flag all unconverted uses as errors. The only really insidious errors will come from computing the size of character arrays incorrectly when I'm allocating them. Hopefully, there isn't much of that.
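A contrived example of the kind of sizing bug I mean:

#include <cstdlib>
#include <cwchar>

// The insidious case: the byte count and the character count are no longer
// the same thing once strings are wide.
wchar_t* copyString(const wchar_t* source)
{
  size_t len = wcslen(source) + 1;        // characters, including the null

  // WRONG: allocates len bytes, not len characters
  //   wchar_t* copy = (wchar_t*) malloc(len);

  // RIGHT: allocate len characters
  wchar_t* copy = (wchar_t*) malloc(len * sizeof(wchar_t));
  wcscpy(copy, source);
  return copy;
}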

The missing piece will be user input. To support cut and paste from the "Charmap" application, or direct keyboard entry of Unicode codes, I'll need to add some Windows-specific code to the framework. I'm not even sure what UI to use. On Windows 7, holding down the Alt key and typing a decimal code on the keypad works, but under Linux, you type Ctrl-Shift-U and then the hex code on the ordinary keys. I'm not sure what you do on the Mac.

The biggest problem is that I have no way to test non-English input. My U.S. keyboard just won't generate any non-ASCII characters.

Complications

One complication under Windows is that the system seems to want to prefix UTF-8 files with a "byte order mark". The UTF-8 documentation warns against doing this, since many Unix programs look at the start bytes of a text file to decide how to process it. Since I'm doing my own file parsing, I can accept the mark if Notepad or another program creates it, and do without it when it's missing.
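Handling that is just a check at the front of the parser. A minimal sketch:

#include <cstddef>

// Tolerate an optional UTF-8 byte order mark (EF BB BF) at the start of a
// file buffer: skip it if present, carry on if not.
const char* skipUtf8Bom(const char* data, size_t& length)
{
  if (length >= 3 &&
      (unsigned char) data[0] == 0xEF &&
      (unsigned char) data[1] == 0xBB &&
      (unsigned char) data[2] == 0xBF)
  {
    length -= 3;
    return data + 3;   // parse from just past the mark
  }
  return data;         // no mark; parse from the start
}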

I then turned to Linux. First, I tried to use WinSCP to copy my little test program and Unicode-named file to my Linux machine. WinSCP doesn't list the Unicode file name correctly (it's all '?' characters), and won't copy the file. Sigh.

Once I get the test file over there, Ubuntu GEdit handles it correctly. However, my test program doesn't compile there, since none of the Windows wide stdio routines are implemented. With more research, I see I'm supposed to set the "locale" of the program to UTF-8. Even with this, it's not going to be one-to-one with Windows. That will make things a bit more difficult in my code, and require me to write more cover routines to hide the differences between operating systems (a sketch of one is below).
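Here's a sketch of the kind of cover routine I mean -- opening a file given a wide-character name. It assumes internal strings stay wide, and on Linux it leans on the C library for the conversion, which is why the locale has to be set to UTF-8 there:

#include <cstdio>
#include <cwchar>
#include <cstdlib>

// Open a file whose name is kept internally as wide characters.  On Windows
// the file system API is UTF-16, so use _wfopen directly.  On Linux (and
// presumably the Mac) file names are byte strings, so convert the wide name
// to the locale's multibyte encoding (UTF-8) first.
FILE* openWideName(const wchar_t* name, const wchar_t* mode)
{
#ifdef _WIN32
  return _wfopen(name, mode);
#else
  // requires setlocale(LC_ALL, "en_US.UTF-8") or similar at program startup;
  // fixed-size buffers just to keep the sketch short
  char nameBytes[1024];
  char modeBytes[16];
  wcstombs(nameBytes, name, sizeof(nameBytes));
  wcstombs(modeBytes, mode, sizeof(modeBytes));
  return fopen(nameBytes, modeBytes);
#endif
}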

Then I ran into this document, which makes the whole situation sound pretty horrible. For one thing, I did not realize that there are ambiguous situations in Unicode. For example, some languages have multiple accent marks that can be added to a single letter. If you add accent 1, then accent 2, it produces the same letter as adding accent 2 then accent 1, but different UTF-8 strings, different file names, etc. Strings have to be "normalized" somewhere, and it's not clear if Windows and Linux and Mac would even do the same thing.
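A concrete example of the ambiguity, using two spellings of an accented "e":

#include <cstdio>
#include <cwchar>

int main()
{
  // "cafe" with a single precomposed accented code point...
  const wchar_t* precomposed = L"caf\u00E9";     // U+00E9
  // ...and as a plain "e" followed by a combining acute accent
  const wchar_t* combining   = L"cafe\u0301";    // U+0065 U+0301

  // prints "different", even though the two strings are canonically
  // equivalent and display identically
  printf("%s\n", wcscmp(precomposed, combining) == 0 ? "same" : "different");
  return 0;
}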

It's also not clear to me whether I have enough information in font tables to display these strings correctly or move a cursor over them in a text input field. And again, with no way to test, this is a hard problem. I also don't know what to do about sorting Unicode file names in a directory list, etc.

I haven't done any research on the handling of Unicode on the Mac. Some forum comments that Google brings up show people having problems with file I/O or editing files imported from Japan, etc. The FreeType documentation mentions both Unicode and "Apple Roman" character tables.

I'm not looking forward to messing with this on all three platforms, but I feel I have to do something. I'll probably switch to wide characters for my internal data structures and UTF-8 for files. Input and output on the various platforms will depend on what is easy. I don't want to turn this into a project in itself.

If anyone knows of sample code that reads a Unicode file and displays it with low-level text (not use of a system control), for Windows, Linux or Mac, please point me to it.

