The Animal Farm

July 29th, 2010

Abe Lincoln and the Velociraptor

I wrote up this pitch document late last night for a new game idea. The premise is that Abraham Lincoln travels back in time and meets a velociraptor, and the two fight crime together.

I don’t like to brag, but it’s solid gold.

Bugs remaining: 3.

July 26th, 2010

Dredging Up the Past

I’m sitting up avoiding sleep for no bloody good reason, and I realized that the site’s six-year anniversary passed without mention. If you look at the ‘Archives’ on your right, you will notice that they go back as far as May 2004, and May 2010 recently passed us by. For newcomers and people with bad memories, the site is actually considerably older. A good chunk was lost in a tragic WVU-servers-suck accident, thus the exact birthdate is lost to time.

Before I go on: six years.

And still no more readers than we had on day 1.

It’s traditional - and by this I mean it may be traditional, but I honestly don’t remember - for me to recap the year’s major events. A good portion of this year is blurry, though, and it’s too late for me to go rummaging through the archives. I’ll recall what I can and interject wild fallacies where appropriate.

(1) Zach gets married. I don’t know if he actually mentioned this anywhere. I think he did.
(2) Zach vanishes off the face of the earth. His last post was over four months ago.
(3) It is discovered that Zach was eaten by a boar.
(4) My employment at Emergent ends.
(5) My employment at Sparkplug begins.
(6) I release two games on XBLIG and one on Android.
(7) I get super rich and start living on a yacht.
(8) The yacht is attacked by pirates and sinks. I rejoin my life as a land-lubber and live in a cushy apartment in Durham.
(9) I discover a board game obsession. I take over hosting of the weekly Chapel Hill/Carrboro board game night.
(10) Ricky signs on as a co-author to the blog.
(11) Ricky vanishes. We are questioning local boar for answers.
(12) Indie films! Small affairs to advertise one of the XBLIG games, but fun all the same.
(13) I break down and join Twitter. My mother is forever shamed.
(14) I watch lots of movies but review little. Still afraid of that one time six years ago when Zach snapped at me. I still maintain I am right about all the movies I’ve ever discussed, however.
(15) Don’t think for a moment that, although I had fun with Inception, I can’t pick it apart to death.
(16) I become a writer for 4 color rebellion.
(17) Kinda. I wrote one article with the promise of more, and no other articles have materialized. I still maintain hope.
(18) I start playing the Ukulele. That is not a lie.
(19) Was it this year that I started playing the drums? Time kind of munges together, but that sounds right.
(20) I form a rock-jazz-fusion band called Wicked Kite Flier/Girls in Skirts on Bikes. We tour the nation and various other nations. The word for that is “international.”
(21) I learned to cook a handful of meals.
(22) Lots and lots of technical posts. Seriously, too many.

I think you get the point. A lot happened. Or not a lot happened, but I can stretch it out and insert lies to make it seem like a lot happened.

And now it’s time for me to stop avoiding sleep. Good night, and let’s hope for another action-packed year full of suspense, intrigue, and intensity.

I really wanted to end this post with a mock TV show trailer phrase: “And this year. Someone. Will. Die.” But I don’t want to jinx it. Please nobody die.

July 25th, 2010

A Look Into the Iron Heinrich Material System

Though all of Iron Heinrich’s gameplay takes place on a 2D plane, the entire rendering system is 3D - think Viewtiful Joe or Shadow Complex. This allows us a visual style that isn’t possible with 2D, and it also affords us lighting, shadows, camera tricks, and other neat things. It is, however, much more complex.

There are plenty of points of complexity, but in this post I’m going to focus on the material system. The material system is designed to allow the artist, Nate, to set up the visual style of characters and levels.

The goals of the material system are as follows:
(1) Completely data driven. If Nate wants to change a shader or a shader parameter, he shouldn’t have to talk to the programmers at all.
(2) It must support setting shader parameters that are hand-specified by the artist as well as parameters that the engine delivers.
(3) It must support multiple render passes to make things like full-screen post-fx simple and, again, data driven.

What it will not support:
(1) Dynamic shader generation. There’s plenty of work that could go into writing uber-shaders or fragment based shaders, but our shader requirement is minimal (we hope), so for now Nate will be driving those all by hand.
(2) Rapid iteration - Nate will have to restart the application to see his material changes. We may be able to shoe-horn this feature in, but it’s not in the design.

Materials, Material Passes, and Render Passes
There are three primary players in the system, which I’ll overview briefly and then discuss in more detail.

Materials are the top-level item, and each model has its own Material - for right now only a single Material, though that may change later. At their core, Materials are just a collection of Material Passes.

Material Passes are where most of the data gets specified. These specify the shader effect that gets used along with any parameters.

Render Passes represent each pass of the renderer. During a Render Pass, the Material Pass of the same name is evaluated. Render Passes themselves control the render state, the render target, and the current camera that gets used.

Now let’s talk about each component with a bit more depth.

There’s not much more to say about Materials. They are a collection of Material Passes, which do all the heavy lifting.

Material Passes
Material Passes do the bulk of the work. During the evaluation of a Render Pass, Material Passes with the same name as the Render Pass are selected for execution - if they exist. If a model doesn’t have a Material Pass for the current Render Pass, it won’t be rendered.

The Material Pass specifies two primary things:
(1) The effect (shaders) that will be used.
(2) The parameters that will get passed into the shader.

#2 requires more elaboration, since Material parameters take on a lot of forms. In general, there are two kinds of Material parameters:

The first kind is a hand-specified value. This could be a uniform float, the file name of an input texture, or a matrix of some kind. They’re typed key-value pairs, where the key is the name of the shader parameter (the uniform) and the value is, well, the value.

The second kind is an engine-specified value, what I call an ‘auto’ parameter. This could be a dynamically generated texture, the current camera matrices, or the name of a light. These are just names of parameters that the engine handles, and when one is seen the renderer retrieves the appropriate value and passes it on.
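To make that concrete, here’s a rough sketch of how a Material Pass might resolve its parameters at render time. This is illustrative pseudo-engine code, not our actual implementation - the names and types are made up:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a Material Pass resolving its two parameter kinds.
// Hand-specified parameters are plain typed key-value pairs set by the artist;
// 'auto' parameters are just names whose values the engine supplies at render time.
public class MaterialPassSketch
{
    // Hand-specified values: uniform name -> value (a float, a texture name, a matrix, ...)
    public Dictionary<string, object> HandParams = new Dictionary<string, object>();

    // Auto parameters: only the uniform names; the engine delivers the values.
    public List<string> AutoParams = new List<string>();

    // Resolve everything into the final set of uniforms to push to the shader.
    public Dictionary<string, object> Resolve(Func<string, object> engineLookup)
    {
        var uniforms = new Dictionary<string, object>(HandParams);
        foreach (string name in AutoParams)
            uniforms[name] = engineLookup(name);   // e.g. "ViewProjection", "TextureA"
        return uniforms;
    }
}
```

In a data file, the artist might hand-specify something like a blur amount while listing the camera matrices as auto parameters; the renderer fills the latter in without the artist ever touching code.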

Render Passes
A Render Pass is a single run at rendering the scene. During a Render Pass, the scene collects all the Material Passes of the same name for rendering, sets up the render state/render target, and renders all the models that have the appropriate Material Passes attached to them.

Render Passes keep track of the following:
(1) The render state. These are various parameters that are used to do things like turn on/off culling, change blend modes, etc.
(2) The render target. This is a named target that can be rendered to and then used later as a parameter specified in a Material Pass.
(3) The size of the render target. We don’t want to always draw to screen-sized render targets, since sometimes that’s a waste of memory.
(4) The camera. Each Render Pass can render from a different camera. This is necessary for doing things like good water rendering.
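In code form, a Render Pass is little more than a bag of those settings. A hypothetical sketch (again, not our actual class - the names and string-typed fields are invented for illustration):

```csharp
// Hypothetical sketch of the data a Render Pass carries. A real engine would
// reference actual render state, render target, and camera objects here.
public class RenderPassSketch
{
    public string Name;          // Material Passes with this name get drawn in this pass
    public string RenderState;   // culling on/off, blend modes, etc.
    public string RenderTarget;  // a named target ("TextureA", "Screen", ...)
    public int TargetWidth;      // render targets need not be screen-sized
    public int TargetHeight;
    public string Camera;        // each pass can render from its own camera
}
```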

Use Case #1
That’s a big wall of text and no doubt hard to grasp, so I’m going to provide a couple use cases so you can see how this all plays out.

This first use case will be rendering a scene to a color buffer and then applying a post effect to do a blur. It’s not terribly hard:

We have two Render Passes with the following parameters:
Render Pass RP1:
Render State: Default
Render Target: TextureA
Camera: Default Scene Camera

Render Pass RP2:
Render State: Default
Render Target: Screen
Camera: Orthographic Screen-Sized Camera

At this point, the entire scene is filled with normal entities. These entities all have very generic Materials, where each Material has a single Material Pass - named RP1, which designates that generic rendering will occur during the first render pass.

There is also a special item in the scene - a full-screen quad with a Material possessing a Material Pass named RP2, indicating it will only be rendered during the second Render Pass. This Material Pass has some special parameters:
Texture: TextureA
Effect: BlurEffect

Thus, after the entire scene is rendered, the quad will be rendered to blur TextureA (and since it is part of RP2, the result of that blur will go to the screen).

Use Case #2
This will be a toon effect and has the following Render Passes:

Render Pass RP1:
Render State: Default
Render Target: TextureA - a color texture
Camera: Default Scene Camera

Render Pass RP2:
Render State: Default
Render Target: TextureB - a normal texture
Camera: Default Scene Camera

Render Pass RP3:
Render State: Default
Render Target: Screen
Camera: Orthographic Screen-Sized Camera

The Materials are largely the same as the first use-case. Even the full-screen quad is nearly identical, though it gets passed in both TextureA and TextureB and has a different effect.

The system has room for improvement. A shader fragment system would be nice but is a bit beyond our needs. It also necessitates writing Materials just to throw in test objects, which can be cumbersome (we have a ‘test’ Material to slap on for those instances). Binding Material Pass Names and Render Pass Names together is perhaps a bit of a mistake but works for our needs. Documentation also needs to be in place to remember all the different effect parameters and such.

Flaws aside, the system has been treating us pretty nicely. Nate can autonomously add complicated shader effects and change the rendering system at his whim. Later, it shouldn’t be hard to write tools to make the process even smoother.

I hope you don’t mind, we’ve reinstated your license to kill.

July 24th, 2010

Dollhouse
Another Joss Whedon show I like. How shocking. <-- That's sarcasm, just so we're clear.

I wasn't expecting much out of Dollhouse. The premise seemed shaky at best, and Eliza Dushku's performance in Buffy was stomach-wrenchingly abysmal. Though I am a huge fan of most of Whedon's works, the existence of Buffy Season 6 and Angel shows he is fallible, and thus I waited a good long while before bothering.

I shouldn't be surprised that my expectations were wrong. Scratch that. My expectations are generally solid. All the same, they were wrong here. It's one of Whedon's best works. Not Firefly quality, but on par with the memorable Buffy seasons.

Like all of Whedon's successes, the show understands the necessity of creating good characters. It's hard to do any proper development when half the cast takes on a different persona each episode, but the supporting cast picks up the burden splendidly. Topher, the charming mad scientist, spearheads some excellent comedy in an otherwise somber-toned show. Adelle does stalwart matron without missing a beat, and Ballard (at least initially) perfectly executes the struggling FBI agent fighting against all hope. Dr. Saunders and Boyd are both somewhat flat, though they serve their purpose adequately.

The dolls themselves are largely forgettable, even Dushku's character - Echo never quite established enough of a personality for me to care about her, and her original self wasn't particularly notable or likable. The standout here was Sierra, who took on some interesting personas and beyond that was a well-developed, interesting person. Plus she had an accent a good majority of the time, which always makes my heart do funny things.

Moving past the characters, the story itself was admirable. There were some expected twists (November's initial role being the most obvious), but happily a lot of cliches were avoided. I was especially pleased that there were absolutely no government conspiracies. The first season's antagonist Alpha was inventive and menacing. The LA Dollhouse's growth and its response to the Rossum corporation played out naturally, and it never felt like anything was forced/rushed - aside from the end, where the "villain" is revealed; his motives were a tad contrived. The leaps into the future were beautifully done and managed to provide some closure to a show canceled before its entire story could be told.

In general, I would say there were a lot of top-notch story arcs, with my only real complaint here being the lack of strong villains - they're revealed too late to properly antagonize and typically do too little. This is something Buffy did very well, so I'm a bit surprised Whedon didn't carry that trend forward.

In terms of production quality, the show doesn't have any hiccups. Some of the fight scenes are done extremely well, with Ballard hurting a lot of poor folk. The smattering of special effects were fun, and the "forward leap" episodes did a lot with a little.

I recommend this show. It probably won't make a Whedon fan out of someone who didn't like Firefly, but those someones aren't very smart anyway. It stays true to his general quality of work, and although it's not his best, it's not far off.

Oh, and Eliza Dushku does a much better job of acting. It's clear she's grown quite a lot.

Have you seen my cabinet of inappropriate starches?

July 17th, 2010

Recycle Array

When I was writing Word Duelist, I was scrambling for a way to improve performance and reduce garbage generation. One area where this was prominent was the particle system, a system which would repeatedly generate lots of light-weight objects and discard them soon after.

After doing a little research, I stumbled upon a handy little data structure that works like the following:
Creation: Create a fixed pool of memory (with a maximum size) with pre-allocated ‘dummy’ objects.

Addition: Get an item from the next free spot in the pool and let the caller fill that item out appropriately. One important side-effect is this may involve exposing more of your object than you’d like since you won’t be using the constructor again.

Removal: Swap the item to be removed with the last valid item in the array and decrement the size.

The data structure has the following characteristics:
(1) All memory is allocated up-front.
(2) O(1) item “addition”
(3) O(1) item removal
(4) O(N) search, O(1) random access
(5) No garbage generation. We’re working with a fixed pool where items don’t actually get deallocated until the array is destroyed.
(6) Unstable - the order of the objects will change

#6 is worth mentioning - the data structure does not provide any guarantee that the order you add items will be the order those items are in at any time. Thus, this is best used for something where you don’t care about the order, like a particle system.

Here’s a (admittedly rough) mockup implementation:

public class RecycleArray<T>
{
    public delegate T CreateRecycleArrayItemDelegate();

    private T[] mItems;
    private int mMaxSize = 0;
    private int mSize = 0;

    public int Size { get { return mSize; } }
    public int MaxSize { get { return mMaxSize; } }

    /// Creates the array, filling it using a delegate which creates new objects of the generic type T
    public RecycleArray(int maxSize, CreateRecycleArrayItemDelegate creator)
    {
        mMaxSize = maxSize;

        mItems = new T[maxSize];
        for (int i = 0; i < maxSize; ++i)
            mItems[i] = creator();
    }

    /// Creates the array, filling it with an initial set of items
    public RecycleArray(T[] initialItems)
    {
        mItems = initialItems;
        mMaxSize = initialItems.Length;
    }

    public T NewItem()
    {
        if (mSize == mMaxSize)
            throw new IndexOutOfRangeException("Can not add a new item to the array. It is full.");

        return mItems[mSize++];
    }

    public void RemoveAt(int index)
    {
        if (index >= mSize)
            throw new IndexOutOfRangeException("Can not remove item at this index. It is outside the array bounds.");

        // Swap the item being removed with the last item in the array,
        // then shrink the live range by one.
        T temp = mItems[mSize - 1];
        mItems[mSize - 1] = mItems[index];
        mItems[index] = temp;
        --mSize;
    }

    public T this[int index]
    {
        get
        {
            if (index >= mSize)
                throw new IndexOutOfRangeException("Can not get item at this index. It is outside the array bounds.");

            return mItems[index];
        }
    }
}
Just something I slapped together in 20 minutes, so there’s clearly room for improvement. If someone has a better name than “Recycle Array” I’d be happy to change it - I’m not terribly happy with the name.
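To show the intended usage pattern, here’s a self-contained toy version driving a made-up particle type. The Particle type is invented for illustration, and the pool is a stripped-down copy of the idea above:

```csharp
using System;

// Made-up particle type for illustration - the point is that it's a
// light-weight object we'd otherwise allocate and discard constantly.
public class Particle
{
    public float X, Y, Life;
}

// Stripped-down copy of the recycle-array idea, specialized to Particle.
public class ParticlePool
{
    private Particle[] mItems;
    private int mSize;

    public int Size { get { return mSize; } }

    public ParticlePool(int maxSize)
    {
        // All memory is allocated up-front; no garbage is generated later.
        mItems = new Particle[maxSize];
        for (int i = 0; i < maxSize; ++i)
            mItems[i] = new Particle();
    }

    // "Add" hands back a pre-allocated particle for the caller to fill out.
    public Particle NewItem() { return mItems[mSize++]; }

    // Removal swaps with the last live particle; order is not preserved.
    public void RemoveAt(int index)
    {
        Particle temp = mItems[mSize - 1];
        mItems[mSize - 1] = mItems[index];
        mItems[index] = temp;
        --mSize;
    }

    public Particle this[int index] { get { return mItems[index]; } }
}
```

One gotcha worth noting: because removal swaps the last live item into the removed slot, you should iterate backwards (or re-examine the current index) when culling dead particles, or the swapped-in particle gets skipped for a frame.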

Found some awesome games in the XBLIG playtest queue.

July 16th, 2010

See the Light (Android) Breaks 1000 Trials…

…Still only 16 sales.


July 16th, 2010

Weak Writing

I’m not a professional writer, but I spend many contemplative hours wondering how to improve my written and oral communication.

I’d like to hammer on an ugliness in writing that I caught myself repeating continuously in the last post: the phrase, “I think,” plus its brethren, “I believe,” and “In my opinion.”

It is, in short, a weak phrase - a sign of insecurity in the words. It’s almost universally unnecessary.

The fact that you are declaring something means you think it, whether it is something you can demonstrably prove or something which is rooted in logical arguments, belief systems, or personal observations. In either of these instances, the only purpose of the phrase is to soften the language - to purposefully seem less than certain. Consider:

(1) I think a course like that would be useful.
(2) A course like that would be useful.

Phrase #1 seems easier to swallow - it’s injecting the human author into the writing and creating an entity which can be ignored, opposed, or supported. Phrase #2 lays out a blanket statement, leaving the human unexposed and letting the writing stand alone, though any rational person knows there’s an author behind the work. They are both identical in function, but #1 comes off unconvinced whereas #2 says what it has to say without reservation.

Softening phrases plague forum posts, where without careful wording it’s trivial to incite a flame war. After you’ve spent enough time regularly communicating with others and learning all the clever tricks we use to avoid offending people, it’s hard to move into a mindframe where it’s OK to just say things and let it be understood that, yes, this is your belief, but it doesn’t need superfluous decoration to make a point safely. In the end, such phrases hurt the writing, both by padding it with fluff and by stripping away its apparent conviction.

You’ve failed me yet again, Starscream.

July 16th, 2010

Practical Computer Science

Computer Science programs generally don’t train software engineers; they train a mix of quasi-mathematicians/engineers who happen to know programming languages. This is fine - there are skills learned here that software developers need to know to be effective and to really grok software systems, but there are a ton of essential points that are left as exercises for the student. As a result, a lot of graduates are left at a disadvantage.

I’ve thought about what is missing in regard to education, and I think a lot of it could be tackled by one focused course. I’ve talked about this topic before, but I stumbled on a blog post by ex-Emergent TD Shaun Kime that has me thinking about it again. I honestly only glossed over his post, so some of what I write here may be an inferior version of what he’s already written, but I wanted to think it through originally without inadvertently yanking his ideas.

Anyway, here are some of the topics I think need coverage that don’t get nearly enough:

(1) Debugging
Would you believe I didn’t use a debugger competently until my last year of undergrad? Would you believe that I was actually lucky here - that a lot of my colleagues had graduated without ever properly using one? Would you believe it wasn’t until working at Emergent some three years later that I learned of some of the more powerful features like data breakpoints and conditional breakpoints? I don’t want to think about how many hours I wasted hunting down bugs through careful placement of print statements, and this could have been alleviated with an hour’s introduction. Plus debuggers integrated into IDEs are the most useful tools in a developer’s toolkit. Which leads me to my next item.

(2) IDEs
Too many universities stick strictly to a Linux command-line. Which is interesting, since I’ve never once been in a job where I worked primarily with the command-line. Visual Studio is one of the grandest developer tools in existence, and there wasn’t a single WVU professor that talked about it. Or Eclipse. Or XCode. Or Codeblocks. Or anything. If I hadn’t developed heavily outside of school, I would’ve thought emacs was the only environment out there. And I would never have gotten hired.

(3) Revision Control
Every project I write goes into a revision control repository of some kind - SVN or Perforce or whatever. Revision control is something every competent software company uses. And it isn’t entirely trivial - there are topics like merging and branching and forking that generally require a little bit of hand-holding the first few times. It helps more with larger projects, but I’m of the opinion every developer should adopt it for solo projects as well.

(4) Feature/Bug Tracking
Real companies don’t have giant TODO lists scribbled on paper somewhere. They have databases that keep track of too many things to fit on a tree’s worth of paper - features that must be implemented, features that are on a wish list, bugs, burn rates, time estimates, etc., etc. These databases typically aren’t hard to use, but they can be hard to set up and administer properly. At any rate, students should at least have an idea of what’s out there.

(5) Profilers
This is where we start wandering outside the realm of “absolutely necessary” and into “you may go a while without touching this, but knowing about it would be very good.” Profilers allow you to benchmark your system - to monitor your memory or CPU usage or allocation frequency or various other statistics - so you can find out where your code sucks and then fix it. They’re used a lot during the development of real-time systems like, say, games. Or operating systems. Or user tools.

(6) Build Systems
When you wander outside the happy world of single-platform development, you start running into nightmares. How do you keep every platform up-to-date? How do you ensure that some series of tests fires every night to verify that your system is stable across each platform? How do you reconcile the differences between building for Linux or Windows or Mac? Well, there are tools out there to help.

(7) Basic Web Development
Everyone should know enough HTML and CSS to make a serviceable web page. Everyone should also know how to get web hosting, grab a domain, set up a blogging system of some sort, run some simple commands on a database (not everyone needs to be an SQL guru), and install a Wiki. In a big company you may never use these skills, but if you decide to go solo you won’t survive without them.

(8) Basic Scripting
One of the first things I learned in Tim Menzies’s Open Source class - gawk is pretty awesome. One of the first things I witnessed at EA and Emergent - scripting languages find their way into a whoollleeee lotta stuff. One of the second things I witnessed - batch scripting systems can quickly become a nightmare. It comes down to this - scripting can save a lot of time (for tasks that it is well suited to). It can also become a tangled mess that, as a junior programmer, companies will love throwing your way. In both cases, it behooves you to actually know what you’re doing.

(9) Team Communication Techniques
I nearly left this off the list. Things like message boards and wikis and mailing lists and Skype are second nature to most software people, so much so that setting them up is often instinctively the first thing we do for projects. But CS students aren’t necessarily software people - not yet, anyway - and frequently students aren’t used to working in groups larger than 4. Best to cover this base just in case.

Those last two statements are the primary impetus for a course like this. It’s about exposing people to a wide variety of industry-utilized stuff; it’s not about making people experts in anything, but about equipping them to know where to look to solve problems and become experts as necessary.

The actual format of such a class is a topic for another post, but it would almost certainly be project based. Menzies’s Open Source Software class (a graduate level class, mind you) hammered on almost everything in that list and required utilization via a term project, which worked fairly well, though it suffered some growing pains.

Hopped up on sugar and another post planned.

July 5th, 2010

One Step Back, 1.5 Steps Forward

PVRTC for the iPhone/iPad sucks, but it’s a necessary evil.

PVRTC is a texture compression format. It’s a fixed-ratio block compression format meant to be decompressed by graphics hardware, hence allowing you to store more texture data in the limited texture memory available on the hardware. PVRTC also has the distinction of being the only compressed texture format available on the iPhone & iPad.

So then why does it suck? Two reasons:

(1) PVRTC is not good with gradients or sharp edges. Uhhhhh. OK. This may leave you with the question, “Well, what is it good with?” And I present to you my answer: “I don’t know.” Depending on what compression tool you use, PVRTC compressed images can be OK or they can be unbearably bad. I find the official Power VR tool for Windows to be reasonably good and not too slow.

(2) PVRTC only supports square power-of-two (pot) sized textures. This is the big one. The pot texture restriction has been commonplace for some time now. The square restriction, however, is a bit of a problem.

Allow me to expand on #2. The problem comes when you’re dealing with a sprite based game. Sprite based games tend to have various sprite sheets that contain all the animations for the sprite - or alternatively (and badly) each frame of animation is its own image. In either case, you very rarely have images that fit neatly into a square pot-sized image.

You have a couple of options here:
(a) You can pad the images to the nearest available size.
(b) You can break apart your sprite sheets so that they’re close to available sizes, and then pad those.
(c) You can devise a clever scheme for packing images together to get the packed images close to an available size and then… pad those.
(d) You can resize your images to make them larger/smaller. And then pad as necessary.

Unfortunately, pretty much all of these solutions have accompanying problems.
(a) You’re likely to end up with some pretty big images and a pretty huge waste of space. If your sprite sheet is, say, 512×1024, it now becomes 1024×1024. It doubles in size.

(b) Breaking apart sprite images can be pretty time consuming. Especially if you’re doing it after-the-fact; by the time you actually run into a memory problem, you’ve created a lot of work for yourself.

(c) In all likelihood, you’ll be loading more in a section than you need to. If you pack two images together and you find that you only need one during the level, you don’t have the option of only loading the one. Plus this has the same problems as (b) - it can be a lot of work.

(d) No artist wants to see his carefully crafted image scaled unnecessarily, losing detail. And in this case you’re either losing detail or using more memory, none of which you want.

If we ignore added workload, most of these drawbacks come back to the same bit of text that repeated itself while I presented the options - “pad the images.” Whatever you choose, you’re likely going to be padding images significantly - making them powers of two may not be a huge deal, but making them square will likely be painful. You’re looking at doubling a given dimension for a lot of images. Ew.

How bad is this really? Well, obviously when you’re fighting for every bit of memory, it’s very far from the ideal. It may, however, not be a deal breaker. It turns out that in terms of FILE SIZE, padding a PNG and then compressing it using PVRTC can do anything from make it slightly smaller to slightly larger (depending on the amount of padding, of course). Taking a 512×1024 PNG, padding it to 1024×1024 (ew ew ew), and then running it through the Power VR compression tool will usually generate a larger image; I don’t remember numbers off-hand, but I think it was about 300KB - 500KB larger.

In terms of MEMORY USAGE, though, you’re likely to still get a big win. PNG gets completely decompressed before going to hardware. I took a 2MB PNG and decompressed it in GIMP to export an 8MB raw image. PVRTC does not get decompressed in the same fashion, so if you have a 2MB PVR image, the hardware is receiving 2MB. Of course, that win is bigger when you don’t have to add a huge empty buffer onto your images, but it’s clearly much better than nothing.
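To put rough numbers on that, here’s a back-of-the-envelope sketch. It assumes the 4-bits-per-pixel PVRTC variant and uncompressed RGBA8 for the decompressed-PNG path; the padding rule is the square power-of-two one described above:

```csharp
using System;

// Rough texture-memory math for the padding trade-off discussed above.
static class TextureMath
{
    // Round a dimension up to the next power of two.
    public static int NextPow2(int v)
    {
        int p = 1;
        while (p < v) p <<= 1;
        return p;
    }

    // PVRTC on the iPhone/iPad wants square power-of-two textures:
    // pad both dimensions up to pot, then square off to the larger one.
    public static int PaddedSquareSize(int width, int height)
    {
        return Math.Max(NextPow2(width), NextPow2(height));
    }

    // Uncompressed RGBA8 in texture memory: 4 bytes per pixel.
    public static long RawRgbaBytes(int size) { return (long)size * size * 4; }

    // PVRTC 4bpp stays compressed in texture memory: half a byte per pixel.
    public static long Pvrtc4Bytes(int size) { return (long)size * size / 2; }
}
```

A 512×1024 sheet pads to 1024×1024: that’s 4MB as raw RGBA in texture memory versus 512KB as PVRTC 4bpp, so even with the padding waste the compressed version comes out well ahead.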

I do look forward to the days when Apple drops the square size restriction.

Did I inadvertently sign up for a Becca-themed newsletter?

July 1st, 2010

Scalable Art Via Manifests

In a lot of code - my code even - you see statements like this a lot:

DrawImage(x, y, image.width, image.height)

Basically, draw the whole image. This seems like a perfectly harmless statement, but it doesn’t scale. The problem portion is that we’re keying off the image’s size.

“But Brian,” I hear you start as I look upon your pouty face, “I want to draw the whole image. What else would I do?”

The answer is that you’d still specify the asset’s source rectangle via some manifest file and keep that file up to date.

“But Brian,” you continue, that ugly expression still present, “That’s extra work for no perceptible gain.”

And here’s where I talk to you about the perceptible gain:
Things go wrong. One day you’re going to run out of memory or you’re going to want to port the game to some other platform. You’ll find that your images really need to be square powers of 2 for your texture compression to work (stupid PowerVR), and so you add some padding only to find that your previous code does not support the change. You’ll find that you need to pack a lot of little images into one big image to help with compression or load times or texture thrashing or whatever, and so you do that only to find your previous code does not support the change. You’ll find that you want to move to a platform that doesn’t support large images and so you have to break up your big images into smaller ones, and so you do that only… you get the point. Now you’re faced with a lot of extra work and a lot of potential bugs.

“But Brian,” you’re really irritating at this point, and I just want to slap you, “an extra file means another place to forget things and add errors.”

That’s reasonable but not terribly burdensome. If you need to add files to the manifest before you ever see them in-game, then when you forget to add a file you’ll spot it almost immediately (assuming you test like a good engineer). If it’s really too much for you, there are certainly ways to auto-generate the manifest depending on how you do texture packing/padding.
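To make the idea concrete, here’s a minimal sketch of what such a manifest might look like. The file format, names, and types are all invented for illustration:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical manifest entry: where an asset currently lives.
// If images later get padded or packed into an atlas, only the
// manifest changes; the draw calls keying off it stay the same.
public struct SpriteEntry
{
    public string Texture;
    public int X, Y, Width, Height;
}

public class SpriteManifest
{
    private Dictionary<string, SpriteEntry> mEntries = new Dictionary<string, SpriteEntry>();

    // Each manifest line: name texture x y width height
    public void ParseLine(string line)
    {
        string[] f = line.Split(' ');
        mEntries[f[0]] = new SpriteEntry
        {
            Texture = f[1],
            X = int.Parse(f[2]),
            Y = int.Parse(f[3]),
            Width = int.Parse(f[4]),
            Height = int.Parse(f[5])
        };
    }

    // Throws if the asset was never added - the fail-fast behavior you want.
    public SpriteEntry Lookup(string name) { return mEntries[name]; }
}
```

A draw call then pulls its source rectangle from the manifest entry instead of the image’s dimensions, so repacking or padding the textures only means regenerating the manifest, not touching the game code.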

My next post will be on PowerVR and why it sucks.

Entering Apple Bashing mode…