Fortress Occident Developer Blog


Tiling of the World

Human memory has an interesting quirk: events in the past tend to fade, and busy, stressful times become a muddled stretch of non-memories. There are years from which all I remember is: I think I was quite busy at the time.

When I think back a year, I definitely feel the same muddiness of ever-present programming puzzles, with one bright detour into the wonderful land of Blender.

Technical background to tiling

Our art pipeline, as explained by Rostov, starts with sketches and then a rough block-in, which gets refined into a nice-looking render. Now, while we could render the whole image in one go, we decided to split the world into smaller tiles.

I don’t remember the exact discussions which led to our solution, but I can still summarize the main decision.

We were thinking of doing the world the Pillars of Eternity way: setting only the walkable ground planes and windows into the 3D world, and then having a backdrop image without geometry which occasionally occludes the player. The problems with that were manifold, but mainly concerned in-world lighting and the question of what the player can see at any given time. Eventually we decided to have a simple 3D block-in world which gets a similar depth-occlusion shader.

This way we have a general understanding of the geometry for dynamic lights and visual raycasts, while using a custom shader on the ground world to create the illusion of rich geometry.

[Image: World from its side, a slice of the world from an unintended angle]

How tiles are made

Now we were faced with another problem: how to tell which part of the 3D world should show which tile, and how to put it there?

The problem of mapping a 2D image onto 3D space has long been solved, and standard tools call it UV mapping: every triangle of a 3D mesh gets its own location on a texture, and the texture is then drawn onto the mesh with the appropriate transformations.

As our renders are isometric and have a fixed angle, we project them from the screen plane onto the mesh. How does the mesh get its projection? Cue my entrance music.
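In Blender terms this projection is the "Project from View" unwrap. A rough sketch of the scripted version, assuming it runs in a 3D Viewport that is looking through the orthographic render camera, with the tile mesh as the active object:

```python
import bpy

# Sketch: project UVs from the camera view onto the active mesh.
# Assumes the viewport is looking through the scene's orthographic
# render camera and the active object is the tile mesh.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Map every selected face to its position on the screen plane,
# i.e. the same pixels it will cover in the final render.
bpy.ops.uv.project_from_view(camera_bounds=True,
                             correct_aspect=True,
                             scale_to_bounds=False)

bpy.ops.object.mode_set(mode='OBJECT')
```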

[Screenshot: Blender]

By hand, the process would look like this in Blender. We create a cuboid which represents the orthographic rendering camera's view volume. Then we join all the different meshes (walkables, barriers, walls, etc.) into one solid mesh.

Then we tell the large joined objects in the scene that they should be intersected with the cuboid (which I started calling Intersector-1, since it reminded me of an old computer game).
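Scripted, that intersect step is just a boolean modifier. A minimal sketch, where the object names "BlockIn" and "Intersector-1" are illustrative:

```python
import bpy

# Sketch of the intersect step: "BlockIn" is the joined block-in mesh
# and "Intersector-1" the camera-space cuboid (both names illustrative).
block_in = bpy.data.objects["BlockIn"]
cutter = bpy.data.objects["Intersector-1"]

mod = block_in.modifiers.new(name="TileCut", type='BOOLEAN')
mod.operation = 'INTERSECT'   # keep only geometry inside the cuboid
mod.object = cutter
```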

[Screenshot: Blender]

Now, since our camera is animated to produce the tiles (one tile per frame), the intersect modifiers on every detail cut away all the pieces outside the camera. For each of those tiles we apply camera-projected UV coordinates, so that the rendered and painted tiles match up exactly once imported into Unity. Then we export an FBX file. For every tile.

This routine process is completely automatable.
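Tied together, the per-tile routine might look something like the sketch below. It assumes the camera animation frames exactly one tile per frame, that the block-in already carries the intersect modifier and UV projection from the snippets above, and the export directory and object name are illustrative:

```python
import bpy
import os

# Sketch of the automated per-tile loop. EXPORT_DIR and "BlockIn"
# are illustrative; the boolean modifier and the camera-projected
# UVs are set up as in the snippets above.
EXPORT_DIR = "/tmp/tiles"
scene = bpy.context.scene
block_in = bpy.data.objects["BlockIn"]

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)        # move camera and cuboid to this tile

    # Select only the block-in so the export picks up just the cut tile.
    bpy.ops.object.select_all(action='DESELECT')
    block_in.select = True        # .select_set(True) in Blender 2.8+
    scene.objects.active = block_in

    bpy.ops.export_scene.fbx(
        filepath=os.path.join(EXPORT_DIR, "tile_%04d.fbx" % frame),
        use_selection=True,
    )
```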

Blender

Blender has been built on the principle that everything a user does can also be done from Python. You can see this by dragging down the Info editor from the top of the window and then scaling or moving the default cube. The lines starting with bpy.ops are the actual function calls, with their arguments, that you can type into the Python console to make them happen again.
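For instance, nudging and scaling the default cube leaves lines like these in the log; paste them into the Python console and the same transforms happen again (the values are whatever you dragged, here illustrative):

```python
import bpy

# What Blender logs after moving the cube one unit along X and
# then doubling its size.
bpy.ops.transform.translate(value=(1.0, 0.0, 0.0))
bpy.ops.transform.resize(value=(2.0, 2.0, 2.0))
```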

[Screenshot: bpy.ops in action]

While it takes some digging around, this log makes it incredibly easy to create your own plugins for automating any tedious or repetitive task.

[Screenshot: My own little slice of Blender]

You can create operators: small Python classes which run a function when the user wants them to. For tiling I have “copy everything to another scene”, “join everything”, “project UV from active camera”, “export frame”, “export everything”, etc.
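A minimal operator sketch; the idname, label, and message are made up for illustration:

```python
import bpy

class ExportFrameOperator(bpy.types.Operator):
    """Export the tile visible at the current camera frame."""
    bl_idname = "tiling.export_frame"   # hypothetical id
    bl_label = "Export Frame"

    def execute(self, context):
        # The actual work goes here; report() shows a message in the UI.
        self.report({'INFO'},
                    "Exporting frame %d" % context.scene.frame_current)
        return {'FINISHED'}

bpy.utils.register_class(ExportFrameOperator)
# Afterwards it runs like any built-in operator:
# bpy.ops.tiling.export_frame()
```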

If you want to go the extra mile, you can check whether the context is right for calling them. And there is a nice way of creating user-interface buttons, which get automatically greyed out if you wrote the polling function right. Eventually you can even add some meta-information to turn your scripts into a proper plugin.
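A sketch of how those pieces fit together, reusing the hypothetical operator from above: poll() is the polling function, the Panel draws the button, and bl_info is the plugin meta-information (all names and values here are illustrative):

```python
import bpy

# Add-on meta-information; the values are illustrative.
bl_info = {
    "name": "Tiling Tools",
    "blender": (2, 77, 0),
    "category": "Object",
}

class ExportFrameOperator(bpy.types.Operator):
    bl_idname = "tiling.export_frame"   # hypothetical id from above
    bl_label = "Export Frame"

    @classmethod
    def poll(cls, context):
        # Only allow the operator when there is a camera to project from.
        return context.scene.camera is not None

    def execute(self, context):
        return {'FINISHED'}

class TilingPanel(bpy.types.Panel):
    bl_label = "Tiling"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'TOOLS'   # the 3D View tool shelf in Blender 2.7x

    def draw(self, context):
        # This button is greyed out automatically whenever poll() fails.
        self.layout.operator("tiling.export_frame")

def register():
    bpy.utils.register_class(ExportFrameOperator)
    bpy.utils.register_class(TilingPanel)

if __name__ == "__main__":
    register()
```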

Blender has shown us its nasty side too: it tends to crash during boolean operations, especially when objects have open meshes or exactly overlapping surfaces. Automation or not, the artist who builds the block-in has to be careful.

Researching how to modify Blender was fun and the hack-and-try cycle really short; if I ever need to mess around with automating the program again, I know I will find it delightful. Now, back to Unity.
