2025-11-24

09:48:00, atom feed.

I had joined a startup a few months back, but I recently left. Back to my own thing at this time in my life.

When I first left industry to do my own thing, so many people asked what was next and were surprised by "doing my own thing" that I had to refine my answer to make the inevitable same-conversation-many-times efficient. The most common question was, "what would it take for you to work for someone else again?"

My answer was, the simple need for money aside, three criteria.

  1. The company is a small group of experts.
  2. The group has a single, measurable mission.
  3. The mission and tasks resonate with me.

I've since added a fourth criterion: everyone is humble.

Also, I've created a new rule for myself for any interview I participate in from here on out: if I want to join a company, I first need to come up with my own plan and explanation of how I would accomplish their mission end-to-end, from hiring people and starting with an empty text file in front of me as a software engineer, through shipping and maintenance. Most importantly, I'd run this plan by company leaders to get a sense of their familiarity and experience in the domain, and to get an early read on points I might disagree with, things I might be wrong about, and expectations I need to adjust.

With that out of the way, here is my next development update. I was previously writing development logs mostly daily, but now I think I will do so weekly, unless there is some singular topic I want to go on about at length. It's Monday already, but I'll be posting about last week. Also, it's a three-day weekend here in Japan, but when you work for yourself and have no income yet, there is no such thing.

I'll experiment with posting a slim log and then touching on this and that. The command I'm using to print the log is

git log --pretty=format:"%ad%n%B" --date=short --reverse
2025-11-15
docs: initial commit

2025-11-15
feat: open a window, attempt FPS target

2025-11-17
feat: render a simple map the size of the window; add build docs

2025-11-17
feat: introduce create/destroy entity function; create player

2025-11-17
feat: implement basic movement and collision

2025-11-18
tweak: double running speed in tiles per second

2025-11-18
feat: introduce GameMap struct

2025-11-19
feat: add camera that follows player; also add clip rect on main window

This fixes a rendering slowdown caused by extending the map size in an
earlier commit and rendering the entire map--even portions out of the
frustum--on each frame.

2025-11-19
docs: index documentation and add docs on coordinate systems in use

2025-11-20
chore: switch tile size to 16 pixels to anticipate maps; extend docs

2025-11-21
feat: add small paths and strings library

2025-11-21
feat: add aseprite to binary map conversion wrapper tool

2025-11-21
feat: add a map parser; this commit does not include assets/, data/

2025-11-21
feat: add string view and parent path utilities

I restarted the codebase. I did this mostly to refamiliarize myself with what will become the entire implementation. I'm still using SDL3 and, by consequence, CMake. I considered switching to Jai or Odin but stuck with my favorite dialect of "C++," which is C with operator and function overloading. As an aside, this podcast had some hilarious parts: the host was hoping for a hot debate around what C could do better and where it shines, didn't get it, and grew visibly (if jokingly) frustrated as the guests mostly kept agreeing with each other about how bad C is and why they nonetheless stick to most of its core principles.

Currently the game is entirely tile-based with tile-based positions and movement. Entities are just a struct of a position and sprite, in that order. Collision is a simple can-move-to-world-tile check at this time.
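To make that concrete, here is a minimal sketch of what such an entity and collision check could look like. All names here are my own guesses for illustration, not the project's actual code.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of a tile-based entity as described above: a position
// in tile coordinates, then a sprite, in that order. Real field names differ.
struct TilePos { int x, y; };
struct Sprite  { uint32_t id; }; // placeholder; actual sprite data unknown

struct Entity {
    TilePos pos;    // position in game-map tile space
    Sprite  sprite; // what to draw at that tile
};

// A simple can-move-to-world-tile check: movement is allowed only onto
// walkable tiles inside the map bounds. The walkable() predicate stands in
// for whatever the map data actually stores.
bool can_move_to(TilePos p, int map_w, int map_h, bool (*walkable)(TilePos)) {
    if (p.x < 0 || p.y < 0 || p.x >= map_w || p.y >= map_h) return false;
    return walkable(p);
}
```

Keeping collision at tile granularity means the check is a single bounds test plus a table lookup, with no floating-point movement to resolve.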

I had never implemented a camera before, so that was interesting. Being real with you, it took me an embarrassingly long time to conceptualize the transformation in my head. My understanding is that there are two common types of camera setups in top-down 2D games, and they differ in how the camera position is tracked. One setup tracks the camera position in screen pixels as the top-left corner of the screen (if your render system places (0, 0) at the top-left of the screen). The other setup tracks the camera position in screen pixels as the center of the screen. In my case, I track it at the center of the screen. This is so that if I want to "watch" any particular tile or entity, all I need to do is set the camera to that entity's position. Currently I do so for the player, so the camera tracks the player as the player moves.

Regarding the transformation, consider the following scenario:

+-------+(0,0) px
|
v                        Screen
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
X  |                                                        X
X------------------ screen width ---------------------------X
X  |                                                        X      Game Map
X  |                     +----------------------------------X----------------+
X  |                     |                                  X                |
X  |                     |                                  X                |
X  |                     |                 o                X                |
X  |                     |                 ^                X                |
X  |                     |                 |                X                |
X  | screen              |                 +---+tile to     X                |
X  | height              |    o                 render on   X                |
X  |                     |    ^                 screen      X                |
X  |                     |    |                             X                |
X  |                     |    +---+camera pos               X                |
X  |                     |                                  X                |
X  |                     |                                  X                |
X  |                     |                                  X                |
X  |                     |                                  X                |
X  |                     |                                  X                |
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                |
                         |                                                   |
                         |                                                   |
                         |                                                   |
                         |                                                   |
                         +---------------------------------------------------+

Assume we want to render the point labeled "tile to render on screen." To do this, we need to find that tile's position in screen space. The tile has a position in game-map tile space (Px, Py). The camera has the special property of having a position known in two coordinate frames. It has a game-map position (Cx, Cy), and we also know it has a fixed position in screen space at the center, which is (screen width / 2, screen height / 2). The formula to transform the tile to render from game-map space to screen space (Psx, Psy) is thus

(Psx, Psy) = ((Px, Py) - (Cx, Cy)) * (tile width px, tile height px) +
             (screen width / 2, screen height / 2)

(Px, Py) - (Cx, Cy) gets the tile position in game-map space relative to our camera position. If the camera were at (0, 0) in game-map space, the position relative to the camera would simply be (Px, Py). Multiplying by (tile width px, tile height px) transforms the game-map tile coordinates into game-map pixel coordinates (the coordinate now points to the upper-left corner of a particular tile). Finally, to transform game-map pixel coordinates into screen pixel coordinates, we need to add the camera's fixed screen-space offset. Again, if that fixed position were (0, 0) instead of the screen center, we would already be done.
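The steps above fit in a single function. A sketch, with names of my own choosing and assuming integer tile coordinates:

```cpp
#include <cassert>

struct Vec2i { int x, y; };

// Transform a tile position from game-map tile space to screen pixel space,
// given a camera position (cam, in tile space) that is pinned to the center
// of the screen. This mirrors the formula in the text:
//   (Ps) = ((P) - (C)) * (tile size px) + (screen size px / 2)
Vec2i tile_to_screen(Vec2i tile, Vec2i cam,
                     int tile_w_px, int tile_h_px,
                     int screen_w_px, int screen_h_px) {
    return Vec2i{
        (tile.x - cam.x) * tile_w_px + screen_w_px / 2,
        (tile.y - cam.y) * tile_h_px + screen_h_px / 2,
    };
}
```

A sanity check of the center-pinned property: the tile the camera sits on always lands at the screen center, e.g. with 16-pixel tiles on a 640x480 screen, the camera's own tile maps to (320, 240).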

This transform can produce coordinates outside of the visible screen space. As one of my commit messages above suggests, if we don't avoid rendering outside of the screen space, we still pay the rendering cost for tiles that are never visible. I found with SDL3 that even if I set the render clip rectangle to the screen space, I still run over my frame times. So, I also have some logic to determine the subset of the game map to consider for drawing on a given frame, and then I apply the transform shown above to convert only the visible tiles to screen space.
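One conservative way to compute that subset is to invert the camera transform: figure out how many tiles fit in half a screen, pad by a one-tile margin, and clamp to the map bounds. This is a sketch under my own naming, not the project's actual culling code.

```cpp
#include <algorithm>
#include <cassert>

struct TileRange { int x0, y0, x1, y1; }; // inclusive bounds in tile space

// Compute the tile rectangle around a center-pinned camera that could
// intersect the screen. The one-tile margin keeps partially visible edge
// tiles; the clamp keeps us inside the map.
TileRange visible_tiles(int cam_x, int cam_y,
                        int tile_w_px, int tile_h_px,
                        int screen_w_px, int screen_h_px,
                        int map_w, int map_h) {
    int half_x = screen_w_px / (2 * tile_w_px) + 1;
    int half_y = screen_h_px / (2 * tile_h_px) + 1;
    TileRange r;
    r.x0 = std::max(0, cam_x - half_x);
    r.y0 = std::max(0, cam_y - half_y);
    r.x1 = std::min(map_w - 1, cam_x + half_x);
    r.y1 = std::min(map_h - 1, cam_y + half_y);
    return r;
}
```

The render loop then iterates only r.x0..r.x1 by r.y0..r.y1, so per-frame cost scales with screen size rather than map size.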

On to game map design, I will stick with pixel art and Aseprite. I made a small patch to my Aseprite tilemap binary exporter to adjust the exported file names, and then I introduced a Python utility on my game-code side to transform .aseprite files to binary map files. The script ultimately transforms as follows:

test-map.aseprite -> .
                     ├── test-map/
                     │   ├── tileset1.png
                     │   └── tileset2.png
                     └── test-map.bin

test-map.bin is what my game code parses to load in map data, and resources corresponding to "test-map" are stored in a folder of the same name peer to the binary file. Paths in test-map.bin are relative to the location of test-map.bin. Avoiding absolute paths in this case is for portability, especially since the resource location on disk in the source tree is different from the location resources are placed after a build. I put my resources in a data folder, map resources in data/maps in particular. To accommodate relocation in CMake, I have

add_custom_command(TARGET the_target POST_BUILD
  COMMAND ${CMAKE_COMMAND} -E copy_directory_if_different
    "${CMAKE_SOURCE_DIR}/data"
    "$<TARGET_FILE_DIR:the_target>/data"
)
add_custom_command(TARGET the_target POST_BUILD
  COMMAND ${CMAKE_COMMAND} -E copy_directory_if_different
    "${CMAKE_SOURCE_DIR}/data/maps/"
    "$<TARGET_FILE_DIR:the_target>/data/maps"
)

This requires CMake 3.26 or newer. The annoying thing is that copy_directory_if_different does not do a recursive comparison, so I need to check the top-level data directory and the data/maps directory explicitly. Also, content removal from the source directory does not count as a difference, meaning that previously removed resources can linger in the build output. I could remove and re-copy all resources on every build, but until I hit some gotcha, like using an old resource by accident, I won't bother.

SDL3 provides a convenient SDL_GetBasePath which allows me to get the path the executable is run from. This allows me to then discover my executable-relative resource directory and correctly resolve the relative paths in the map binary files.
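The resolution step itself is just careful concatenation. As an illustration only (my project avoids std::, and this helper name is hypothetical; at runtime the base directory would come from SDL_GetBasePath):

```cpp
#include <cassert>
#include <string>

// Resolve a resource path stored relative to the map binary against the
// executable's base directory. Sketch only: the real code uses the
// project's own string utilities rather than std::string.
std::string resolve_resource(const std::string& base_dir,
                             const std::string& relative) {
    if (base_dir.empty()) return relative;
    if (base_dir.back() == '/') return base_dir + relative;
    return base_dir + "/" + relative;
}
```

Handling the trailing separator in one place means the map loader never cares whether the platform's base-path string ends with one.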

Something like std::filesystem would be handy here, but I'm avoiding std:: as much as I can, so I have reimplemented just what I need. Toward this, I introduced a simple string view utility. It's trivially defined as

struct StringView {
  char* str;
  size_t len;
};

where the characters are ASCII-only and the backing string is not guaranteed to contain a null terminator at str[len]. We'll see how useful this turns out to be. I mostly introduced it because I have not introduced my scratch allocator yet. If I had the latter, I'd allocate without thinking much about it, knowing the memory I allocated will be reclaimed shortly after.
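Because the view may not be null-terminated, comparisons have to go through len rather than C-string functions on str. A sketch of two helpers one might pair with it (helper names are mine, and the struct is repeated so the snippet stands alone):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Same shape as the view described above: a non-owning pointer plus length.
struct StringView {
    char*  str;
    size_t len;
};

// Build a view over an existing null-terminated string. The view borrows
// the memory; nothing is copied or owned.
StringView sv_from_cstr(char* s) { return StringView{ s, strlen(s) }; }

// Compare against a C string using len and memcmp, never assuming
// str[len] is a terminator.
bool sv_equals(StringView a, const char* b) {
    return strlen(b) == a.len && memcmp(a.str, b, a.len) == 0;
}
```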

Toward what I need from std::filesystem, I introduced path_concat and path_parent functions. path_parent finds the parent directory of a provided path.

StringView path_parent(const char* p, size_t len);

This also took me an embarrassingly long time to write elegantly. I won't share the code, in case you want to challenge yourself. (By the way, AI output was rather bloated and inefficient at the time of this writing.) My implementation only handles Unix path separators at this time, so this will be one implementation I need to port when going to Windows. Here are the test cases I pass, if you want to have a go at it yourself.

struct TestCase { const char* input; const char* expected; };
TestCase cases[] = {
  { "/a/b/c//", "/a/b", },
  { "/a/b//c", "/a/b" },
  { "/a//b/c", "/a//b" },
  { "/a/b/c/", "/a/b" },
  { "/a/b/c", "/a/b" },
  { "a/b/c", "a/b" },
  { "a//", "." },
  { "//", "/" },
  { "/ab", "/" },
  { "/a", "/" },
  { "./a", "." },
  { "//a", "/" },
  { "//", "/" },
  { "a", "." },
  { "/", "/" },
};

Speaking of going to Windows, one reason I'm using SDL3 is to allow myself to develop on Linux and macOS and "just" test on Windows. We'll see how that goes. I really like the idea of being able to bring my MacBook wherever with me and still develop and play my game; I don't really have a decent portable option with Windows at this time.

Regarding map loading and whatnot, I may ultimately bake the map data into the final binary. I may either use C23's recent #embed or, more than likely, just implement a tool myself to achieve the same.