Blog
Today I added a simple feature to the game: a target frame rate. This is typically done in one of three ways.
The first is by using timers in your game code. You capture the time when processing begins for a single frame, then capture the time when processing ends for that frame. Subtract the two, and if the time required to prepare the frame is less than your target frame time, sleep the thread for the difference.
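Here is a minimal sketch of that approach, assuming a 60 FPS target. SDL_GetTicks and SDL_Delay are real SDL functions; running, update, and render stand in for hypothetical game code.
// Timer-based frame cap: sleep off whatever is left of the frame budget.
// Assumes a 60 FPS target; update() and render() are hypothetical.
const Uint64 TARGET_FRAME_TIME_MS = 1000 / 60;
while (running) {
    Uint64 frame_start_ms = SDL_GetTicks();
    update();
    render();
    Uint64 elapsed_ms = SDL_GetTicks() - frame_start_ms;
    if (elapsed_ms < TARGET_FRAME_TIME_MS) {
        SDL_Delay((Uint32)(TARGET_FRAME_TIME_MS - elapsed_ms));
    }
}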
The major issue with this approach is that the thread you're collecting timing information on can be preempted at any time by the operating system. Depending on which operating system (or language library) features you use to measure time, you may accidentally ignore the time your thread spent off the CPU. That time still passes in the world outside the computer, so it can cause inconsistencies between the time your game thinks has elapsed and the wall-clock time that actually elapsed.
The second way to achieve a target frame time is by using performance counters. Performance counters are typically built into modern processors and track some kind of low-level hardware event. In this way, the counter itself is independent of operating system nuances such as scheduling. The operating system provides an API to access the performance counter value. As we don't necessarily know what is being measured by the performance counter, we don't know what the count is relative to. Thus, the counter value itself is not a measure of the passage of time. The operating system also provides an API to query the frequency of increments to the performance counter, and with this information, we can compute the passage of time.
dt_s = (perf_counter_now - perf_counter_then) / perf_counter_frequency
SDL3 provides SDL_GetPerformanceCounter and SDL_GetPerformanceFrequency for this purpose.
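A minimal sketch of measuring elapsed seconds this way:
// Measure elapsed seconds using SDL's performance counter API.
Uint64 freq = SDL_GetPerformanceFrequency();
Uint64 then = SDL_GetPerformanceCounter();
// ... prepare a frame's worth of work ...
Uint64 now = SDL_GetPerformanceCounter();
double dt_s = (double)(now - then) / (double)freq;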
The third way is to externalize synchronization: rely on the display hardware to indicate when a new frame can be drawn. This is done via "vertical synchronization," also called VSync. If your GPU supports VSync and you enable VSync in your renderer, your GPU synchronizes drawing the frame buffer with the refresh rate of your display hardware (e.g., your monitor). If your monitor supports running at 60 Hz and 144 Hz, and you configure it to run at 144 Hz, then your GPU will send whatever is in your graphics buffer to the display 144 times per second, making your display frame rate 144 Hz.
To use VSync effectively in your game, you likely need to make your game loop frame rate-independent. If your player has a very fast refresh rate on their monitor but your game sometimes takes a while to draw its next frame, you might not hit your frame timing. It's no problem to draw the same thing twice while you're preparing your next frame, but you need to make sure you account for the passage of time independently of the refresh rate (performance counters help with this). It helps to do general math in your game in SI units (meters, seconds, etc.). This way, regardless of whether you hit your frame time, the movement of things in your game (for example) stays natural: things don't move faster than expected on fast refresh rates or slower than expected on slow ones.
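As a sketch of frame rate independence, position updates can scale by the dt_s measured above; the player struct and its fields here are hypothetical.
// Frame rate-independent movement: advance position by velocity * dt,
// so world speed is the same at 60 Hz and 144 Hz.
// `player` and its fields are hypothetical.
player.x_m += player.velocity_x_m_per_s * dt_s;
player.y_m += player.velocity_y_m_per_s * dt_s;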
For enabling VSync, SDL3 provides SDL_SetRenderVSync.
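A minimal sketch of enabling it on an existing renderer:
// Enable VSync on an existing renderer; 1 means present once per refresh.
SDL_SetRenderVSync(renderer, 1);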
The first and second approaches to hitting a target frame rate are complicated by two assumptions.
The first assumption is that your player's display hardware refreshes at some harmonic of your target frame rate. In other words, if your target frame rate is 60 FPS and your player's monitor is refreshing at 60 Hz or perhaps 120 Hz, you most likely will not encounter screen tearing. Your 60 Hz game loop might be out of phase with the display hardware's refresh cycle, but you'll always have your next frame prepared by the time the display hardware draws its next image. Your game's cycle time neither outpaces nor falls behind the display hardware's, so you will likely never be partway through preparing a new frame when the display hardware starts reading the frame data to draw the next image. If your cycle timing is not a harmonic of the display hardware's, your game loop will, theoretically, eventually be updating a frame while the display hardware is reading the frame buffer to draw its next image.
The second assumption is that you always hit your frame time. Even if your cycle time is a harmonic of your display hardware's, if your game loop runs longer than your target frame time, you are at risk of screen tearing.
With all this said, unfortunately none of these three techniques guarantees smooth rendering. See this great article for why.
As for me, for now, I am using the performance counter technique for simplicity.
Today I implemented simple tilemap rendering. In my previous post, I discussed exporting a tilemap from an Aseprite file as a binary file. I parsed that binary file in my game and then used SDL3_image to load the tileset image. I considered using stb_image to do the loading, but since I'm committing to SDL, I figured I'd learn SDL3_image. If it feels too heavy, I'll switch to stb.
After building and using SDL3_image, when launching my game, the main window would open, close quickly after, and then open again. The fix was calling SDL_CreateWindow with the SDL_WINDOW_HIDDEN flag and then showing the window (SDL_ShowWindow) only after I create the renderer (which I do separately from creating the window).
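A minimal sketch of the workaround, with a hypothetical title and dimensions:
// Create the window hidden, create the renderer, then show the window.
SDL_Window* window = SDL_CreateWindow("Game", 640, 480, SDL_WINDOW_HIDDEN);
SDL_Renderer* renderer = SDL_CreateRenderer(window, NULL);
SDL_ShowWindow(window);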
The gist of accelerated graphics rendering in SDL is to load your media into a surface and then create a texture from that surface. In my case, I am dealing with tilemaps, which themselves are built from tilesets. For the uninitiated, I'll give a quick outline.
A tileset consists of sprites. A sprite is essentially an image. Using text as make-believe images, an example tileset might contain these three "sprites": _, ., and =. Using those three characters, we might build the map:
========
=_..___=
=___...=
========
Who knows what that represents, but it is a map built from the "sprites" in our tileset. If we were talking about actual computer graphics, our map would be some larger, rendered image. The rendered image's size in RAM would increase as the image got larger, but we know that the image (in this example) is composed of only three unique sprites, those from our tileset. We could compress the tilemap by instead representing it as indices into the tileset:
22222222
20110002
20001112
22222222
The sprites in a tileset are also known as tiles. So here we have represented a tilemap by referring to tiles in the tileset by index, indicating which tile to use at each position when rendering the map.
Back to SDL, this leaves me with multiple ways to actually store the tile data. Currently my binary tilemap export only encodes a single tileset (due to the way I drew the map in Aseprite). I could load that tileset and then parse individual tiles out of its pixel memory, create a surface for each unique tile, and then create a texture for each unique surface. Nothing wrong with this. But what I did instead was create a single texture for the tileset, and at render time, I index into the texture to render only the portion of the tileset texture I want (a single tile) to the window. The upside is that there are fewer resources to deal with, fewer allocations. The downside is that all tiles live in a single texture, so I cannot apply per-tile modulation (recoloring, transforming, etc.) as easily.
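The loading side might look something like this sketch; the file path is hypothetical, and IMG_Load comes from SDL3_image:
// Load the tileset image into a surface, then upload it as a single texture.
SDL_Surface* surface = IMG_Load("assets/tileset.png");
SDL_Texture* tileset = SDL_CreateTextureFromSurface(renderer, surface);
SDL_DestroySurface(surface); // The pixels now live in the texture.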
Here is an excerpt of code I wrote showing how I render from this single tileset texture.
/**
 * Render a tilemap instance to a renderer. `tileset` is expected to contain
 * the entire tileset referred to by the tilemap, and the tile properties are
 * assumed to be compatible with dimensions specified in the tilemap.
 */
void tilemap_render(Tilemap* tm, SDL_Texture* tileset, SDL_Renderer* renderer) {
    // Number of tiles across one row of the tileset texture.
    uint16_t tileset_width_tiles = tileset->w / tm->tile_width_px;
    for (int i = 0; i < tm->height_tiles; i++) {
        for (int j = 0; j < tm->width_tiles; j++) {
            uint16_t tile_index = tm->tiles[i * tm->width_tiles + j];
            // Top-left corner of this tile within the tileset texture.
            float tile_x = (float)((tile_index % tileset_width_tiles) * tm->tile_width_px);
            float tile_y = (float)((tile_index / tileset_width_tiles) * tm->tile_height_px);
            SDL_FRect src_rect = {
                tile_x, tile_y,
                (float)tm->tile_width_px, (float)tm->tile_height_px
            };
            SDL_FRect dst_rect = {
                (float)(j * tm->tile_width_px),
                (float)(i * tm->tile_height_px),
                (float)tm->tile_width_px,
                (float)tm->tile_height_px
            };
            SDL_RenderTexture(renderer, tileset, &src_rect, &dst_rect);
        }
    }
}
I haven't done much work on the general encapsulation; just standing things up right now.
Today marks my first day working full-time on my own projects. I'm focusing on building a game. Toward learning how everything works end-to-end from first principles, I'm building a game engine. I will likely ship with SDL3, though.
My game will feature tilemaps built from pixel art tiles. I love using Aseprite for my art. At the time of this writing, Aseprite supports building tilemaps but only supports exporting them using an out-of-tool script. This script exports tilemaps and supporting data as JSON. I don't want to write or depend on a JSON parser, though, so I chose to fork and extend the script to output binary.
Aseprite scripts are written in Lua, so I needed to learn Lua. I half-jokingly searched for "learn lua in 10 minutes" and turned up Learn Lua in 15 Minutes. Using this, the official documentation, and asking an LLM questions here and there, I picked up Lua pretty quickly.
Some interesting things I learned: Lua has a way for one Lua file to import another (require) which caches whatever the imported file runs on import, so that when the same file is imported again, nothing top-level runs a second time. There is another mode (dofile) which imports without caching, always running everything exposed at the top-level scope of the imported file. Another thing is that Lua's only data structure is the table, which doubles as an array. Array indices start from 1, not 0. You can use negative indices when accessing the command-line arguments to find arguments passed to Lua itself, not to your script.
Anyway, without showing the full glue code, I introduced this binary.lua to the exporter script, added command-line argument parsing to the main script, and allowed the caller to export either JSON or binary. I'll stick with binary and write a simple parser in my game.
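Here is a hypothetical sketch of what the game-side parser might look like, reading just the fixed header fields (schema version and canvas dimensions). Names like TilemapHeader are my own, and a full parser would continue with the tileset and layer arrays.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  schema_major, schema_minor, schema_patch;
    uint16_t canvas_width_px, canvas_height_px;
} TilemapHeader;

// Read the fixed-size header. Numbers are little-endian per the schema, so
// multi-byte fields are assembled byte-by-byte to stay host-endian agnostic.
bool tilemap_header_read(FILE* f, TilemapHeader* h) {
    uint8_t buf[7];
    if (fread(buf, 1, sizeof(buf), f) != sizeof(buf)) return false;
    h->schema_major     = buf[0];
    h->schema_minor     = buf[1];
    h->schema_patch     = buf[2];
    h->canvas_width_px  = (uint16_t)(buf[3] | (buf[4] << 8));
    h->canvas_height_px = (uint16_t)(buf[5] | (buf[6] << 8));
    return true;
}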
When calling a non-graphical script from Aseprite, you do so from the command line, and you pass args before specifying the script. This applies to at least Aseprite v1.3.14.3.
aseprite -b foo.aseprite --script-param bar=baz --script my_script.lua
The content of binary.lua in its initial form is below.
-- Export an Aseprite tilemap and supporting data to a binary file.
--
-- Output Schema
-- -------------
-- schema_major_ver: uint8_t
-- schema_minor_ver: uint8_t
-- schema_patch_ver: uint8_t
-- canvas_width_px : uint16_t
-- canvas_height_px: uint16_t
-- tilesets        : array<tileset>
-- layers          : array<layer>
--
-- array
-- -----
-- An array `array<T>` is a single `uint64_t` indicating the number of instances
-- of `T` immediately after. The instances of `T` are contiguous in memory.
-- For example, a 2-element array of type `array<uint8_t>` is laid out as
-- follows in memory:
--   uint64_t (num elements)
--   uint8_t  (first element)
--   uint8_t  (second element)
--
-- string
-- ------
-- A string is a single `uint64_t` indicating the number of characters
-- immediately following and then said number of characters. The characters are
-- 8-bit bytes. Each byte is a single ASCII character.
--
-- tileset
-- -------
-- image_pathname: string
-- tile_width_px : uint16_t
-- tile_height_px: uint16_t
--
-- layer
-- -----
-- name        : string
-- tileset_id  : uint16_t
-- width_tiles : uint16_t
-- height_tiles: uint16_t
-- tiles       : array<index_into_tileset>
--
-- index_into_tileset
-- ------------------
-- index: uint16_t
--
-- Notes:
-- - Strings do not support Unicode.
-- - All numbers are encoded as little-endian.
-- - The current schema only supports a single cel per layer.
local binary = { _version_major = 0, _version_minor = 0, _version_patch = 1 }

-- Write select keys out of table `t` to the open file `f` as binary. See
-- the output schema in the file documentation for what will be written out to
-- `f`.
function binary.encode(f, t)
  -- Schema version.
  f:write(string.pack("<I1", binary._version_major))
  f:write(string.pack("<I1", binary._version_minor))
  f:write(string.pack("<I1", binary._version_patch))

  -- Canvas width, height.
  f:write(string.pack("<I2", t.width))
  f:write(string.pack("<I2", t.height))

  -- Tilesets.
  f:write(string.pack("<I8", #t.tilesets))
  for i = 1, #t.tilesets do
    local ts = t.tilesets[i]
    f:write(string.pack("<I8", #ts.image))
    f:write(ts.image)
    f:write(string.pack("<I2", ts.grid.tileSize.width))
    f:write(string.pack("<I2", ts.grid.tileSize.height))
  end

  -- Layers.
  f:write(string.pack("<I8", #t.layers))
  for i = 1, #t.layers do
    local l = t.layers[i]
    f:write(string.pack("<I8", #l.name))
    f:write(l.name)
    f:write(string.pack("<I2", l.tileset))
    if #l.cels > 1 then
      error("Layer " .. i .. " has more than 1 cel")
    end
    local cel = l.cels[1]
    f:write(string.pack("<I2", cel.tilemap.width))
    f:write(string.pack("<I2", cel.tilemap.height))
    f:write(string.pack("<I8", #cel.tilemap.tiles))
    for j = 1, #cel.tilemap.tiles do
      f:write(string.pack("<I2", cel.tilemap.tiles[j]))
    end
  end
end

return binary
A friend shared this article with me recently, talking about how worrying about looking stupid often just stunts your growth. I agree! People should see how intentionally stupid ML models are while they're "growing," until they're suddenly smarter than Sapiens.
My friend sent me this article recently about people spiraling into delusions aided by conversations with ChatGPT.
For some reason, the phenomenon of declining birthrates in first-world countries quickly came to mind. There are many explanations for declining birthrates, but the fact that it is mostly happening in first-world countries makes it look like a well-veiled form of wireheading. We used to need kids to guarantee we had hands to work the farms and sustain the empire. Then we had kids to carry on the family lineage. Then we had kids because, who is going to take care of me when I'm older? Then we had kids just because that's what grown-ups do, right? Now having kids is increasingly relegated to reacting to biological urges and personally held images of family. The more well-off a country becomes, the less any one individual needs to be responsible for anything critical at the community level. Things that were once critical to survival now largely come for "free," and we feel some sort of existential emptiness from not having a larger-than-self, meaningful goal to grind away toward. We often fill this emptiness with both readily available pleasures and ultimately meaningless, artificially difficult quests. We're just wireheading ourselves.
The models of ChatGPT alluded to in the article presumably did not have an explicit goal to radicalize or delude anyone. The models' ability to assume arbitrary human-like personas had the un(?)intended side effect of finding the holes in the Swiss cheese of the constructs that keep one comfortably integrated into society, the constructs that sit in front of users' psyches. Side-stepping the question of whether ChatGPT is "good" or "bad" in this regard, it seems to me that the deluded people in the article like the whole experience.