A work-in-progress free and open-source replacement for the Diablo I engine. Simply import the Diablo assets and enjoy the same old game with faster performance, modern resolutions, and first-class support for mods.

What's this all about?

We love Blizzard's popular game, Diablo. We love it so much, in fact, that we're willing to spend our precious time developing a free and open source solution for those wanting to play it on a modern computer.


Is that legal?!

Short answer, yes. We don't distribute any copyrighted game assets, which means you'll need to have a copy of Diablo to be able to play.

  • April update - graphics and scaling

    Hi all! Over the last month or so, I’ve been mainly focused on cleaning up the rendering code in freeablo. This doesn’t make for very exciting stuff to show off, but it’s important in the long run. The new approach works by pre-loading all of the game’s sprites at startup, which should eliminate any stutter when doing something that needs a new sprite (like changing your equipped items). Framerate is also much improved, thanks to the many optimisations made along the way.

    Once I was done with general cleanup + optimisations, I finally implemented zooming, which was sorely needed. Freeablo looks great in 4k, but it’s a little impractical to play a whole game that way!

    The next steps for graphics are scaling the GUI and further optimisations (there are still plenty on the table), but for now I will probably move on to more gameplay-related changes (specifically, ranged combat).

    If you're a programmer, read on for more details about freeablo's rendering code.

    Prior to this refactoring, freeablo’s rendering was a nasty sprawl of ad-hoc OpenGL and SDL-related code in a huge file called sdl2backend.cpp. The first step was to abstract the rendering code into something cleaner, which also lets us swap out OpenGL down the line. The system I went for is pretty simple: you have a RenderInstance class that is in charge of creating the GL context, as well as resources like buffers and textures. RenderInstance is subclassed into RenderInstanceOpenGL, but the application only uses the base interface from RenderInstance.
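
    To give a rough idea of the shape of that split, here is a minimal sketch in C++; the class layout follows the description above, but the method names and signatures are made up for illustration and are not freeablo’s actual API:

    ```cpp
    #include <cstddef>
    #include <memory>

    class Texture;      // wraps a GPU texture
    class Buffer;       // wraps a GPU buffer
    struct SDL_Window;  // forward declaration of SDL's window type

    // Abstract interface: the rest of the game only ever talks to this.
    class RenderInstance
    {
    public:
        virtual ~RenderInstance() = default;

        // Resource creation lives behind the interface, so callers never
        // touch OpenGL directly.
        virtual std::unique_ptr<Texture> createTexture(int width, int height) = 0;
        virtual std::unique_ptr<Buffer> createBuffer(std::size_t sizeInBytes) = 0;
    };

    // The only place that knows about OpenGL specifics (context creation,
    // GL handles, and so on).
    class RenderInstanceOpenGL : public RenderInstance
    {
    public:
        explicit RenderInstanceOpenGL(SDL_Window* window); // creates the GL context

        std::unique_ptr<Texture> createTexture(int width, int height) override;
        std::unique_ptr<Buffer> createBuffer(std::size_t sizeInBytes) override;
    };
    ```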

    Down the line, there’s a decent chance that we will want to add a Vulkan renderer, so there is some inspiration taken from that API here. For example, the RenderInstance is not in charge of dispatching draw calls; that is done by a CommandQueue. In Vulkan, you record commands into a command buffer on whatever thread you want, and then hand it over to your main graphics thread, which submits it to a queue for execution on the GPU. For now, the implementation of CommandQueueOpenGL just runs the corresponding GL commands immediately, but having the interface in place should help if and when we want to add a Vulkan renderer.
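
    Here is a minimal sketch of how such an interface might look, again with made-up names and command types rather than freeablo’s real ones; the OpenGL implementation simply executes each command on the spot:

    ```cpp
    #include <GL/gl.h> // or whichever GL loader the project uses

    struct SpriteDrawCommand
    {
        GLuint atlasTexture; // plus source rect, destination, etc.
    };

    // Commands are recorded through this interface and submitted in one go.
    class CommandQueue
    {
    public:
        virtual ~CommandQueue() = default;

        virtual void cmdClear(float r, float g, float b) = 0;
        virtual void cmdDrawSprite(const SpriteDrawCommand& command) = 0;
        virtual void submit() = 0;
    };

    // OpenGL has no real deferred command recording, so this implementation just
    // issues the matching GL calls immediately and makes submit() a no-op.
    class CommandQueueOpenGL final : public CommandQueue
    {
    public:
        void cmdClear(float r, float g, float b) override
        {
            glClearColor(r, g, b, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        }

        void cmdDrawSprite(const SpriteDrawCommand& command) override
        {
            glBindTexture(GL_TEXTURE_2D, command.atlasTexture);
            // ...set up vertices and draw the sprite right away...
        }

        void submit() override {} // nothing to do: everything already ran
    };
    ```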

    As for optimisations, the big one was batching (thanks to @grantramsay for getting that started). For those of you who are not familiar with graphics APIs, fundamentally what you do is (a) push vertex data to the GPU, and then (b) issue commands to draw those vertices with a specific configuration (shader, blending, etc.). Especially in older APIs like OpenGL and DirectX 11 and below, there is a large cost associated with issuing each draw call, so batching up items that use the same state configuration and issuing fewer, larger draw calls is a major performance win.
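
    To make the difference concrete, here is a hypothetical comparison (illustrative names, not code from freeablo): the first function binds a texture and issues a draw call per sprite, while the second draws every quad sharing one texture with a single call:

    ```cpp
    #include <vector>
    #include <GL/gl.h>

    struct Sprite
    {
        GLuint texture;    // texture this sprite lives in
        GLint firstVertex; // where its quad starts in a shared vertex buffer
    };

    // Unbatched: one texture bind and one draw call per sprite.
    void drawUnbatched(const std::vector<Sprite>& sprites)
    {
        for (const Sprite& sprite : sprites)
        {
            glBindTexture(GL_TEXTURE_2D, sprite.texture);
            glDrawArrays(GL_TRIANGLES, sprite.firstVertex, 6); // one quad = two triangles
        }
    }

    // Batched: if every sprite shares the same state (texture, shader, blending),
    // all of their quads can be submitted with a single draw call.
    void drawBatched(GLuint sharedTexture, GLsizei spriteCount)
    {
        glBindTexture(GL_TEXTURE_2D, sharedTexture);
        glDrawArrays(GL_TRIANGLES, 0, spriteCount * 6);
    }
    ```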

    In the end, the system I implemented involves pre-loading all the assets in the game into a series of large atlas textures. Before, each game sprite was in a separate texture, so we had to bind the correct texture for each sprite before drawing. If two consecutive draws share the same texture atlas, however, there is no need to change GPU state (the currently bound texture, in this case), so they can be batched into a single draw. This way, you can accumulate a “batch” of draws as they come in, and only send them to the GPU for rendering when a new request arrives that switches the texture.
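
    A minimal sketch of that accumulate-and-flush idea might look like the following; the class and its names are hypothetical, and the actual vertex upload is left as a comment:

    ```cpp
    #include <vector>
    #include <GL/gl.h>

    struct SpriteVertex { float x, y, u, v; };

    class SpriteBatcher
    {
    public:
        // quad points at 6 vertices (two triangles) for one sprite.
        void drawSprite(GLuint atlasTexture, const SpriteVertex* quad)
        {
            // A request for a different atlas texture ends the current batch.
            if (atlasTexture != mCurrentTexture)
            {
                flush();
                mCurrentTexture = atlasTexture;
            }
            mVertices.insert(mVertices.end(), quad, quad + 6);
        }

        void flush()
        {
            if (mVertices.empty())
                return;

            glBindTexture(GL_TEXTURE_2D, mCurrentTexture);
            // Upload mVertices to a vertex buffer here, then draw the whole batch at once.
            glDrawArrays(GL_TRIANGLES, 0, GLsizei(mVertices.size()));
            mVertices.clear();
        }

    private:
        GLuint mCurrentTexture = 0;
        std::vector<SpriteVertex> mVertices;
    };
    ```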

    This approach works well, but it depends on the game drawing consecutive objects from the same sprite atlas (remember, we can’t fit the whole game in one atlas texture; we have a few of them). The ideal case would be to issue only one draw per atlas texture used, instead of switching around as draws come in. Well, it turns out we can. If we accumulate all the draw commands for a whole frame and then sort them by texture, we can render the whole thing with N draw calls, where N is the number of separate texture atlases.
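
    Sketched in code, sorting a frame’s worth of queued draws could look something like this (illustrative types and names, with a stable sort so that draws within one atlas keep their original order):

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <vector>
    #include <GL/gl.h>

    struct QueuedDraw
    {
        GLuint atlasTexture;   // which atlas this sprite was packed into
        std::size_t drawIndex; // original submission order, used later for the z-buffer trick
        // ...plus vertex data for the quad...
    };

    void renderFrame(std::vector<QueuedDraw>& frameDraws)
    {
        // A stable sort keeps the original order within each atlas group.
        std::stable_sort(frameDraws.begin(), frameDraws.end(),
                         [](const QueuedDraw& a, const QueuedDraw& b) { return a.atlasTexture < b.atlasTexture; });

        // One draw call per atlas texture used this frame.
        std::size_t start = 0;
        while (start < frameDraws.size())
        {
            std::size_t end = start;
            while (end < frameDraws.size() && frameDraws[end].atlasTexture == frameDraws[start].atlasTexture)
                end++;

            glBindTexture(GL_TEXTURE_2D, frameDraws[start].atlasTexture);
            // Upload the quads for draws [start, end) and issue a single draw call for them.
            start = end;
        }
    }
    ```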

    However, we run into a problem: this doesn’t preserve the order in which the draws are performed, so we can end up with situations like the ground being drawn on top of the player. All the draws within one batch keep the correct ordering relative to each other, but the ordering between batches is lost. The solution I opted for was the z-buffer.

    Z-buffers are a concept in graphics used for solving exactly this problem, though normally with 3D meshes rather than sprites. It works like this: for each pixel of your mesh that gets rasterised, a value is also written into the z-buffer, which is just an array of floats with one value per pixel. The value written is normally the “depth”, or distance from the camera to the point on the surface of the mesh being rasterised. Then, when we draw a second object, each of its pixels first checks whether its own depth is less than the value currently in the z-buffer. If it is, it draws on top of the previous result and writes its own depth to the z-buffer. If it is greater, we know the new object is behind the previous one, so we leave the pixel as-is.
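
    In code terms, the per-pixel test boils down to something like this small software illustration (not how a GPU or freeablo implements it, just the logic):

    ```cpp
    #include <cstddef>
    #include <vector>

    struct ZBuffer
    {
        int width = 0, height = 0;
        std::vector<float> depth; // one float per pixel, initialised to 1.0 (the far plane)

        // Returns true if the incoming fragment is closer than whatever is already
        // at this pixel; if so, it wins the pixel and records its own depth.
        bool testAndWrite(int x, int y, float fragmentDepth)
        {
            float& stored = depth[std::size_t(y) * width + x];
            if (fragmentDepth < stored)
            {
                stored = fragmentDepth;
                return true; // caller writes the colour for this pixel
            }
            return false; // this fragment is behind what was drawn before
        }
    };
    ```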

    In freeablo, what I did was have the sprite shader write the original index of the draw into the z-buffer (actually, a normalised value generated from its position in the draw order). This means we can use the z-buffer to keep the sprites in their original draw order, while issuing the actual draws in whatever order we want. As the sprites are drawn as textured quads, we also had to take the alpha channel into account when writing the z-buffer (transparent pixels are effectively infinitely distant from the camera).
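
    One possible way to turn a draw index into such a normalised depth value, assuming a conventional “less than” depth test where smaller values win, is sketched below; the function name is made up:

    ```cpp
    #include <cstddef>

    // Map a draw's position in the frame's submission order to a depth in (0, 1).
    // Later draws get smaller depths, so under a "less than" depth test they end
    // up on top of earlier draws, reproducing painter's-order results.
    // Fully transparent pixels would be discarded in the fragment shader, so they
    // never write a depth value at all.
    float depthForDrawIndex(std::size_t drawIndex, std::size_t totalDrawsThisFrame)
    {
        return 1.0f - (float(drawIndex) + 1.0f) / float(totalDrawsThisFrame + 1);
    }
    ```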

    In the end, this whole process resulted in a framerate bump on my machine from somewhere around 50 FPS to about 700. There is still low-hanging fruit (e.g. the game tiles are diamonds, but we draw them as non-rotated squares, so we’re drawing a lot of useless transparent pixels), but for now I think that will do. If you have any questions, or want to correct any mistakes I made in this post, please get in touch. I haven’t done much public technical writing before, so it would be nice to know if this was intelligible at all! You can reach me at wheybags at wheybags dot com, or PM me on the freeablo forums.


    So, that’s it for now. Stay tuned for more updates :)