Before I talk about game engine architectures, I thought it would be useful to say a little about how I understand software architecture and how it relates to games. Firstly, architectures exist, no matter what people say about game development. Secondly, there is more than one. This may help you understand why the rest of the articles are written in this order, or in no particular order. At worst, when you get drawn into an argument about how disgusting (or, conversely, amazingly brilliant) certain game engines and their architectures are, you will have a couple of arguments at hand and an understanding of what is what.
It is symbolic that the article about game engine architecture appears after the talks about strings, multithreading, and the use of algorithms: that is just how it happens in real life. First we write the code, the editor, the game - the backbone of the project gets fleshed out - and then we are overtaken by the problems that everyone ignored because we had to deliver at least something resembling a working version. But the fact that we ignored those problems and swept them under the rug of the backlog did not stop them from being problems.
You won’t get any knowledge about allocators, containers, or the math behind game physics from this article. Nor will I aim to teach you how to use A* to find NPC paths or how to model room reverb. Instead, there are thoughts about the code in between all of this - and not so much about writing code as about organizing it.
Every program has its own architecture: “stuffed everything into main() and somehow it works” is also a kind of architecture, so I think it will be more interesting to talk about what makes a game engine architecture good.
Nobody writes a game engine just for the sake of it. Either it is born in the process of creating a game, out of the need to formalize, secure, and preserve individual parts of the game for further use - an editor appears, then a resource build system, a separate input-output layer, the renderer is split off into its own system, and much more - and you automatically get a group of people responsible for supporting the editor and the engine. Usually this happens after the release of the game, with the realization that it is impossible to keep living like this.
The second option is that the game itself is at once an editor, an engine, and a build system that can assemble itself into finished levels and logic. All game engines and editors emerged from games; some games (and then their engine-editors) became successful, and on the wave of that success the teams managed to carve out extra time for engine development.
By this time, a person usually appears on the team (grows into the role, or, less often, is invited in) who defines, specifies, manages, and controls the components inside the game engine, ensuring their interaction and integrity within the entire system. You can call him the Architect, but more often this is the Elder of the studio’s programming side, someone who has spent more than half of his time working with QA, programmers, designers, and everyone else.
Every workday for the last ten years I have been looking at the code of games - different games and different engines, mostly large and quite mature in development terms. Of course, like any programmer who has worked in a particular field for a long time, I have my own understanding of good design. And I think everyone has experienced moments when the code in front of them was so bad that the best thing to do was to comment it out and write everything anew next to it. The best, however, does not mean approved by the boss - old and terrible legacy code is usually relatively well tested and its pitfalls are known, while in the new code you have yet to find them.
Few of us are lucky enough to work with perfectly designed code - the kind of code, project, or engine that feels like a luxury American sedan from the 70s: it has everything, you just need to know where it is, and it is ready to hit a hundred with a light touch of the shoe to the floor. Working with different code bases, I began to notice that game engines resemble the teams that write them. When a team grows beyond five people, it begins to organize its work, and often this leads to a division of tasks and responsibilities based on the technical abilities of each team member. Everyone does what they can and know: an internal database, the renderer, physics, the editor, tests, etc. Such a division seems quite logical from an organizational point of view - each person in their place, which helps to reduce the load and increase efficiency. It seems so, but there is another effect: the division leads to different parts of the system being unable to interact effectively, and development becomes difficult, because specialists work in isolated subsystems, not communicating with each other on important issues. This artificial separation ultimately makes it difficult to share knowledge and leads to weak integration between game engine components.
This relationship between the structure of an engine and the internal organization of a team was noted back in the 1960s by Melvin Conway. Conway’s Law states that the organizational structures involved in designing systems ultimately shape the architecture of those systems. Simply put, it is difficult for organizations to create systems that are structurally or functionally independent of how those organizations arrange their communications. And this applies not only to game engines - it applies to any software a team makes. When programmers or architects design a system, they unconsciously reproduce in the code structure the same divisions that exist in the organization itself. If the development team is divided into several parts - one works on the front end, another on the back end, a third on the database - then the system will be riddled with the same divisions. The same happens with game engines: an engine is a mirror of our communications within the studio, company, or team.
To fix the software architecture, Jonny LeRoy proposed changing the communications within the team (the so-called Inverse Conway Maneuver). The idea is that to create a more harmonious architecture, the structure of the teams and the organization must evolve in parallel with the architecture of the system. That is, instead of building a system that will reflect organizational “weaknesses”, the organization should be changed in a way that encourages the creation of a more integrated and flexible system.
For example, if the architecture of a system requires tight integration between different parts - say, between gameplay and the renderer - then rather than forcing those teams to work separately, organize them so that they interact with each other directly. This might involve joint teams, pair programming, or cross-reviews, which leads to better knowledge sharing and a healthier design.
Unity
When I first encountered Unity - specifically the guts of the engine, not the editor’s appearance - it was, to put it mildly, a shock. From a game engine that had conquered the mobile world I expected a sensible architecture and some kind of development plan, but instead I saw a patchwork of components with three build systems (this was 2014, everything could have changed since, but I don’t really believe it has), somehow put together and held in place by glue in the form of mono-vm, just so it would work, with a bunch of platform-specific hacks and comments in the style of “Don’t remove this space”, “Don’t compile on Independence Day” or “First compile with this constant; if it doesn’t compile, set it to 0”. To build the engine and editor there was a separate wiki page with 117 steps (not a typo, one hundred and seventeen) describing what to do after what, which libraries to build first, which to rebuild again at step X, which flags and options to set where, and who to pray to in case of failure.
But if you look at the history of Unity, there should be fewer questions. It all started with the development of their own game GooBall, conceived in 2001 by students David Helgason, Joachim Ante and Nicholas Francis. Their studio Over the Edge Entertainment faced the typical problems of indie developers of the early 2000s: expensive engine licenses, complex code, and a lack of resources. The guys made the game, but four years of struggle ended in failure - the game did not sell very well. Out of the source code of that game, those approaches, and that team, Unity was born.
In 2005, Unity 1.0 was shown at Apple’s developer conference, though without any big announcements. An engine tailored for Mac OS X was not supposed to take off in a gaming world dominated by Windows. But that turned out to be its trump card: the Mac had always been popular among designers, and later it was unopposed among the pioneers of mobile development for the iPhone. And the engine had something to show: instead of complex programming systems it offered a component approach - objects were assembled like Lego from ready-made blocks: physics, animation, scripts. The visual editor, rare for the mid-2000s, let you literally drag and drop scene elements with the mouse. And support for C# and the simplified UnityScript meant scripting did not demand much effort. The industry noticed the engine.
The turning point came in 2008, when Unity came out on Windows, increasing its audience tenfold. But the real takeoff happened two years later, after the launch of the Unity Asset Store - a marketplace for ready-made models, textures and scripts. And considering that most of the junk there was priced at a buck, or even free, it became manna from heaven for indie studios and solo enthusiasts. At the same time, the engine added support for iOS and Android, which coincided with the smartphone boom. Suddenly, anyone with a laptop and an idea could create a mobile game. This is how Temple Run (2011) and Monument Valley (2014) appeared. A year later, in 2011, EA, Blizzard and Ubisoft paid attention to the engine, concluding long-term contracts for support and licensing (effectively buying the source code) for internal use. By 2015, nearly half of mobile developers were using Unity.
But under the hood it was still a patchwork of libraries: three different build systems, a fleet of home-grown reinventions of everything, with partially usable EASTL, boost and the standard C++ library side by side, and different parts of the engine could use different STL implementations, which led to the banal need to copy data, for example, between the renderer, which lived on EASTL, and the editor core, which used custom classes and containers. Add Mono nailed on at the side, with which all of this had to share data, and we get the classic game engine architecture of the early 2000s. So as not to be too “mean”, I will call such an architecture unitary (Unitary). In the English-speaking world, there is a more accurate name for such projects.
Big Ball of Mud
Americans call this style of writing code without a clear structure the “Big Ball of Mud” - an anti-pattern described in 1997 by Brian Foote and Joseph Yoder.
A “Big Ball of Mud” is a chaotic, unsystematic, hastily patched-together mass of incompatible components. Such systems show clear signs of uncontrolled growth and hasty fixes. Data is passed indiscriminately between remote system elements, often to the point that almost all important information is globalized or duplicated. The overall structure of the system may never be clearly defined. If it was, it may blur beyond recognition over time. Developers with even a modicum of architectural sense avoid such morasses. Only those who are indifferent to architecture and who are willing to put up with the daily grind of patching up chaos will agree to work on such systems.
— Brian Foote
In today’s reality, “unitary architecture” can describe, for example, a simple game where event handlers are wired directly to the processing logic, without any internal buffering or dispatching. Most games start out this way, which eventually leads to manageability problems as they grow. For a game with three mechanics and a couple of screens this is not a problem, but the lack of structure gradually makes changes more and more difficult, and the system itself starts to suffer from problems with deployment, testability, scalability, and performance.
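To make the term concrete, here is a minimal, deliberately exaggerated sketch of what such code tends to look like. The names and calls are invented for illustration and are not taken from any real engine; the point is that a single event handler reaches straight into logic, audio, UI and global state.

```
// A sketch of "unitary" code: one input handler does everything and touches
// everything. Invented names, no real engine implied.
#include <initializer_list>
#include <iostream>
#include <string>
#include <vector>

struct Player { int x = 0, y = 0, hp = 100; };

Player g_player;                    // global state, shared by everyone
std::vector<std::string> g_log;     // also global, also shared

void onKeyPressed(char key) {
    if (key == 'w') g_player.y -= 1;                        // movement logic
    if (key == 's') g_player.y += 1;
    if (key == ' ') {
        g_player.hp -= 10;                                  // combat logic
        g_log.push_back("player hit");                      // logging
        std::cout << "play_sound(hit.wav)\n";               // audio, called inline
        std::cout << "redraw_hud(hp=" << g_player.hp << ")\n"; // UI, called inline
        // ...and somewhere around here a save to disk, "just in case"
    }
}

int main() {
    for (char key : {'w', ' ', 's', ' '}) onKeyPressed(key);
}
```

It works, it ships, and for a small game it is even fine - the trouble starts when the tenth subsystem gets wired into the same handler.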
This anti-pattern is common. Rarely does anyone plan to create a “unitary architecture”; many projects simply slide into it due to a lack of control over the quality and structure of the code. In such a system, any change in one class leads to unpredictable side effects in others, turning improvements into a nightmare. In the worst case, by the end of the project there is a mess of source files, resources, and technical engine files. What began as a project of three students is now supported by a team of 400 engineers - Unity Technologies has 5,000 employees, and less than 10% of them are programmers. Below is a diagram of the parts of the Unity Engine; it looks decent, even not bad, until you get into the engine’s guts and try to change something. And this is a diagram of class connectivity from the engine’s internal wiki: each chord is a connection between data from different classes and subsystems shown in the picture above.
Unreal Engine
In 1991, Tim Sweeney, the founder of Epic Games (then still Epic MegaGames), began creating editing tools for his first games. It all started with an adventure puzzle game with very simple graphics called ZZT.
It featured combat against various creatures and puzzles in top-down, maze-like levels. Using simple ASCII characters to render characters, enemies, and environments, ZZT ran in DOS text mode. At the time, the game wasn’t considered revolutionary in terms of graphics or gameplay, but Tim Sweeney’s approach to programming - especially the built-in level editor and its ZZT-OOP scripting language - established the principles of modularity that would later evolve into the Unreal Engine.
In 1992, Epic released Jill of the Jungle, a platformer for DOS, which featured more advanced tools such as sprite animation, motion physics, and particles. But the main thing was the emphasis on the component approach: Tim used data structures built from standard components - probably already an engine in its own right - which allowed the behavior of objects to be redefined without rewriting all the code. It was possible to create new types of enemies, change their AI, or add interactive elements to levels through scripts.
While most studios wrote code more or less “from scratch” for each project, Tim’s approach was to split the game into an “engine” (low-level rendering, physics, sound) and “content” (resources and logic). This approach was later embodied in the Unreal Engine, announced in 1998 together with the game Unreal. That very first engine already consisted of independent components: the common core, the renderer, the UnrealEd level editor on top of them, and a separate scripting language, UnrealScript - object-oriented, with class inheritance, which simplified the creation of mods.
Other features included dynamic collision physics, 16-bit color, dynamic lighting for up to three sources, and the ability to launch the game from a level editor. The game was a huge success, selling over 1.5 million copies. In 1999, Epic released its second game, Unreal Tournament, which was essentially a bug fix, but added in-engine network support.
The second version of the Unreal Engine debuted in 2002 with America’s Army, a free-to-play multiplayer shooter designed to boost the popularity of military service. It was the first time the Army had used large-scale gaming technology for anything other than internal use, and the game won several awards, including “Best Use of Tax Money” from Computer Games Magazine and “Biggest Surprise of the Year” from IGN.
Layered architecture
Multi-tier (layered) architecture, also known as the n-tier or component approach, is one of the most common ways to organize code in game development. Even though Unreal Engine was for a long time developed in the style of one specific person, it remained workable for a large team, providing structure and scalability for the engine and its parts. Look at how the connections between the engine components are organized: yes, of course, there will be additional connections between different parts, but there are an order of magnitude fewer of them, and the community constantly fixes the issues that are found.
And this multi-level architecture maps naturally onto the component system. Each component, be it an Actor, the Blueprint systems, or C++ classes, forms a separate level of abstraction, which has become the de facto development standard and the basis of game architecture for most studios that use this engine. The layer stack looks roughly like this (a small sketch of the idea follows the list):
- Game-Specific Subsystems
- Gameplay Foundations
- Rendering, Profiling & Debugging, Scene Graph/Culling, Visual Effects
- Collision & Physics, Skeletal Animation
- AI, Audio, Input (HID)
- Resource Manager
- Core Systems
- Platform Independence Layer (Networking, File System)
- 3rd Party SDKs (DirectX, OpenGL, PhysX)
- Hardware
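The key property of this layering is that calls travel strictly downward: each layer knows only about the layer beneath it and nothing about the layers above. Below is a rough sketch of that idea; the classes and functions are invented for illustration and are not Unreal’s actual API.

```
// A rough sketch of layering: each layer depends only on the layer below it.
// Invented names, not Unreal's actual API.
#include <cstdio>
#include <string>
#include <vector>

// Platform independence layer: wraps OS / 3rd-party SDK calls.
namespace platform {
    std::vector<char> readFile(const std::string& path) {
        std::printf("platform: read %s\n", path.c_str());
        return {};
    }
}

// Core systems / resource manager: uses only the platform layer.
namespace core {
    struct Resource { std::string name; std::vector<char> bytes; };
    Resource load(const std::string& name) {
        return { name, platform::readFile("content/" + name) };
    }
}

// Rendering: uses core, never reaches down to the platform layer directly.
namespace render {
    void drawMesh(const core::Resource& mesh) {
        std::printf("render: draw %s\n", mesh.name.c_str());
    }
}

// Gameplay foundations: uses rendering and core, knows nothing about the OS.
namespace gameplay {
    void spawnProp(const std::string& meshName) {
        core::Resource mesh = core::load(meshName);
        render::drawMesh(mesh);
    }
}

int main() {
    gameplay::spawnProp("barrel.mesh");   // the call chain goes strictly top-down
}
```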
According to Conway’s Law, the structure of the code reflects the communication structure of the development team. The Unreal development team is divided into small groups: UI and user input, gameplay programmers (who create the gameplay systems), AI engineers responsible for the tools you will use to script your future NPCs, objects and monsters, and a large team of engine optimization engineers. This organizational structure naturally carries over from the engine to the teams that make the games themselves. Individual developers can, if they wish, take on roles in different groups, or intern and move from group to group; that is the company’s policy.
But people are people. Unreal development is quite comfortable and relaxing, and sometimes teams slip into a “default architecture”, simply following engine templates and accepted practices without much thought about why any of it is needed. It is worse when a “unitary architecture” starts to creep into a project and the team simply “starts coding” without a clear plan. For some time they will unconsciously follow the multi-level approach that the engine carries with it, but without proper structure and support everything eventually arrives at a “Big Ball of Mud”.
As of 2023, Epic Games employs about 4,000 people worldwide. This includes the teams working on games, publishing, the store, and the engine itself. According to various estimates from open sources, from 500 to 1,000+ specialists work directly on the engine. But here the company counts, in addition to the programmers working on graphics, physics, and optimization (about 200 people), the technical support engineers who maintain the repos on GitHub and accept new commits, tech writers and course engineers, platform developers (VR/AR, mobile), and a large department that handles the Virtual Production direction, CAD integration, and the wishes of large gaming and film studios (about 400 more people).
If you are interested in reading about game engine architectures, I recommend paying attention to the series of articles about the Quake engine.
https://fabiensanglard.net/quake2/quake2_software_renderer.php
https://fabiensanglard.net/quake2/quake2Polymorphism.php
https://fabiensanglard.net/quake2/quake2_opengl_renderer.php
Microkernel architecture
In the early 2000s, when developers were looking for ways to create increasingly realistic and immersive game worlds, a unique game engine with a fundamentally new approach to architecture appeared on the scene. CryEngine, developed by the German studio Crytek, was revolutionary not only due to its stunning graphics, but also due to its innovative microkernel architecture.
It all started with a small technology demo called “X-Isle: Dinosaur Island”.
The tech demo, created by the three Yerli brothers (Cevat, Avni and Faruk), caused a sensation among publishers and developers. At the time, most game engines were built as monoliths with tightly coupled components, but the brothers had a different vision. Inspired by the concepts behind QNX, they transferred the principles of microkernel architecture to a game engine, laying the foundation for what would later become one of the most technologically advanced engines of its time.
Investors and studios believed in this approach, in which CryEngine keeps only critical functions in the central core while the remaining components work as separate modules. This modularity allowed components to be replaced without risking the stability of the entire system, and at some point there were implementations that allowed individual DLLs to be reloaded at runtime, right during gameplay - which made it possible, for example, to update NPC behavior without pausing the game and to fix logic bugs without restarting; only truly critical errors led to a crash. And even in that case there was a fallback to a standard module, which allowed the crashed component to be reconnected so play could continue. In addition, this approach allows new features to be developed in parallel, and subsystems to be debugged and optimized without rebuilding the entire engine. The brothers promoted the idea of creating games of various genres on one technological base, but something did not work out, and the engine remained “the best engine for FPS”.
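The mechanics behind that are easier to show than to describe. Below is a sketch of the microkernel idea itself, not CryEngine’s actual code: the kernel knows only a tiny module interface, loads implementations from shared libraries at runtime, and falls back to a built-in default if loading fails. POSIX dlopen is assumed here; on Windows it would be LoadLibrary/GetProcAddress.

```
// Microkernel sketch: the core knows only IAiModule, the implementation lives
// in a shared library and can be swapped at runtime. Not CryEngine's code.
#include <dlfcn.h>
#include <cstdio>
#include <memory>
#include <string>

// The only thing the kernel knows about this subsystem.
struct IAiModule {
    virtual ~IAiModule() = default;
    virtual void think(float dt) = 0;
};

// Built-in fallback used when the external module is missing or fails to load.
struct NullAi : IAiModule {
    void think(float) override { std::puts("ai: fallback module, NPCs stand still"); }
};

// Each external .so/.dll is expected to export: extern "C" IAiModule* createAiModule();
using CreateFn = IAiModule* (*)();

std::unique_ptr<IAiModule> loadAi(const std::string& path) {
    if (void* lib = dlopen(path.c_str(), RTLD_NOW)) {
        if (auto create = reinterpret_cast<CreateFn>(dlsym(lib, "createAiModule")))
            return std::unique_ptr<IAiModule>(create());
    }
    std::printf("ai: failed to load %s, using fallback\n", path.c_str());
    return std::make_unique<NullAi>();          // the game keeps running
}

int main() {
    auto ai = loadAi("./ai_module.so");         // hypothetical module path
    ai->think(0.016f);
    // On a file change the kernel could drop `ai` and call loadAi() again,
    // swapping NPC behaviour without restarting the game.
}
```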
The first commercial game to demonstrate the power of the microkernel approach was Far Cry (2004), which impressed with its huge open spaces, long draw distances, realistic vegetation, and a world that could dynamically adapt to the player’s actions. The second version of the CryEngine, used in Crysis (2007), became the de facto technological benchmark for new generations of video cards, setting the bar for graphics performance, and the phrase “But can it run Crysis?” became a meme among gamers.
The microkernel approach of one specific engine influenced the entire game industry, popularizing concepts such as scalability, fault tolerance, and easier porting to new platforms and architectures. It also influenced other engines, which began to adopt some of these approaches. In fact, after the release of the second Crysis, scalability and adaptability became part of the usual practice of game engine development in general.
Modern CryEngine continues to develop these ideas, but moving more towards the increasing use of artificial intelligence for the toolkit, and of course graphics. Nevertheless, the microkernel philosophy laid down by the Yerli brothers influenced the gaming industry, showing that modularity, flexibility and scalability can go hand in hand with high performance.
Now the engine has effectively moved to an open source model ( https://github.com/o3de/o3de ) in the form of O3DE, an evolution of the Amazon Lumberyard engine, which in turn was based on the 2015 version of CryEngine. Amazon bought a license to the CryEngine source code, reworked it and published it on GitHub, first as the licensed Lumberyard source, and later handed it to the community as O3DE, for free. About $50 million had been poured into the engine by the time it went open source, which by some estimates is more than, or at least comparable to, what the development of Unity or Unreal cost.
The micromodular architecture is now represented as independent components (gems) that can be freely added to or removed from a project. Amazon, and later the community, added a full-featured editor with visual tools that requires almost no programming if you stick to the editor tools, and a scripting system (Lua and Python) for creating game logic without having to dig into the guts and the C++ code. Today, O3DE is still in an active development phase; although the engine is quite functional, it is not as widely used as Unreal or Unity.
Dagor
Among the famous game engines, mostly American-made, there are nevertheless home-grown developments that have had a significant impact on the industry. One of these technologies is the Dagor Engine, developed by Gaijin Entertainment. This engine is notable not only for its technical capabilities and its ability to run on just about any hardware (there was news somewhere that it was even launched on Elbrus), but also for its unique approach to architecture based on data-driven design principles. The engine is now open source, so you can see for yourself what’s what ( https://github.com/GaijinEntertainment/DagorEngine ).
The engine began its history in the early 2000s as the studio’s internal tool for developing its own games. It became famous after the release of the flight simulator “IL-2 Sturmovik: Wings of Prey” (2009), where it demonstrated the ability to handle complex physical models and create realistic visual effects, and also showed what data-driven approaches could do.
Data-driven architecture
Data-driven architecture is an approach to software development in which the application logic is determined primarily by data rather than hard-coded. In the context of game engines, this means that most of the game world, objects, and their behavior are described in data files (often JSON, XML, or other structured formats) that are interpreted by the engine at runtime. This brings certain advantages: separation of data and code; declarative definition (objects, their properties, and interactions are described in a declarative style - “what” to do rather than “how” to do it); and dynamic configuration (changes are made without recompiling the code or even restarting the game).
All these properties naturally grow into a component system that is defined and configured through data files. Where in traditional engines changing the behavior of game objects often means writing or modifying code, compiling and then testing it, here game designers and artists can tweak the parameters of objects, effects, and even basic behavior by editing data files, as they say, on the fly. The example is, of course, so-so, but I saw with my own eyes how, at an internal Cuisine Royale tournament, the admins spawned random level objects during the match simply by dropping blk files (an analogue of json) into the level folder - and cars, weapons, refrigerators, and a couple of tanks rained down on the players from the sky. Data-driven architecture naturally supports a component approach to designing game objects, and over time you start to think in this paradigm yourself and cannot imagine how it was ever possible to work differently.
Each object in the Dagor Engine can be composed of a set of components, each responsible for a specific aspect of functionality. This approach makes it easy to create new variants of objects by combining and customizing existing components, and again, all of this is done on the fly without recompiling the engine and often without even restarting the game.
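As a rough illustration (this is not Dagor’s actual API, and the component names are invented), an entity can be assembled from a parsed description; a plain key-value map stands in here for a parsed .blk block.

```
// Data-driven composition sketch: the entity is built from data, not code.
// Invented names, a std::map standing in for a parsed .blk block.
#include <cstdio>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

using Blk = std::map<std::string, std::string>;   // stand-in for a parsed .blk

struct Component {
    virtual ~Component() = default;
    virtual void describe() const = 0;
};

struct Armor : Component {
    int thickness = 0;
    void describe() const override { std::printf("armor %d mm\n", thickness); }
};

struct Gun : Component {
    std::string caliber;
    void describe() const override { std::printf("gun %s\n", caliber.c_str()); }
};

// Factory registry: which component to build for which data key.
std::map<std::string, std::function<std::unique_ptr<Component>(const std::string&)>> factories = {
    { "armor", [](const std::string& v) {
        auto c = std::make_unique<Armor>(); c->thickness = std::stoi(v);
        return std::unique_ptr<Component>(std::move(c)); } },
    { "gun", [](const std::string& v) {
        auto c = std::make_unique<Gun>(); c->caliber = v;
        return std::unique_ptr<Component>(std::move(c)); } },
};

int main() {
    // "t-34.blk", already parsed: designers edit this file, not C++ code.
    Blk t34 = { { "armor", "45" }, { "gun", "85mm_zis" } };

    std::vector<std::unique_ptr<Component>> entity;
    for (const auto& [key, value] : t34)
        if (auto it = factories.find(key); it != factories.end())
            entity.push_back(it->second(value));

    for (const auto& c : entity) c->describe();   // a new variant = a new .blk, no recompile
}
```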
Contrary to popular belief that interpreting data at runtime slows down the game, a proper implementation of data-driven architecture can provide high performance. Indirectly, but you can judge this by the ability to run a game like Tundra with acceptable fps on the Nintendo Switch, and this is far from the fastest hardware.
One of the cornerstones of data-driven architecture is the resource system. All game data is organized as a hierarchical structure of configs, each of which has a unique identifier and can be loaded on demand. Using the Dagor Engine as an example, it might look like this:
/data/
/vehicles/
/tanks/
/t-34/
model.blk
textures/
diffuse.dds
normal.dds
specular.dds
weapons.blk
collision.blk
damage_model.blk
/weapons/
/guns/
/85mm_zis/
ballistics.blk
visual_effects.blk
.blk files are a special structured data format used in Dagor, similar in functionality to JSON but optimized for fast loading and processing, with the ability to override sections and properties. When loading, the game can override the properties of an object from a versioned config. It looks something like this, and the final config will end up with rendinstDistMul = 0.8:
visual_effects.blk
```
graphics{
enableSuspensionAnimation:b=no
rendinstDistMul:r=0.5
grassRadiusMul:r=0.1
}
```
visual_effects.@1.blk
```
graphics{
override@rendinstDistMul:r=0.8
}
```
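Conceptually, applying such an override is just a merge of two blocks: the versioned file wins for every property it mentions. The sketch below is a guess at that idea in plain C++ and is not Dagor’s real loader.

```
// Override merge sketch (not Dagor's real loader): properties from the
// versioned file replace same-named properties in the base block.
#include <cstdio>
#include <map>
#include <string>

using Block = std::map<std::string, std::string>;   // the "graphics" section as key/value pairs

Block applyOverride(Block base, const Block& patch) {
    for (const auto& [key, value] : patch)
        base[key] = value;                // add new keys, replace existing ones
    return base;
}

int main() {
    Block graphics = {                    // visual_effects.blk
        { "enableSuspensionAnimation", "no"  },
        { "rendinstDistMul",           "0.5" },
        { "grassRadiusMul",            "0.1" },
    };
    Block patch = { { "rendinstDistMul", "0.8" } };   // visual_effects.@1.blk

    Block merged = applyOverride(graphics, patch);
    for (const auto& [key, value] : merged)
        std::printf("%s = %s\n", key.c_str(), value.c_str());
    // rendinstDistMul ends up as 0.8, the other properties keep their base values.
}
```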
Despite all its advantages, data-driven architecture has significant disadvantages. The first is the difficulty of debugging - since the behavior is determined by data, not code, debugging becomes very labor-intensive, requiring specialized tools to track how changes in the data affect the behavior of the system, and often leads to parallel support for visual debugging tools specific to a particular engine. The second is the performance of interpretation - although the engine can be optimized to process data efficiently, data interpretation still adds overhead compared to hard-coded logic.
X-Ray Engine and Monolithic Architecture
There are many technologies in the history of the gaming industry that were ahead of their time. One of them is the X-Ray game engine, created by programmers Aleksandr Maksimchuk and Oles Shishkovtsov for the S.T.A.L.K.E.R. series. Although the game was originally supposed to be about robots on an unknown planet with an anomaly zone and lasers, the guys then realized that there was no need to invent a planet and robots - a zone with anomalies and the right atmosphere was a three-hour drive from the office. First demonstrated back in 2001, this engine became the technological basis for one of the most atmospheric universes in gaming.
The graphics side of the engine was impressive at the time of its release: high detail - up to 4 million polygons per frame, well above what games were doing even in the late 2000s; large-scale spaces - the engine handled both enclosed interiors and open areas of up to 2 square kilometers equally well; a dynamic time of day - a full day-night cycle with corresponding changes in lighting; and weather effects such as realistic rain, wind, and fog. The dynamic lighting system deserves special mention - even today it produces memorable shots and creates an unforgettable atmosphere.
For physics simulation, X-Ray used the free Open Dynamics Engine (ODE), released in 2001. This open-source library provided rigid body dynamics and collision detection suitable for simulating vehicles, creatures, and objects in a changing world. In theory, thanks to the high stability of its integrator, the system should not “explode” without reason. In practice, players of the first Stalker are well acquainted with the numerous physics anomalies - from flying bodies to strange object behavior - which became a kind of “feature” of the series, spawning memes and funny videos. But let it be a feature of the Zone; anomalies, after all.
The engine is a classic example of a monolithic structure - not to be confused with a unitary one: it is divided into parts, and the connections between them are reasonably minimized. Yet unlike solutions where the systems work relatively independently, X-Ray is a tightly integrated system in which all the components are inextricably linked.
Monolithic architecture has both certain advantages and serious disadvantages. Close integration between systems (graphics, physics, AI) and the ability to create gameplay mechanics based on component connections allow for more efficient use and planning of resources through the use of custom allocators, object packing, fast message queues, etc. Direct interaction between components without additional layers of abstraction ensured maximum speed on the hardware of that time. Memory and processor time can be precisely managed, which was critical for such a demanding project.
The unified architecture allowed the developers to implement a holistic vision of the engine, the game world, the tools, and the subsystems, where everything interacts naturally. Simply put, it is an approach to building software in which the various components of the system are closely interconnected and function as a single whole, and it is impossible to remove or isolate a separate part without losing functionality and speed.
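A sketch of what “functioning as a single whole” tends to mean in code is below; the names are invented and this is not X-Ray’s actual source. Subsystems hold direct pointers to one another, so calls are cheap, but no part can be pulled out or replaced in isolation.

```
// Monolith sketch: subsystems reference each other directly, one object owns
// and wires everything. Invented names, not X-Ray's source.
#include <cstdio>

struct Renderer;

struct Physics {
    Renderer* renderer = nullptr;     // physics pokes the renderer directly
    void simulate();
};

struct Renderer {
    Physics* physics = nullptr;       // the renderer reads physics state directly
    void drawDebug() { std::puts("renderer: draw physics debug shapes"); }
};

struct AI {
    Physics* physics = nullptr;       // AI raycasts through the same physics world
    void think() { std::puts("ai: raycast against physics world"); }
};

void Physics::simulate() {
    std::puts("physics: step");
    renderer->drawDebug();            // no events, no interfaces, just a direct call
}

struct Engine {                       // one object owns and wires everything
    Renderer renderer; Physics physics; AI ai;
    Engine() { physics.renderer = &renderer; renderer.physics = &physics; ai.physics = &physics; }
    void frame() { physics.simulate(); ai.think(); }   // one thread, one frame loop
};

int main() { Engine engine; engine.frame(); }
```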
But what is a strong point in terms of performance also limits scalability, complicates updating individual components, and causes problems, especially on multi-core systems. And the complexity of developing such a system and fixing errors in it grows exponentially, since changing one part entails changes in its neighbors.
In fact, this complexity became the main technical problem of X-Ray. Besides the random crashes known to all fans of the series (hello, “green beetle”), another problem was the microfreezes and stutters seen in all games of the series. They were especially noticeable in “Shadow of Chernobyl” and the popular Misery modification; in extreme cases the game could turn into a slide show even at a normal frame rate. The root of the problem lay in the architecture of the engine, which relied on the resources of a single core - a limitation that became increasingly critical with the arrival of multi-core systems.
Officially, the latest version of the engine is X-Ray Engine 1.6.02, used in Call of Pripyat. However, the dedicated fan community did not stop there. Enthusiasts took up the task of refining the technology, creating unofficial versions of the engine that fixed many critical errors, added new features and, most importantly, implemented support for multi-core processors and multithreading.
Despite the technical problems, the engine remains a significant milestone in the history of game development. Its strengths - dynamic lighting, sophisticated physics, the A-Life life simulation system - were ahead of their time and brought many wow moments to fans, and perhaps this monolith has finally fulfilled someone’s wish for a dream game.
The monolithic architecture, with all its advantages and disadvantages - and most importantly its deliberate application - became a good example of one particular approach to building game engines, and a valuable lesson for the industry. Later, Aleksandr and Oles moved to another studio, but the concept of an engine cannot change overnight, and neither can experience and established practice, so these ideas found their continuation in another series of games - the one about the Metro.
Godot and micromodules
From the outside it may seem that game development moves at a rapid pace: new approaches to rendering, neural networks for animation, model generation and voice acting - but inside it relies on tried and tested solutions that are very, very difficult to break or change. I would even say that game development outside of conferences is very conservative: large studios do not want to take risks, and small ones simply do not have the money or time for it. Something truly new in the basic solutions appears quite rarely. Against this background, it becomes especially interesting to drag concepts in from the web world, as happened with the Godot engine and microservice architecture.
Most architectures get their names after the fact, from the community or from conferences where someone spots a pattern and starts building on it - there is no secret group of architects deciding what the next big movement will be. Rather, many developers end up with similar solutions as the engine ecosystem changes and evolves, and the best ways of dealing with those changes become the architectures that others emulate. Microservices are different in this regard: there was no engine or game that used the principles first, and the term itself was named and popularized by a 2014 blog post by Martin Fowler and James Lewis called “Microservices”, where they outlined the common characteristics of this relatively new architecture - and I see a lot of the ideas from there reused in Godot. It is all about breaking a complex system down into many small, autonomous services, each responsible for a specific function.
In the context of Godot development, this concept translates into a “micromodular architecture,” where the game is not a monolithic colossus of code, but a system of interacting components. Physics, artificial intelligence, the user interface, the audio system—each of these elements can be implemented as a separate module with a clearly defined interaction interface. I wouldn’t say that this approach seems ideal to me, but at least it allows you to focus on individual aspects of the game without worrying about how changes will affect other parts of the project.
The engine appeared on the gaming scene in 2014 and has since won the hearts of many developers for its simplicity and reliability. Named after the famous character from Samuel Beckett’s play Waiting for Godot, Godot’s uniqueness lies in its approach to organizing game logic through a system of nodes and scenes. Imagine a constructor where each element of the game - from the character to the interface - is a separate block that can be easily connected to others.
Although Godot was not originally designed with a microservices architecture in mind, its node system and modularity fit well with microservices principles. This all resulted in a micro-modular architecture, where different components can be developed completely independently of each other.
Godot’s signaling mechanism provides a rather interesting way of connecting components. Imagine that the player has picked up an item - the inventory system sends a signal to which other systems can respond: the interface updates the display, the achievement system processes the progress of the picked up items, and the perk system recalculates the character’s stats, and all this despite the fact that none of these systems knows about the implementation details of the others, or perhaps even about their existence.
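The sketch below shows the underlying idea in plain C++ rather than Godot’s actual signal API (in GDScript a signal is declared with the signal keyword and wired up with connect()): the emitter keeps a list of callbacks, subscribers register themselves, and neither side knows anything about the other’s internals.

```
// Signal/observer sketch of the idea behind Godot-style decoupling.
// Not Godot's API, just plain C++.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Signal {
    std::vector<std::function<void(const std::string&)>> slots;
    void connect(std::function<void(const std::string&)> fn) { slots.push_back(std::move(fn)); }
    void emit(const std::string& arg) { for (auto& fn : slots) fn(arg); }
};

struct Inventory {
    Signal itemPicked;                        // "signal item_picked(name)" in GDScript terms
    void pickUp(const std::string& item) { itemPicked.emit(item); }
};

int main() {
    Inventory inventory;

    // UI, achievements and perks subscribe independently; Inventory knows none of them.
    inventory.itemPicked.connect([](const std::string& i) { std::printf("ui: show '%s'\n", i.c_str()); });
    inventory.itemPicked.connect([](const std::string& i) { std::printf("achievements: progress (%s)\n", i.c_str()); });
    inventory.itemPicked.connect([](const std::string& i) { std::printf("perks: recalc stats after %s\n", i.c_str()); });

    inventory.pickUp("medkit");
}
```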
Actually, the main idea of the engine is that new functions are added by creating new modules rather than by changing existing ones. Component modules can be reused in other projects, and if a module contains a bug, it is less likely to affect the operation of the entire game. It will affect it, of course (a game cannot be truly distributed), but at least it will crash less often.
But we must remember that such an architecture is not a panacea either. For small projects this approach is ideal: you throw in component modules, wire up the signals, and voila - everything works beautifully. As the project grows, the connections and mechanics start to accumulate excessive complexity and become unproductive: the signal system turns into a bottleneck, and modules begin to conflict over updates. And here we must understand that micromodules are justified where they really bring benefit. For example, separating game logic from its visual representation is almost always justified, while splitting closely related mechanics into separate parts-services-modules will come back to haunt you later as unnecessary complexity.
Besides, unlike traditional microservices in web development, which run as separate processes or even on different machines, in a game all the components function within one (or a few) processes. This removes the need to solve the interaction and synchronization problems typical of classic microservice systems, which would only get in the way at such “short” distances.
Nevertheless, this approach finds its fans: in 2024, more than a dozen relatively large games were released on this engine. Of course, that is not thousands, as with Unreal or Unity, but it is a serious achievement for a project that is effectively run by five people and a community.
Which is better?
I have described only the engines I have worked with in practice for a long time, but there are many more, and each has something special about it. I wish I knew which one is better, but I don’t have an answer - with so many options available, I simply don’t know which one to choose. It depends on many factors within the project: what kind of games you are developing, the experience and the people on the team. Despite everything said above, there is one rule that applies to any project and any architecture: make a game - a released game on a unitary architecture is a hundred times better than an unreleased one, no matter what architecture it was made on.