Performance

Over the last ten years, software development has shifted toward more dynamic languages. For most software they offer enough performance, because they are used either for I/O-bound applications like web services or for OS-API-bound applications like desktop apps. But the gaming experience is directly affected by performance, and I am not just talking about visuals or FPS: even a tiny amount of controller lag can severely ruin the experience.

Avoiding stutter and lag

All dynamic languages have some form of garbage collector managing your memory. This not only means that allocation and deallocation carry higher overhead; the collection process itself is unpredictable, so you experience sudden lag and stutter in your game. These pauses may be barely noticeable visually, but the input lag is still there, and it defines how fluid your game feels. The worst part is the unpredictability: a constant 10 ms of lag is better than a random 30 ms, because randomness disrupts the flow of the game. You can try to lessen the effect with allocation pools, never creating or releasing objects during gameplay, but then you sacrifice either memory or the variety of objects in your game, which pushes you toward more generic approaches, which means more glue code and branching. As long as the GC is there, this is not a battle you can outright win.
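To make the allocation-pool idea concrete, here is a minimal sketch in C++. The Bullet type and its fields are purely illustrative; the point is that every object lives in a fixed array allocated up front, so gameplay code never hits the heap and there is no collector to pause the frame:

```cpp
#include <array>
#include <cstddef>

// Hypothetical game object; the fields are illustrative only.
struct Bullet {
    float x = 0.f, y = 0.f;
    bool alive = false;
};

// Fixed-size pool: all N Bullets are allocated up front, so acquiring
// and releasing them during gameplay involves no heap traffic at all.
template <std::size_t N>
class BulletPool {
    std::array<Bullet, N> slots_{};
public:
    // Returns a free slot, or nullptr when the pool is exhausted --
    // the caller must degrade gracefully instead of allocating more.
    Bullet* acquire() {
        for (auto& b : slots_) {
            if (!b.alive) { b.alive = true; return &b; }
        }
        return nullptr;
    }
    // "Freeing" is just flagging the slot for reuse.
    void release(Bullet* b) { b->alive = false; }
};
```

The trade-off described above is visible here: the pool's capacity and the object type are fixed at compile time, so supporting many kinds of objects means either many pools or a more generic (and more branchy) slot type.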

Not being inside a sandbox

Dynamic languages provide a controlled environment and limit your access to the OS API. This not only takes away a significant portion of your power as a developer, it also limits the optimizations you can come up with: you cannot implement alternatives as fast as engine-level components. In practice, that means your game will look quite similar to others built on the same engine.
