Recently I started reading a book about code optimisation that focuses on the x86 architecture and assembly language. The book is called Graphics Programming Black Book and is written by Michael Abrash, who is well known for his technical articles and for his coding work on games such as Doom and Quake. Whilst the book has a graphics focus, the skills it teaches can be applied to any high level language or assembly language you happen to be using. I’ve dabbled with assembly code in the past and have found some parts of the book quite interesting. The book is available online to read as HTML or as a PDF download; you can find it easily by googling Michael Abrash and the name of the book.
After my recent testing of data structure performance, and having begun reading this book, I decided to look over all my Java code and try to optimise it. For some time I have been working on writing some Android games, and in the process have built myself a nice graphics library and the basics of a few games. In testing and tuning my code I’ve found some good rules of thumb for writing fast Java, not just for the Android platform but in general.
The first thing I’d recommend is not to use Iterator classes anywhere in your code, but to use plain for loops instead. Iterators are objects and add overhead to your application in multiple ways. Firstly, they occupy heap space and need to be garbage collected when you are finished with them. That may not seem all that bad, but if you have several nested loops it can mean thousands of Iterator objects being created and destroyed on a regular basis. Secondly, every call to hasNext() or next() adds method call overhead, slowing down your loops. Again, for a small number of loops it seems insignificant, but added up over a large number of iterations the time taken to make those calls becomes quite significant. For these reasons, even when using API data structures such as ArrayList, I would avoid using Iterators.
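As a rough sketch of the difference, here are the two loop styles over an ArrayList; the class and method names are purely illustrative:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class LoopStyles {

    // Iterator-based loop: allocates an Iterator object and pays a
    // method call cost for every hasNext()/next() pair.
    static int sumWithIterator(List<Integer> values) {
        int total = 0;
        for (Iterator<Integer> it = values.iterator(); it.hasNext(); ) {
            total += it.next();
        }
        return total;
    }

    // Plain indexed for loop: no extra object, and get(i) is the only call.
    static int sumWithIndex(ArrayList<Integer> values) {
        int total = 0;
        final int size = values.size(); // hoist size() out of the loop test
        for (int i = 0; i < size; i++) {
            total += values.get(i);
        }
        return total;
    }
}
```

The indexed version allocates nothing per loop and makes a single get(i) call per element, whereas the iterator version creates an Iterator object and makes two calls per element.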
In very time critical code, it is best to build your own data structures out of primitive arrays rather than to rely on the API classes. The JIT compiler seems quite adept at optimising simple array based code so that it runs a lot faster than pretty much any API or object based solution. It’s also best to minimise the number of method calls in your important loops to keep overhead down.
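For example, here is a minimal sketch of a growable list backed by a primitive int array; the class is hypothetical and only meant to show the idea of keeping data in flat arrays with no boxing:

```java
// A simple list of ints backed by a primitive array, avoiding Integer boxing
// and keeping the hot path free of extra object allocation.
public class IntList {
    private int[] data;
    private int size;

    public IntList(int capacity) {
        data = new int[capacity > 0 ? capacity : 16];
    }

    public void add(int value) {
        if (size == data.length) {
            // grow the backing array when full
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;
    }

    public int get(int index) {
        return data[index];
    }

    public int size() {
        return size;
    }
}
```

In a really hot loop you can go a step further and index the backing array directly, skipping even the get() call.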
An interesting problem I ran into recently involved path finding within a maze in one of the games I am writing. The maze is randomly generated and starts out perfect (exactly one path between any two points), but cycles (or loops) are then introduced to make the game more playable. When I first wrote the path finder I used a depth first approach, which works quite well on perfect mazes since there is only one path from each point to any other. The problem with it was twofold: the algorithm was slow in larger mazes, and once cycles were introduced it often chose longer paths than it needed to. This perplexed me no end, so I went looking for the cause, optimising some of the code for better speed and writing extra code to try to get a better path. After trying all that without any luck, I decided to implement the breadth first search algorithm iteratively instead. It turned out to be a whole lot faster! I investigated why that might be, given that a depth first search should find a path (even if not an optimal one) more quickly. Comparing the code of the two algorithms led me to a simple conclusion: a single iteration of the depth first search took much longer to execute because it involved many more method calls and comparisons than the simple breadth first search does. So whilst the depth first search looked good on paper, its performance was poor in comparison, because the simpler breadth first search could get through more iterations in the same amount of time. The breadth first search is so much better that I was able to generate larger mazes and still find paths within a reasonable response time, even with worst case paths that searched most of the maze!
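To give a flavour of how simple the inner loop of an iterative breadth first search can be, here is a sketch over a grid maze. The maze representation (a flat walls array indexed as y * width + x) and all of the names are assumptions for the example, not the actual code from my game:

```java
import java.util.Arrays;

public class MazeBfs {

    // Returns the previous-cell index for each visited cell, or null if no
    // path from start to goal exists. Cells are indexed as y * width + x,
    // and walls[i] is true for a solid cell.
    static int[] findPath(boolean[] walls, int width, int height, int start, int goal) {
        int cells = width * height;
        int[] previous = new int[cells];
        Arrays.fill(previous, -1);
        previous[start] = start;

        // Queue as a flat int array: no Iterator, no boxing, no allocation in the loop.
        int[] queue = new int[cells];
        int head = 0, tail = 0;
        queue[tail++] = start;

        while (head < tail) {
            int current = queue[head++];
            if (current == goal) {
                return previous; // walk previous[] back from goal to recover the path
            }
            int x = current % width;
            int y = current / width;

            // Enqueue each unvisited, non-wall neighbour.
            if (x > 0) {
                int n = current - 1;
                if (!walls[n] && previous[n] == -1) { previous[n] = current; queue[tail++] = n; }
            }
            if (x < width - 1) {
                int n = current + 1;
                if (!walls[n] && previous[n] == -1) { previous[n] = current; queue[tail++] = n; }
            }
            if (y > 0) {
                int n = current - width;
                if (!walls[n] && previous[n] == -1) { previous[n] = current; queue[tail++] = n; }
            }
            if (y < height - 1) {
                int n = current + width;
                if (!walls[n] && previous[n] == -1) { previous[n] = current; queue[tail++] = n; }
            }
        }
        return null; // goal unreachable
    }
}
```

Each cell is visited at most once, the queue is a plain int array, and the loop body is little more than a few array reads and comparisons, which is exactly why it gets through so many iterations so quickly.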
It’s interesting how sometimes the unexpected solution turns out to be the best. It shows you should always test your code and spend a bit of time optimising the critical parts. In the end it makes the experience of the end user that much better, and lets you do more with less.