Archive for May, 2014

26 May 14

Pango for DOS

Pango

Pango is an extremely old game for the original IBM PC, written back in 1983 by Sheng-Chung Lui. It is a port of the Sega-published arcade game Pengo, which came out the year before. Instead of being a penguin like in the arcade game, you are a green blob. Your job is to hunt down and capture or squish the bees (yellow blobs) roaming around the level. You do this by kicking blocks at them or stunning them by kicking the wall when they are adjacent to it.

Game Elements

Being made for the oldest type of PC, the graphics support is CGA only and sound comes from the PC speaker. Both are OK for the time, but not as nice as more polished games like Alley Cat. It was rather difficult getting DOSBox set up exactly right for this game as it doesn't have nice timing code. I had to set the cycles down to 250 (you can adjust this up or down to suit you) and the machine type to CGA to get reasonable results. Other websites suggest about 500 cycles, which is actually much faster than a normal 4.77MHz PC and makes the game run too fast. I'd say the best way to play it would probably be on the original hardware, which I unfortunately don't have.
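For reference, the relevant settings live in the dosbox.conf file and end up looking something like this (the section names are from the standard config file; tweak the cycles value to suit your machine):

[dosbox]
machine=cga

[cpu]
cycles=250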

First Novice level

Once I got it working I found it was pretty fun, although the control scheme had me confused and getting killed frequently. It is quite sensitive to how many cycles DOSBox is set to; more cycles can make it quite difficult to stop where you want to. Once I found the right number of cycles with the aid of a DOS-based benchmark utility it got much easier to play, and after some practice the game became much more fun.

In summary I think Pango is probably worth a look if you happen to own an old IBM PC, or are just curious about games from that era. Being from such an old machine means that the graphics and sound aren't all that impressive, so fans of the arcade game will probably wish to emulate that instead.

23 May 14

Downloads

Today I've added a page where I'll be linking to relevant downloads for stuff I've posted. This will most likely consist of Java/Pascal/BASIC code and perhaps the odd binary if I feel like it. I've only got one available at the moment: an improved version of the code used for my recent post about recursive and iterative procedures. I managed to get the iterative depth first search to behave the same as the recursive one, which resulted in an impressive performance increase. Now the iterative function is generally faster and only occasionally slightly slower (the reverse of what happened before). This is the result I had expected when I first coded the test. I'm still waiting on the larger-scale tests, which are now running the improved code.

I will gradually add more stuff for download, nothing large as I’m using my own connection for hosting. I’ll add various bits of code I’ve mentioned in the past, my games, and maybe a few things that can’t be found easily on the web anymore provided they are freeware.

21 May 14

Recursive and Iterative Procedures

Today I'm looking at two different types of procedures: recursive and iterative. Many problems can be solved with either technique, and each has its own benefits and drawbacks.

For those not acquainted with the terminology, an iterative procedure accomplishes computation with a block of code that repeats many times until a terminating condition is met. The repeating block of code is commonly called a loop and can take many different forms. One of the main drawbacks of an iterative design is that it can become quite complicated, making it harder to understand its function. However, this is a trade-off to get higher performance and to keep the size of the program stack small.

Recursive procedures, however, are quite different as they rarely make use of loops. Instead they achieve repetition by having the procedure invoke itself. The Fibonacci function is a classic case: Fib(x) = Fib(x-1) + Fib(x-2), where the first two entries are hard coded to 1. It's important that recursive functions have a base case or a maximum depth to the number of calls, otherwise an error called a stack overflow can occur. A stack overflow is simply when the program stack overflows with too much data. The program stack has to store the local variables and return addresses for every call that hasn't finished yet, and recursive functions typically use up a lot of stack space. Another disadvantage is that function calls in many modern OO languages are relatively expensive in terms of time required, as some languages require a pointer or two to be dereferenced to access the function.
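To illustrate, a recursive version of the Fibonacci function might look something like the following in Java (I've called it fibRecursive here to keep it separate from the iterative version below):

public int fibRecursive(int z)
{
  if (z <= 2)
    return 1; // base case: the first two entries are hard coded to 1
  return fibRecursive(z - 1) + fibRecursive(z - 2); // the function invokes itself twice
}

Each call that isn't a base case spawns two more calls, which is where the function call overhead adds up.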

Fortunately all recursive procedures can be converted into iterative ones, and the Fibonacci function is again a classic and trivial case. The code might look something like this…

public int fib(int z)
{
  if (z <= 2)
    return 1; // guard for the first two entries, which are hard coded to 1
  int[] series = new int[z];
  series[0] = 1;
  series[1] = 1;
  for (int i = 2; i < z; i++)
    series[i] = series[i-1] + series[i-2];
  return series[z-1]; // end of the array contains the result.
}

Normally I'd say iterative functions are usually faster, and in the case of the Fibonacci function that can be demonstrated, but with more complicated problems and algorithms it is much harder to prove. It's simpler to construct the code and measure the performance! So to get an idea of the impact of function call overhead I decided to implement a problem in Java using both a recursive and an iterative algorithm.
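For the Fibonacci case the comparison is easy enough to make with a crude timing loop; something along these lines (inside a main method, with the counts picked arbitrarily) is all that's needed:

long start = System.nanoTime();
int sink = 0;                         // accumulate the results so the JIT can't discard the calls
for (int i = 0; i < 100000; i++)
  sink += fib(20);                    // iterative version from above
long iterativeNanos = System.nanoTime() - start;

start = System.nanoTime();
for (int i = 0; i < 100000; i++)
  sink += fibRecursive(20);           // recursive version from above
long recursiveNanos = System.nanoTime() - start;

System.out.println("iterative: " + iterativeNanos + "ns  recursive: " + recursiveNanos + "ns  (" + sink + ")");

The maze tests below follow the same idea, just timing path searches instead of Fibonacci calls.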

The problem I have chosen is finding a path within a maze. I chose it because I already had some maze generation code, and the depth first search algorithm should have similar performance when implemented both ways. The maze generated is supposed to have loops in it (i.e. it is not perfect), so I had to code the two procedures carefully to ensure they would pick the same path. I included a breadth first search algorithm (implemented iteratively) as well for comparison; it is guaranteed to find the shortest path, whereas both the depth first search implementations are not.

The testing procedure involved generating a maze and determining the rate at which each algorithm could compute paths in it. The start and end points were set randomly, but the random number generator seed was reset to the same value at the start of each algorithm's run. This means they should all have to compute the same paths. I set the program to repeat this process 10 times to get some results for today, but I have a more powerful machine repeating the process 1000 times; I just have to wait several more days for those results.
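The seed trick is just the standard java.util.Random constructor. Roughly (the variable names here are only for illustration, not my actual code):

long seed = 12345L;                       // any fixed value, reused for every algorithm
Random rng = new Random(seed);            // re-created with the same seed before each run
int mazeCells = 256 * 256;                // e.g. a 256x256 maze
int startCell = rng.nextInt(mazeCells);   // the sequence of start/end points is therefore
int endCell = rng.nextInt(mazeCells);     // identical for every algorithm being timed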

In implementing depth first search with both techniques there was a minor difference in the algorithm that I couldn't avoid at the time. The recursive algorithm is smarter than the iterative one in a subtle way. When moving back up the tree after hitting a dead end, the recursive algorithm intrinsically remembers which directions it has already tried at the nodes it returns to. The iterative algorithm isn't as clever; it checks all the directions again (in the same order). This should make the iterative algorithm slower, as it does more processing when moving back up the maze after hitting a dead end. How much slower? I don't know, but hopefully not much.
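To show what I mean, here is a simplified sketch of an iterative depth first search that does keep that knowledge, by storing the next direction to try alongside each cell on an explicit stack. The maze representation and names here are made up for illustration and aren't my actual code:

import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeDfs
{
  // one entry on the explicit stack: a cell plus the next direction to try from it
  private static class Frame
  {
    final int cell;
    int nextDir = 0;                        // 0..3, e.g. north, east, south, west
    Frame(int cell) { this.cell = cell; }
  }

  // maze[cell][dir] gives the neighbouring cell in that direction, or -1 for a wall.
  // Returns true if a path from start to goal exists; the stack then holds that path.
  public static boolean findPath(int[][] maze, int start, int goal)
  {
    boolean[] visited = new boolean[maze.length];
    Deque<Frame> stack = new ArrayDeque<Frame>();
    stack.push(new Frame(start));
    visited[start] = true;

    while (!stack.isEmpty())
    {
      Frame top = stack.peek();
      if (top.cell == goal)
        return true;
      if (top.nextDir >= 4)
      {
        stack.pop();                        // every direction tried: backtrack, like a recursive return
        continue;
      }
      int next = maze[top.cell][top.nextDir++];   // remember which direction was just tried
      if (next != -1 && !visited[next])
      {
        visited[next] = true;
        stack.push(new Frame(next));        // descend into the neighbour, like a recursive call
      }
    }
    return false;                           // no path between start and goal
  }
}

The nextDir field is essentially the bookkeeping the recursive version gets for free from its call stack; without it you end up re-checking every direction each time you return to a cell, which is the difference described above.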


The results are interesting but unfortunately not definitive. You can see that the two depth first search algorithms are fairly close in terms of rates, although the recursive algorithm tends to be faster until the maze size gets larger. I have tried it a number of times to see how consistent the results are. I found that the breadth first search generally gives fairly consistent results, but the two depth first search algorithms can vary depending on the mazes generated. More often the recursive algorithm is faster but in some data sets the iterative one is faster.

I did some other small-scale tests and it seems the processor has an effect on the results. I'm guessing the JIT compiler has different optimisations for different processors, which changes the results. It is entirely possible that the JIT is optimising away the expense of the recursive calls, although I think this is unlikely.

This leads me to believe I need more data, so I'll wait until I get results from the machine testing 1000 different mazes. I did these smaller tests on my MacBook and was unable to test with mazes larger than 512×512 because of a stack overflow. The machine I'm doing the larger tests on manages a maze size of 1024×1024 because it has a different Java run-time (running under Linux). I might have a go at making the iterative depth first search function more like the recursive function in terms of smarts, but we'll see if the larger tests reveal anything more.

13 May 14

God of Thunder for DOS

God of Thunder

God of Thunder was designed by Ron Davis and released by Software Creations (which became Impulse games at one point) in 1993. It is a top-down adventure/action/puzzle game in which you play Thor, the god of thunder. Your father Odin has asked you to free the realm of Midgard from Loki and his allies, who captured the land during the last Odinsleep. You are given the magic hammer Mjolnir to liberate the lands for Odin.

The story

It is an interesting combination of genres, as at times it plays like an adventure, action, or puzzle game. The adventure aspect comes from collecting items and talking to many NPCs for information and items. They don't officially send you out on quests, but they will sometimes ask for assistance or just complain about something you can fix. It's a good idea to talk to everyone, as sometimes what they tell you now will be helpful later.

Introductory puzzle

The action segments consist of you dodging enemies and their projectiles whilst throwing your hammer at them to destroy them. This can be quite tricky when there are many enemies on the screen. Everything moves quite quickly (even on a slow machine) and the projectiles are usually quite accurate so you have your work cut out for you. Fortunately magic items you pick up during your adventure may help with these sections.

Spiders can shoot?

Puzzle elements in the game include pushing blocks around to block shots from some creatures that you cannot kill. There are also some switch puzzles, and you can use your hammer to activate switches remotely if needed. I haven't run into anything too difficult yet, but I've not had enough time to play through the whole game.

Bad apple

One of the first things I noticed when starting up the game was the high quality of the in-game graphics and animation. VGA is the only graphics mode supported, as you would expect, and the game seems to run well even on the equivalent of a slow 386 machine. I can understand the choice, as translating graphics this detailed to a lesser mode like EGA or CGA can be very difficult.

Magic apple

The music is of a similarly high quality. The tunes are quite catchy and enhance the mood of the game, and the music changes depending on where you are and what is happening on screen. The game has digitised sound effects for all the usual events such as being hurt. The sound quality was quite good, and I felt the sound effects fit the game quite well.

Entering the village

Because I've not had tonnes of time to play it prior to today, I've not managed to finish the first episode, even after many hours of play. I've enjoyed everything so far, so I don't mind investing the time I have. There is humour sprinkled about that has kept me amused during the time spent in the village. I even ran into a rock troll that demanded a shrub before he would let me past! (Ni!) I found the trickier puzzles and combat sections forgiving, as there are unlimited lives and upon death you simply restart the screen with the resources and score you had when you entered.

Shrubs are valuable?

God of Thunder isn't a well-known game, possibly because it wasn't funded as well as it could have been (according to Adam Pederson's website, anyway). I've enjoyed playing it and I think it's probably one of the better-crafted DOS games from the era. It is unique in its presentation and gameplay and is certainly worth a look, especially as it has been freeware for quite some time.


04 May 14

Ethernet Hubs

Today switched Ethernet networks are the norm, but before the price of switches came down hubs were the main means of linking many nodes together. I have two old-school hubs in my collection: one from my early home network and one I rescued from scrap during my time working in IT support.

The main difference between a hub and a switch is basically how smart they are. A hub is an inherently dumb device that distributes packets of data to every listening node. About the only function it does perform is detecting packet collisions on the network. Collisions happen when two nodes (a node being any device using the Ethernet) try to talk on the same link at the same time. This corrupts the data being sent and requires the nodes to resend it. Because a hub effectively links all the nodes together on one shared link, collisions can be a serious issue. Typical data rates for hubs are 10Mbit/s, as they were displaced by switching technology as speeds increased.

Other forms of Ethernet, such as the once-common coaxial type, also had a single shared link, but suffered fewer collision problems. This was because a node could look at the cable directly to see if anyone was using it and then wait until it was free before transmitting. Collisions were still possible, just less likely, which usually improved throughput.
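This listen-then-transmit, wait-then-resend behaviour is the classic CSMA/CD scheme with truncated binary exponential backoff. As a toy illustration only (real adapters do this in hardware, and SharedLink here is entirely hypothetical), the logic amounts to something like this:

import java.util.Random;

public class CsmaCdSketch
{
  private static final Random random = new Random();

  // Truncated binary exponential backoff: after the nth collision, wait a random
  // number of slot times between 0 and 2^min(n,10) - 1 before trying again.
  static int backoffSlots(int collisionsSoFar)
  {
    int exponent = Math.min(collisionsSoFar, 10);
    return random.nextInt(1 << exponent);
  }

  // Very rough outline of how a node sends one frame on a shared link.
  static boolean sendFrame(SharedLink link, byte[] frame)
  {
    for (int attempt = 1; attempt <= 16; attempt++)   // classic Ethernet gives up after 16 tries
    {
      link.waitUntilIdle();                           // carrier sense: listen before talking
      if (link.transmit(frame))                       // no collision detected while sending
        return true;
      link.waitSlots(backoffSlots(attempt));          // collision: back off, then retry
    }
    return false;                                     // too many collisions, report failure
  }

  // Placeholder for the shared medium; purely hypothetical.
  interface SharedLink
  {
    void waitUntilIdle();
    boolean transmit(byte[] frame);
    void waitSlots(int slots);
  }
}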

Switches are a completely different beast: they are effectively a node within the network with a multitude of links coming out of it. A switch has a CPU that inspects each packet and sends it only to the Ethernet port it is destined for. Because each node connected to the switch has its own separate link, the possibility of collisions is either significantly reduced or eliminated, depending on whether the link is half- or full-duplex. With every node potentially talking at once a switch has to be very good at inspecting and routing packets quickly, so that is usually done with specialised hardware designed to work at Ethernet speeds. Of course a switch can still be overloaded, and depending on the hardware this causes packet loss. Switches can have data rates of up to 1Gbit/s at the moment.
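To make "sends it only to the port it is destined for" a bit more concrete, here is a toy sketch of the address table a switch builds up as frames pass through it. The class and method names are made up, and a real switch does all of this in dedicated hardware rather than software:

import java.util.HashMap;
import java.util.Map;

public class ToySwitch
{
  // learned mapping from a node's MAC address to the port it was last seen on
  private final Map<String, Integer> macToPort = new HashMap<String, Integer>();
  private final int portCount;

  public ToySwitch(int portCount) { this.portCount = portCount; }

  // Called for every frame that arrives: learn which port the sender lives on,
  // then forward only to the destination's port, or flood if it's still unknown.
  public void handleFrame(int inPort, String srcMac, String dstMac, byte[] frame)
  {
    macToPort.put(srcMac, inPort);              // learn/refresh the source's port

    Integer outPort = macToPort.get(dstMac);
    if (outPort != null && outPort != inPort)
      sendOut(outPort, frame);                  // known destination: one port only
    else if (outPort == null)
      for (int p = 0; p < portCount; p++)       // unknown destination: flood,
        if (p != inPort)                        // but never back out the way it came
          sendOut(p, frame);
  }

  private void sendOut(int port, byte[] frame)
  {
    // placeholder: a real switch would queue the frame on the port's transmit hardware
    System.out.println("frame of " + frame.length + " bytes out port " + port);
  }
}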

I only have one switch that isn’t so interesting to look at and is currently in service. So I’ll only show the hubs I own.






