23 May 18

Freddy’s Rescue Round-up for DOS

Today's game is an early CGA game originally made for IBM in 1984 by D.P. Leabo and A.V. Strietzel. It was included on software sampler disks that came with many IBM PCs. You play as Freddy, who has to rescue all the road runners before the maintenance bots, which have gone rogue, harm them. It's a little bit like Lode Runner, but has a number of features that make it different. I saw LGR playing it in a video where he was unboxing a NOS IBM PC and thought it looked interesting.

Being an early IBM PC game, the only graphics standard supported is CGA, primarily because the other standards hadn't arrived yet. It runs on the slowest of IBM machines, so there is no scrolling and each room is the size of a screen. Performance on an old 4.77MHz machine should be quite reasonable, with perhaps a little graphical flicker. The game timing works independently of the CPU, so faster machines can play without issue. Artistically the graphics are quite well drawn for CGA, although you will notice everything is generally a combination of two colours in stripes. This was for use with composite monitors, which were capable of showing 16 colours. I can't show what it would have looked like because DOSBox doesn't display this particular program in its composite emulation mode. PC speaker sound is used for similar reasons; there just wasn't anything else at the time. The short snippets of music and sound effects are surprisingly charming, and suit the game quite well.

The gameplay has some common ground with Lode Runner: you have to collect the road runners rather than gold, and the levels consist mostly of platforms and ladders. There is a time limit for each screen, and you can burn holes in some floors in much the same way, but the enemies (maintenance robots) don't fall in; they stop and wait for the floor to reappear. On the other hand there are some significant differences. The robots are much less aggressive in their pursuit, and move significantly slower. The levels are larger than a single screen and you use doors to travel. White doors teleport you to the other white door on the screen and are an excellent way of avoiding being caught. Magenta doors take you to other screens within the level; once you collect all the road runners on a screen a second magenta door appears. You only finish a level once all the screens are cleared of both road runners and the power-ups that freeze the robots.

When it came to the game controls I was quite lost at first, as there is basically no documentation with the game telling you how to play. I managed to work out basic movement fairly quickly, as it's just the arrow keys, but it took some time to find out how to jump and make holes in the floor. This left me puzzled, as there were road runners I couldn't reach without using these features. You jump by pressing the space bar and left or right, which will clear a one-tile gap. Pressing the space bar on its own will dig a hole in the floor in front of Freddy, as long as it's a floor where that is possible. Once I learned the controls they worked quite well; it was just the lack of documentation that made things hard. The other main issue is that Freddy only moves in whole-tile increments. If you release the key whilst he is halfway between tiles he will keep going until he is completely on the next tile. This only really caught me out at the edges of platforms, as I'd overshoot and fall off the edge.

The level design is generally fairly good; there aren't many areas where you can get trapped by a single bot. If you set the difficulty level to normal or hard, though, there are more bots chasing you, which makes things significantly harder. The bots also behave differently to the bad guys in Lode Runner in a way that makes them harder to deal with: they spread out and cover a larger area of the screen, whereas the Lode Runner bad guys can be bunched together with some clever movement, effectively making them easier to avoid. Luckily you have a couple of tools for avoiding the bots, such as digging holes, using doors (when you can reach them), and the dots that freeze the bots.

I'd say Freddy's Rescue Roundup is a bit of a hidden gem, despite IBM making it public domain and the fact it was distributed with IBM PCs. Most of the usual places I look for DOS games didn't have it, but it can still be found on some abandonware sites. It could be that, because of its age, it's not as well remembered; either way it's certainly interesting and still quite fun to play. If you happen to own an old PC with CGA and possibly a composite monitor, this is worth giving a go.


16 Apr 18

SS20 Desktop: Renewed Vigour

Last time I had finally started to get to grips with the system hanging issues, having found that much of the problem came down to the SMP kernel issues related to the on-board SCSI that are still present in current NetBSD releases. I was fortunate that I'd been given a chance to try out a patch that made SMP much more stable (although not perfect). This gave me essentially four different configuration options. After thinking about it, I decided it would probably be prudent to make some measurements to determine which is the best way to go.

I have three Mbus modules (pictured above): a dual CPU SuperSparc @ 50MHz, a single CPU SuperSparc @ 60MHz and a single CPU HyperSparc @ 90MHz. The clock speeds can be a little misleading, as there is more to each module than that. The SuperSparc modules each come with 1MB of cache per CPU whereas the HyperSparc has only 256KB, and the dual CPU module runs on a slower Mbus @ 40MHz whilst the other two run at 50MHz. Additionally, the rough guide to Mbus modules, an essential site for anyone with a Sun machine like mine, suggested that the SuperSparc CPUs would actually perform better on a per-clock basis. Given all this it's not really clear which will be the best performer. From here on I'll abbreviate SuperSparc to SS and HyperSparc to HS.

Today we're going to look at the results of some of the intensive benchmarks I've put the modules through, and at the end decide on the best choice of configuration given the hardware I have on hand. All the tests are run with the same OS (NetBSD 7.1) and hardware, with the exception of the Mbus modules under test.

The first set of benchmarks is aimed at measuring basic CPU speed. The benchmarks I've used are Dhrystone (version 2.1), Whetstone and both the double and single precision versions of the Linpack benchmark. These tests measure single-threaded performance of the modules.

Just looking at these charts it's obvious that the HS is the fastest of the three modules. Given its higher clock speed that is to be expected, but it also attained higher scores per clock for all the tests except Whetstone. The Linpack tests show a large difference, with the HS running about 12% faster per clock for double precision and about 22% faster per clock for single precision. The Dhrystone test showed a much more subdued advantage, only about 7% faster per clock. The Whetstone test showed the HS was actually slower, performing floating point arithmetic about 11% slower per clock cycle.

Both SS modules performed about the same relative to their clock rate, which indicates the Mbus speed wasn't a large factor in these tests, and that the data size was likely smaller than the L2 cache (1MB). I would otherwise have expected the dual 50MHz module to be slower in single-threaded tasks, as its Mbus is slowed to 40MHz (as opposed to the 50MHz the others use).

I'm not sure how I feel about the results here; the data set size for the tests was almost certainly too small to exceed the capacity of the HS's 256KB cache. I'm not sure what to make of the Linpack results, but the Dhrystone and Whetstone results seem to indicate the HS core is better at integer and string operations and the SS core is better at floating point.

I selected the next benchmark because it offers speed measurements over a range of data sizes. The Sieve of Eratosthenes is a simple algorithm for finding prime numbers within a finite numerical space; rather than explain it myself, look here on Wikipedia for more details. One of its key features is that it is quite hard on a CPU's memory bandwidth, and its use of the cache is quite sub-optimal. I omitted testing the dual 50MHz SS module.
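For anyone who hasn't seen it, here is a minimal C sketch of the kind of loop a sieve benchmark times. This is my own illustration rather than the actual benchmark code, and the array size is just an example; the point is that the inner loop strides through the whole array for every prime it finds, which punishes the cache and memory bandwidth once the flags array outgrows the L2 cache.

/* Minimal Sieve of Eratosthenes sketch (illustrative only, not the
 * benchmark's source). Counts primes below 'size'. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

long sieve(unsigned char *flags, long size)
{
    long count = 0;
    memset(flags, 1, size);               /* assume everything is prime */
    for (long i = 2; i < size; i++) {
        if (!flags[i])
            continue;
        count++;
        for (long j = i + i; j < size; j += i)   /* strided writes: cache-unfriendly */
            flags[j] = 0;
    }
    return count;
}

int main(void)
{
    long size = 1L << 20;                 /* 1MB working set, for example */
    unsigned char *flags = malloc(size);
    if (!flags)
        return 1;
    printf("%ld primes below %ld\n", sieve(flags, size), size);
    free(flags);
    return 0;
}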

The results are quite interesting. The HS enjoys an advantage of about 14% per clock when the data set fits within its cache, but suffers quite a performance drop once the data set gets larger. Despite being 30MHz slower, the SS is faster for data sets small enough for its own cache but too large to fit in the HS's cache. I suspect this gap would be widest at just below 1MB data size, but the program didn't allow control over that. The worst data point shows the HS as 44% slower per clock. This is quite surprising; as the SS is not much faster than the Mbus speed (only 10MHz faster) I didn't expect its advantage at that data size to be so large. After the 1MB data size is exceeded, the HS starts to catch up again, but the data points don't get large enough to know if it ever achieves equal relative performance again. I'd imagine that once the data is large enough both modules would perform close to the same, as memory bandwidth becomes the limiting factor.

The next benchmark is similar in that it measures over a range of data sizes, but the algorithm is significantly different. The algorithm used is heapsort, a relatively efficient sorting algorithm used in many places; you can find more details here on Wikipedia. One of its characteristics is that it is much more cache friendly. Again I omitted testing the dual 50MHz SS.
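Again purely for illustration, here is a plain C heapsort of the sort the benchmark would be timing (a sketch of the standard algorithm, not the benchmark's actual source, and the array size is arbitrary). Most of the comparisons and swaps happen near the top of the heap, which is part of why it treats the cache more kindly than the sieve.

/* Plain heapsort sketch (illustrative only). Sorts 'a' ascending. */
#include <stdio.h>
#include <stdlib.h>

static void sift_down(long *a, long start, long end)
{
    long root = start;
    while (2 * root + 1 <= end) {
        long child = 2 * root + 1;
        if (child + 1 <= end && a[child] < a[child + 1])
            child++;                          /* pick the larger child */
        if (a[root] >= a[child])
            return;
        long tmp = a[root]; a[root] = a[child]; a[child] = tmp;
        root = child;
    }
}

void heapsort_long(long *a, long n)
{
    for (long i = n / 2 - 1; i >= 0; i--)     /* build the max-heap */
        sift_down(a, i, n - 1);
    for (long i = n - 1; i > 0; i--) {        /* repeatedly extract the maximum */
        long tmp = a[0]; a[0] = a[i]; a[i] = tmp;
        sift_down(a, 0, i - 1);
    }
}

int main(void)
{
    long n = 100000;
    long *a = malloc(n * sizeof *a);
    if (!a)
        return 1;
    for (long i = 0; i < n; i++)
        a[i] = rand();
    heapsort_long(a, n);
    printf("min %ld max %ld\n", a[0], a[n - 1]);
    free(a);
    return 0;
}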

Looking at the graphs, this test really needs some points at larger data sizes. I can only guess, but I'd imagine that the performance would eventually converge, given that memory bandwidth would become the dominant factor. The previous test indicates that there may even be a window in which the SS performs better, but without actual data we will never know.

Given that I'll be using this machine as a desktop workstation, I ran a benchmark known as x11perf. It simply tests the maximum speed of components of the X11 protocol. X11 is often known just as X for short, and is basically the software that Unix systems use to interface to video displays. The chart shows performance relative to the dual 50MHz SS (the yellow line represents it): a 2 is twice as fast, and 0.5 is half as fast. Each point on the X axis is a test, like line drawing for instance; there are so many tests (over 300) that it wasn't practical to separate and chart them individually. Out of interest I ran the dual 50MHz SS with an MP kernel to see if it made any appreciable difference.

There are some quite interesting features of this chart. Firstly you'll notice that both the faster modules have tests that are significantly slower than the dual SS (30-35% slower at worst). This is because those tests are CPU bound, and with a dual CPU module the X server and client can each have a whole CPU to themselves. Typically those tests involve little actual drawing to screen, like plotting points.

In general the dual 50MHz SS is slower than the faster modules. The SS @ 60MHz is about 1.15 times faster on average and the HS is 1.75 times faster on average. The HS is generally the best on the raw performance numbers, with some odd exceptions. Some tests seem to favour the SS @ 60MHz, which would be down to cache size.

Relative to their clock speed, the 60MHz SS does better than the HS, but I'd imagine this is due to the SBus limiting the maximum throughput to the frame buffer. The SBus only runs @ 25MHz, so it is almost certainly going to slow down a faster CPU when drawing.

The final test is one called Ramspeed. It's basically designed to measure memory bandwidth. I opted for the more general integer and floating point tests over the specific reading and writing tests, as they are more likely to represent a computational load. There are 4 tests. Copy creates two buffers and copies data from one to the other. Scale does the same, but multiplies each number by a constant as it copies. Add creates three buffers and stores the sum of the first two in the third. Finally Triad creates 3 buffers and adds two of them together (scaling one by a constant factor), storing the result in the third buffer. All buffers are the same size. The tests I've chosen only use buffers that are 32MB in size, so much larger than the caches of either of the modules. You can select the buffer size, and some tests available in the program test a range of sizes.
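As a rough sketch of what those four kernels look like, here is my own C approximation based on the descriptions above (this is not Ramspeed's actual source; the buffer size simply mirrors the 32MB setting mentioned). Each pass is just a simple streaming loop over buffers far larger than any cache, so main memory bandwidth dominates.

/* Illustrative copy/scale/add/triad kernels (not Ramspeed's code).
 * Three ~32MB buffers, so the working set dwarfs the module caches. */
#include <stdio.h>
#include <stdlib.h>

#define N (32u * 1024 * 1024 / sizeof(double))   /* ~32MB per buffer */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = calloc(N, sizeof *b);
    double *c = calloc(N, sizeof *c);
    double q = 3.0;
    if (!a || !b || !c)
        return 1;

    for (size_t i = 0; i < N; i++) a[i] = b[i];              /* Copy  */
    for (size_t i = 0; i < N; i++) a[i] = q * b[i];          /* Scale */
    for (size_t i = 0; i < N; i++) a[i] = b[i] + c[i];       /* Add   */
    for (size_t i = 0; i < N; i++) a[i] = b[i] + q * c[i];   /* Triad */

    printf("done: a[0] = %f\n", a[0]);
    free(a); free(b); free(c);
    return 0;
}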

The results are pretty bad for the HS; it achieves slightly better speed only for the copy operations, which shouldn't be surprising, as the Mbus should be the limiting factor. However for the other tests the SS performs quite a bit better, so much so in fact that I ran the tests many times just to make sure. This would appear to be down to the memory and cache architecture of the modules, not just the cache size, although that is certainly playing an important role in the HS failing to perform. The HS does have a significantly smaller L1 cache, having only an 8KB instruction cache versus a 20KB instruction and 16KB data L1 cache in the SS core.

Having now spent a couple of weeks testing these modules, I think we're starting to get a picture of what these chips can do relative to each other. The HS is clearly faster as long as the data isn't larger than its cache. The SS on the other hand isn't as fast at its peak, largely due to a lower clock speed, but handles larger data sets significantly better. The X11 test showed that it is quite beneficial to have multiple CPUs in a workstation, even if only for basic X11 applications. However it also shows the HS being quite a good choice. I think the tests also show there was some merit to the idea that the SS modules perform better relative to their clock speed, but they also show this is highly dependent on the work load.

So what am I going with, and what would I recommend? With the hardware I have, I'll use the HS @ 90MHz for running the machine as a workstation, as that makes it snappier to use in general. The flip side is that if I were to use the machine for a computational load, such as compiling a number of packages, number crunching, or as a basic server, the two SS modules would almost certainly perform much better as long as the job could be divided between the CPUs. Even the SS @ 60MHz has a good chance of doing computation better on its own. The HS on its own is disadvantaged by not being able to multi-task as well; I have noticed that X is in general less responsive when the machine is under load (compared to both SS modules together), so a second HS module would probably be a nice addition in the future.

If money were no object and I could have any parts at all, both Ross and Sun had decent offerings. The fastest SS is 85-90MHz, and two of these would certainly be quite fast. However I'd imagine they probably wouldn't be as fast as any pair of HS modules over 125MHz, so in the end the HS modules would be the way to go if you had access to anything. As it stands, looking around online it's actually really hard to find faster modules for a reasonable price. Among the SS modules, those over 60MHz are quite expensive and largely unavailable. The HS parts have a similar problem, but you can get 90MHz - 133MHz parts at fairly decent prices, although faster modules still command a high price, and slower modules wouldn't be worth it. Again, with what's available the HS seems the way to go.

I've tried to be as thorough as possible, but if you want to see the raw data and Gnumeric spreadsheet with calculations and charts, you can find them here.

28 Mar 18

Numjump for DOS

Today's game is another homebrew, made by Daniel Remar in 2017. It's quite interesting, as I'd describe it as a turn-based puzzle platform game, an odd combination indeed. He wrote it using QBasic and has included the source code along with binaries compiled for 16-bit MS-DOS as well as 64-bit Windows (using QB64 as the compiler).

In technical terms the game is fairly basic; it's essentially using a 40×25 text mode with 16 colours and the PC speaker for sound. Whilst simple, it's very effective, and the game is quite nice to look at for a text mode game. Sound is quite sparse, with few effects at all, but they are appropriate and don't become annoying the way some games do. Looking at the code, this could be ported to anything with a decent BASIC interpreter and a 40 column display mode.

What makes the game odd and interesting is its mechanics. Your character only really has two goals: collect gold and reach the exit. In order to do this you need to jump around a small level avoiding obstacles that trap or kill you. The player moves one step at a time. The jump mechanic is a bit hard to describe. You have a maximum jump power, which is the number of steps you can travel vertically. For each vertical step you take, you can also take one step horizontally, left or right. Once out of steps you must fall to the ground, and you can drop early at any point by pressing the down arrow.

It’s a bit tricky at first, but once you get the hang of moving around it works quite well.
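To make the rule a bit more concrete, here is a tiny C sketch of the jump as I understand it from playing. It's purely my own illustration of the mechanic described above, not the game's actual QBasic code, and the landing logic is left out.

/* Illustration of the jump rule: with jump power P you may rise up to P
 * tiles, taking one optional horizontal step per tile risen; pressing
 * down (here, supplying fewer steps) ends the climb early, after which
 * the player falls straight down. */
#include <stdio.h>

typedef struct { int x, y; } Pos;

/* moves[] holds one horizontal step (-1, 0 or +1) per vertical step taken. */
Pos jump(Pos start, int power, const int *moves, int steps_taken)
{
    Pos p = start;
    if (steps_taken > power)          /* can't rise further than jump power */
        steps_taken = power;
    for (int i = 0; i < steps_taken; i++) {
        p.y -= 1;                     /* one tile up */
        p.x += moves[i];              /* optional step left or right */
    }
    /* from here the player falls until landing (omitted) */
    return p;
}

int main(void)
{
    int moves[] = { 1, 1, 0 };        /* right, right, straight up */
    Pos apex = jump((Pos){ 5, 10 }, 3, moves, 3);
    printf("apex at (%d,%d)\n", apex.x, apex.y);
    return 0;
}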

The level design works well with these mechanics to make for a challenging but not punishing experience. If you do happen to fall foul of a trap, the level is simply reset without any further penalty. The traps are fairly basic: there are spikes, laser barriers that can be toggled, and some moving obstacles where timing is critical. Some areas are inaccessible until you've increased your jump power, and others require a bit of thought to find your way in, but in general the puzzles are solvable in a reasonably short period without being easy.

Numjump is fairly short, and you can finish it within an hour, but the length feels just about right. You get just enough of each type of hazard and puzzle to feel satisfied, but not so much as to become repetitive. Once you've finished the game, you can go looking for all the secret dots, or make an attempt at collecting all the gold, with a secret level as the reward for each achievement. It's fun and well designed, so I'd say it's definitely worth a go. I downloaded it from the DOS Haven site here; the official site linked there is the author's Twitter account.


20 Mar 18

Trying the Campbell Cassette Interface

Some time ago I acquired an interesting bit of vintage tech, the Campbell Scientific C20 cassette interface. Since then it had been sitting on my bench looking lonely, so I decided that I should at least try it out before I salvage any of the many useful chips it has inside. I have found a user manual for it, along with information confirming that it is indeed what I thought it was: an interface for reading data encoded on audio cassettes by data loggers.

Not having any audio cassettes with appropriately encoded data, however, created a very simple problem: how exactly should I get it to do anything at all? It turns out that whilst the C20 is primarily designed for reading from tape, it is possible to get it to write one as well, so I thought we might as well look at the encoding on my oscilloscope and record a sample of the audio.

So I connected my oscilloscope to the output and a serial line to my old MS-DOS machine. After twiddling with the serial settings both on the machine and in Kermit I managed to get a welcome message and menu from the device which confirms that at least the CPU and serial lines are working. Unfortunately this is about as far as I have gotten.

The manual is exceptionally useful, providing not only information about basic use, but also more detailed technical information and example programs in BASIC for operating the device. I've translated one of these programs for writing data to tape, and it seems the device is receiving the data I'm sending, but nothing appears on the output that I can see. I've not worked out if it's something as simple as not connecting the scope correctly or if there is some hardware failure.

So unfortunately not as much to report as I’d like, but time has been quite limited and something is better than nothing. I’ll keep trying in the short term.

21 Feb 18

MagiDuck for DOS

I was browsing the web recently when I stumbled across DOS Haven, a site devoted to homebrew games made for MS-DOS machines. This is a welcome and quite unusual find, as there isn't much of a homebrew scene for these machines, as opposed to other platforms like the C64 or MSX which have larger and thriving homebrew communities.

Though not featured on DOS Haven, I found today's game from a news item there. MagiDuck is an action platform game made for the IBM PC. It was made by Toni Svenström, with the latest beta release in 2016. It has especially low system requirements, only needing an 8088 @ 4.77MHz, CGA and 256K of RAM, which covers pretty much any MS-DOS machine except those with MDA displays or small amounts of memory. The low system requirements come about partly because of the graphics mode used, which is a hacked text mode that allows for 80×50 with 16 colours, similar to but not the same as that used in Paku Paku.

Although the graphics are quite blocky due to the low resolution, the artwork is of quite high quality. Magiduck, the enemies and the levels are all colourful and cute. On the technical side the game animates quite smoothly on even minimal hardware, and even manages vertical scrolling. Because early PCs didn't have sound cards, only the PC speaker is supported, and the sound is fairly good for that device.

The game controls and responds quite well, in the way that most PC platform games do. Although the key layout is a little different, with z and x used for jump and fire, it works just as well as the usual control and alt key layout. Magi jumps and moves as you'd expect, and jumping around is fairly straightforward, which is good because the levels are quite vertical. Each level is basically a tower; you start at the bottom and work your way up to a star which represents the end.

I quite like the level design; like the sprites, the levels are colourful and fun. There is some challenge, but not so much as to be painfully difficult. Whilst they are quite narrow (a limitation of the engine, it seems) there are a number of paths of varying difficulty through each level. You can spend time collecting treasure and keys from all the paths for extra points, or speed run the game for a time bonus.

Magiduck is technically very impressive and is very well designed and built. It does have some minor flaws, but generally they don't impact getting enjoyment out of it. The hardware it can run on is very impressive; the original IBM PC was not considered capable of scrolling colourful graphics until later machines got much more powerful and the first EGA/VGA cards became commonplace. This game can do it on an original PC @ 4.77MHz with a CGA card. If you own an old machine this is certainly something you should give a try; you can find it on IndieDB here.


22 Jan 18

Open Access for DOS

Open Access is a suite of office software made by a company called Software Products International (SPI). We got this software with our first PC in early 1990, but the software is copyrighted 1986. Dad used the spreadsheet function to manage the farm's finances until we upgraded to Works a few years later, and I used to experiment with the word processor and graphics (charting). I've been meaning to post about this program for a while, but I had difficulties getting it to install and work properly under DOSBox. I eventually had to resort to a full machine emulator (PCem), which allowed me to use all the components.

The word processor module is fairly simple; typing a document is fairly straightforward, but there are few features built in. Some notable omissions are the lack of a spell checker and thesaurus. There are only a few formatting options, basically the usual bold, underline and italics, which are displayed as different colours. As a text editor it does serve its purpose reasonably well, but isn't as easy to use as something like Works.

The spreadsheet module got the most use, mostly from my Dad. He said it was a big improvement over doing the books by hand, which was a tedious and time consuming job. This module is much more feature rich and would probably have been comparable with contemporary competitors. Something I did notice whilst playing around was that the formula system uses a syntax I don't remember. It's possibly the same as Lotus 1-2-3 (as that was seen as a standard), but without the manual I can't make full use of it. Something notable here is that values in formulae do not update automatically; you have to select the recalculate option to refresh those cells.


The graphics module was for creating charts. It supported the CGA and EGA graphics modes, although the emulation I have can only demonstrate the CGA mode. There isn't any facility for entering data; instead you need to export a selection of data from either the spreadsheet or database module.

The info management (database) module was something we never used, and I wasn't able to work out how to use it well enough to get a decent screen shot. Ironically it was one of the most popular features of the software, and what made it useful for many people. It supported a subset of the SQL language and was capable of storing what was an exceptional amount of data for the time. Later versions included a dialect of BASIC called PRO, which made it a platform for developing database driven applications.

Using Open Access is not very intuitive for a user today, as the interface is designed around using the keyboard's function keys. Unfortunately there isn't much documentation within the program itself; instead it comes with extensive printed documentation (which is presently at my parents' place). At the time it was released this was completely normal, and if you used it often enough you'd soon remember all the function keys, so it wasn't seen as a downside.

14 Dec 17

SS20 Desktop: Kernel Issues

Over the past few weeks I've been continuing my work trying to get the latest NetBSD working on my Sparcstation 20. The system has been hanging and I'd had trouble working out why, so I turned to reading as much as I could to see if I could find any clues. I found someone on the mailing list suggesting that not all SCSI drives co-operate with the on-board controller when running an MP (multi-processor) kernel on later versions, so I looked through my collection of SCA drives to see if I had a different model I could try. I found I had an IBM Ultrastar disk of around 18GB, so I swapped the Fujitsu drive (model MAJ3182MC) out for it. Surprisingly this made my system behave much better; it would install and run on the uni-processor kernel with no issues at all, where the Fujitsu drive seemed to cause the system to hang frequently under disk access.

However, booting with an MP kernel would still hang within about 20 minutes or during disk access, so it was at this point I joined the mailing list to ask others what I could do to resolve the issue. The people on the list are quite friendly and have been very helpful in troubleshooting. It seems that there are some kernel bugs related to MP present in 7.1 that are at least partially resolved in more recent versions of the kernel. Like most open source OSes, the current stable release is a version or two behind where the developers are currently working. It seems there is some possibility of the fix being back-ported to 7.1, and I tested out a patched MP kernel that was greatly improved in this respect. It still hung, but after a much longer period of time, and only when provoked by a specific program. Feedback from the mailing list also seems to indicate that choosing not to use the on-board SCSI is another way I could work around the problem.

So I now have multiple options for running my system. I could switch to using a single processor, with the option of either a 60MHz SuperSparc (currently installed with a dual 50MHz module) or a 75MHz Ross HyperSparc, and everything should work well. Alternatively I could acquire an SBus SCSI card to connect my hard drives, or forgo a local disk entirely by using network booting and an NFS share, either of which avoids having to use the on-board SCSI. Finally I could use the system as it is now with the patched 7.1 kernel; it worked well enough that this is quite feasible. I'm leaning towards booting the machine over the network at the moment.

In the short term with Christmas approaching, I’ll be putting the project aside until I have more time in the new year.



