Posts Tagged ‘software’

01 Oct 19

Old Tech: The AccessGrid

Today I’d like to talk about an old technology that has mostly died off, one I used to use rather extensively in my job. The AccessGrid, as it was known, was an advanced teleconferencing system used worldwide for remote teaching and academic collaboration. Rather than being a single technology such as Skype, it was a collection of open source software that worked together, the main client simply being the glue that co-ordinated the meeting system. The main software was initially created in 1998 by Argonne National Laboratory and maintained by them until it was later made open source and supported by the community.

I was involved in running one of the AccessGrid nodes for my local university, mostly for the purposes of remote teaching. The rooms (also known as nodes) were set up with the ideal that the technology should be as transparent as possible for students and teachers. Most sites had a technician (usually referred to as the operator) who ran the equipment so that participants in a session didn’t have to manage the technology on top of their normal activity. Operators also usually participated in the AG community at large, helping each other with technical issues and testing the software and hardware configuration of nodes. I was the operator for our node, and am still involved in supporting remote teaching today.

A typical AccessGrid node at the University of Newcastle, Australia

The room was equipped similarly to a classroom, but with extra equipment to capture as much as possible. The front of the room had smart boards for writing notes and displaying lecture slides. For tutorial sessions, students both remote and local could present solutions on the smart boards, although the exact technical solution used to provide this varied depending on the participating nodes. We had a number of cameras so all the local participants could be seen, and ceiling and lapel mics so students and teachers could be heard. These would usually be adjusted to some degree to suit each session, although sensible defaults would usually work fairly well.

Audio and video were sent between clients using RTP (Real-time Transport Protocol), with multicast UDP packets transporting the data. Because support for multicast traffic isn’t universal and had been blocked at some institutions, unicast bridges were set up. These bridges allowed people without multicast support on their local network to connect to meetings, and were run by nodes which did have working multicast support. Users could manually select which bridge to use to avoid high latency or traffic load.
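To give a rough idea of the transport side, here is a minimal sketch of joining a multicast group and receiving a datagram with POSIX sockets. The group address and port are placeholders rather than real AccessGrid venue values, and a real client would of course parse the RTP header instead of just counting bytes.

```cpp
// Minimal sketch of subscribing to a multicast group and receiving UDP
// datagrams, roughly how AG media tools received RTP traffic.
// The address and port below are placeholders, not real venue values.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Bind to the port the media stream uses (placeholder value).
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(57000);
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    // Ask the kernel (and upstream routers) to subscribe us to the group.
    ip_mreq mreq{};
    mreq.imr_multiaddr.s_addr = inet_addr("224.2.0.1");  // placeholder group
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    // Each datagram would carry an RTP header followed by the media payload.
    char buf[2048];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    std::printf("received %zd bytes of RTP data\n", n);

    close(sock);
    return 0;
}
```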

RAT (Robust Audio Tool) was the program that sent and received audio. It had options for many different bit rates and audio encodings and worked quite well on most platforms and audio equipment. It did have some basic echo canceling capability, but that usually wasn’t used as most nodes opted for hardware-based echo canceling with devices such as the ClearOne XAP800, which generally did a better job. A notable feature of the software was the ability to tune the audio volume of each participant individually, which made it much easier to cope with audio issues as it could be adjusted on the fly. Unfortunately this seems to be an unusual feature in modern communication software, which often doesn’t allow it to be done easily.
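The idea behind that per-participant control is simple enough that a small sketch shows it. The following is my own illustration, not RAT’s actual code, assuming signed 16-bit PCM streams keyed by participant name with a made-up gain table.

```cpp
// Sketch of per-participant gain control applied before mixing, assuming
// signed 16-bit PCM. The participant names and gain values are illustrative.
#include <algorithm>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using Samples = std::vector<int16_t>;

// Apply each source's gain and sum all sources into one output buffer.
Samples mix(const std::map<std::string, Samples>& sources,
            const std::map<std::string, float>& gains) {
    std::size_t len = 0;
    for (const auto& entry : sources) len = std::max(len, entry.second.size());

    Samples out(len, 0);
    for (const auto& [name, pcm] : sources) {
        // Default to unity gain if this participant has not been adjusted.
        float gain = gains.count(name) ? gains.at(name) : 1.0f;
        for (std::size_t i = 0; i < pcm.size(); ++i) {
            int32_t sum = out[i] + static_cast<int32_t>(pcm[i] * gain);
            out[i] = static_cast<int16_t>(std::clamp<int32_t>(sum, -32768, 32767));
        }
    }
    return out;
}
```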

Vic (video conferencing tool) handled the video portion of the session. By using multiple instances of the program, each node could transmit and receive video from multiple sources, usually cameras but also live screen captures from another program. This allowed a node to send video of the teacher, any local audience members, and multiple screen captures. Large sessions with many participants could have a large number of video streams; I remember seeing 15-20 streams for the bigger events. Generally it scaled fairly well, but you needed a decent internet connection.

The AccessGrid for Australian universities died rather unceremoniously and suddenly when the server was switched off most of the way through semester 2 in 2014. The person who was maintaining the server had left the institution where it was hosted, so when their server room was renovated it was decommissioned without any plans to reinstate the service. This happened with no announcement or notice; one day it was just suddenly dead. That left the still significant number of people using it for remote teaching scrambling to find alternative solutions as quickly as possible. Thankfully most people managed, but it wasn’t fun.

Had the server not died, would the AccessGrid still be in use today? The answer is probably not, but maybe. As a technology it was harder to use and required significant technical knowledge. Modern software has largely taken that complexity and difficulty away, unfortunately taking some of the flexibility away with it. Commercial software often requires a license fee for the server at least, and in some cases also for the client software. That extra cost is off-putting to smaller institutions that don’t have the resources others do, so it may have motivated some to stick with the AccessGrid.

So why wax all nostalgic about it? Partly because no-one else has, and the footprint the AccessGrid left on the internet is gradually fading. It was also an interesting and formative technology in the electronic teaching space. It achieved results that at the time were not possible with other technologies, giving students access to courses they otherwise couldn’t reach, and lecturers access to a wider audience. For me personally it was memorable being a part of the community and making the technology work. Whilst it had its problems, it was interesting, functional, and flexible.

03 Oct 12

The C/C++ programming language

C and its derivative C++ are perhaps the most popular programming languages currently in use. C has its roots in the early 1970s at AT&T’s Bell Labs; it quickly became used for low-level programming of kernels, and many of the early Unix operating systems were written with it. C++ was developed based on C later the same decade to add the object oriented model of programming, which was relatively new at the time. You can find much more information about both languages on Wikipedia here for C and here for C++.

I don’t have as much expertise with either language as I do with others. I’ve written a few small applications and found that it was indeed possible to write nice, neat code that was readable and worked well. However, I’ve also had many issues, in particular trying to read other people’s code and working with the build systems often used by implementations of both languages.

C is by far the simpler of the two languages, and is often compared to its contemporary Pascal, which was developed around the same time. Some people regard Pascal as inferior to C (and I imagine there are many who believe the opposite), but the reality is that both languages are very similar in capability. I wrote embedded software with C back when I was at uni and found it was well suited to the task, and it would work well for developing software even for larger systems. The main problem I have with the language is the extensive use of symbols, which many programmers abuse, ending up with unreadable code. Fortunately it isn’t as bad as it could be, and provided people comment their code it is somewhat readable.
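To illustrate what I mean about symbols, here is a small made-up example (not from any real project) of the same string copy written in a terse, symbol-heavy style and in a more readable, commented style; both versions are correct and compile as C or C++.

```cpp
#include <cstddef>

// Terse, symbol-heavy style: correct, but the reader has to decode it.
void copy_terse(char* d, const char* s) {
    while ((*d++ = *s++)) {}
}

// The same operation written (and commented) for readability.
void copy_readable(char* dest, const char* src) {
    // Copy each character, including the terminating null byte.
    std::size_t i = 0;
    do {
        dest[i] = src[i];
    } while (src[i++] != '\0');
}
```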

C++ introduced the object oriented model, but did not prevent people from continuing to use the procedural model. This is interesting as it allows people to mix and match, writing parts of their software in the methodology that suits each part. I wrote an experimental program many years ago using C++ and was able to produce some reasonably readable code. The problem is that it is also quite easy to write terribly difficult-to-read code. With the object oriented features utilizing pointers much more, some sections of code start to look more like random binary than something readable. This makes the design of the software much more important.
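As a rough sketch of that mix-and-match (my own example, not from any real project), the class below is object oriented code while the function that prints it is plain procedural code living alongside it:

```cpp
// Minimal sketch of mixing C++'s object oriented and procedural styles
// in one translation unit. Names are purely illustrative.
#include <cstdio>
#include <string>
#include <vector>

// Object oriented part: a class owns its data and behaviour.
class Student {
public:
    explicit Student(std::string name) : name_(std::move(name)) {}
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

// Procedural part: a free function operating on the data it is given.
void print_roll(const std::vector<Student>& klass) {
    for (const Student& s : klass) {
        std::printf("%s\n", s.name().c_str());
    }
}

int main() {
    std::vector<Student> klass = {Student("Ada"), Student("Dennis")};
    print_roll(klass);
    return 0;
}
```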

The build system differs from platform to platform, but most are make-based systems describing how to build the software. I found makefiles a bit difficult to deal with for similar reasons to why I find the languages difficult: there are lots of hard-to-read symbols around that make reading a single line an exercise in decoding. On the other hand, the various versions of make are generally more powerful than the base compilers. Make technically isn’t necessary to build software in either language (you could manually write a build script), but it is the most commonly used type of tool, although Microsoft Visual Studio has moved away from using it.

One of the main problems I’ve found with both languages is that the standard API varies greatly from platform to platform. This results in large amounts of confusing conditional defines to try and help code be a bit more cross-platform. It’s also been difficult for me to get used to the large number of symbols commonly used; it often takes me some time to work out where and how pointers are being dereferenced. I’ve also found that finding and reading documentation for the APIs is difficult. I understand there is a Javadoc-like system (Doxygen) that can be used to generate documentation; I’d like to see this (or something like Javadoc) used for the API documentation to make it a bit easier. It’s not that the APIs are hard to use, just that the documentation is.
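A small illustrative example of those conditional defines, picking a platform-specific sleep call; the function name and the two platforms chosen are just for the sketch.

```cpp
// Sketch of conditional defines used to paper over platform differences.
#if defined(_WIN32)
  #include <windows.h>
#else
  #include <unistd.h>
#endif

// Sleep for the given number of milliseconds on either platform.
void sleep_ms(unsigned int ms) {
#if defined(_WIN32)
    Sleep(ms);            // Win32 API call, takes milliseconds
#else
    usleep(ms * 1000u);   // POSIX call, takes microseconds
#endif
}
```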

Most of what I consider a problem is nothing to do with either language, but with how many programmers (especially amateur ones) use them. Comments are lacking from code; in some cases the only comment is the header describing the license for the software. People often seem to overuse the #define preprocessor construct, which makes reading code very difficult. Variable names are often confusing, short, and not really descriptive, which wouldn’t be as much of a problem if there were a comment describing what each one is for, but that basically never happens. There were such bad examples of code in early Unix systems that an obfuscation competition was started in response to some terrible C code. It still runs today, and you can find their website here.

I think most software would be much more readable if people applied more professional coding standards and used automated documentation tools like Doxygen, so that the comments within source files can be the documentation. Sometimes I wonder whether people make it difficult for the sake of looking smarter, or to make it harder for everyone else to understand.
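For completeness, here is a short example of what those Doxygen-style comments look like; running doxygen over a source file commented like this produces browsable API documentation. The function itself is a made-up example.

```cpp
/**
 * @brief Compute the average of an array of samples.
 *
 * @param samples Pointer to the first sample.
 * @param count   Number of samples; must be greater than zero.
 * @return The arithmetic mean of the samples.
 */
double average(const double* samples, unsigned int count) {
    double sum = 0.0;
    for (unsigned int i = 0; i < count; ++i) {
        sum += samples[i];
    }
    return sum / count;
}
```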



