Best practice method for programming games for the Commodore 64

With today’s technology it’s easier than ever to program software for the Commodore 64. The user can choose between cross-compilers (cc65), the Forth programming language, or an emulator. But which technique is the best one?

If the aim is to save as much development time as possible, the best-practice method is to first develop a prototype on the PC in the Python language. A good starting point is pygame, but other libraries work well too. Once the prototype works, the game gets ported to the Commodore 64. That means the software is written again, but this time in assembly language to get maximum speed. The advantage of combining a prototype with low-level assembly programming is that the game can be tested before it’s implemented in assembly. And thanks to assembly, the game will be very fast. Using Python alone won’t work because the C64 doesn’t provide a Python interpreter, and programming only in assembly without a prototype is a demanding task: the programmer would have to solve complex register-level optimizations without knowing what the finished game will look like.
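To make the porting step concrete, here is a minimal sketch, in plain C, of the kind of main-loop skeleton such a prototype pins down. All function names here are hypothetical, invented for this example; on the real C64 each of them would be rewritten as a hand-coded 6502 routine:

    /* Sketch of the game-loop structure that the prototype fixes before
       the assembly port. Every name is a placeholder, not taken from a
       real project. */
    #include <stdio.h>

    static int frames_left = 3;    /* stand-in for the real game state */

    static void read_input(void)   { /* poll joystick/keyboard here */ }
    static void update_world(void) { frames_left--; /* movement, collisions */ }
    static void draw_frame(void)   { printf("frame %d drawn\n", frames_left); }

    int main(void) {
        while (frames_left > 0) {  /* one iteration = one frame */
            read_input();
            update_world();
            draw_frame();
        }
        return 0;
    }

Once such a skeleton behaves correctly in the prototype, the functions can be ported one by one, which is exactly why the assembly work becomes mechanical rather than creative.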

The funny thing is that assembly programming for the C64 is easy if a prototype is available which defines how the game should look. Then the programmer can focus only on the implementation part. He doesn’t have to invent the game itself, he only has to rewrite an existing game. Doing so is not very hard, even in pure assembly.

The difference between the 1980s and the 2000s

The 80s brought breakthroughs in many areas. It was not only the beginning of a new decade after the boring 70s, it was at the same time the beginning of a new era. The beginning of the 2000s era can be dated back to the year 1980, because from a technical and social perspective many new things started then, for example the microcomputer. The 80s was the decade of the Commodore 64, the IBM PC, the MS-DOS operating system and the first modems. In the domain of media, the 80s saw the emergence of the VHS recorder, private cable television, the MTV music channel and electronic dance music.

But if the 1980s were so great, what was missing in the decade? If we focus on the decade of the 2000s, we recognize some things which were not invented in the 1980s but can be seen as important cultural breakthroughs. That means the 80s provided many things, but not everything. One example is reality television, notably Big Brother, which was invented in the late 1990s by the Dutch company Endemol. Explaining reality TV is a bit complicated, because it’s radically different from everything known before. Reality TV isn’t a movie and it isn’t a TV series following a script; instead it’s more like a news channel with a 24-hour program. But the content has nothing to do with important political events; it’s the life of ordinary people that is shown on screen. Other breakthroughs of the 2000s were the broadband internet connection for private households, which didn’t exist in the 1980s, and the rise of full-text search engines on the internet.

Is the Commodore 64 dead? (was: Poll: What’s next on this blog?)

Update 2018-10-04

The vote is over. The topic “Commodore 64” has won. Here comes the blog post.

Is the Commodore 64 dead?

Modern computers are similar to what is called a “Unix workstation”. In the late 1980s these machines were sold under brand names like the NeXT cube or the Sun workstation. Especially if the RHEL operating system is installed on a modern PC, the system looks very similar to those older workstations, which were designed for video editing, software development and networking.

In contrast, other directions in the 1980s and 1990s can be called dead ends. For example, the home-computer evolution of the Commodore Amiga was a dead end; it was not competitive with the IBM PC. Likewise, the Windows operating system isn’t competitive with Linux development. Combining IBM-like hardware with a Unix-like operating system yields the most powerful system ever: it is the cheapest one, and it is future-ready because open-source software is used.

But there is a small machine called the “Commodore 64” which is attractive even to today’s eyes. Not because the system has survived, but because it has a unique ideology. The advantage of the Commodore 64 over all other computers in the 1980s was its price. It was a low-end mainstream system, comparable to today’s Android smartphones, and the idea of Commodore was to flood the market with 8-bit home computers. And they were successful; no other system was sold more often. Sure, on its technical specifications the C64 was at a disadvantage: its graphics capabilities were lower than those of the 16-bit home computers, the GEOS graphical system didn’t run fast enough to be comparable to MS-Windows, and in contrast to Unix machines the C64 had no Ethernet connection.

In one thing the Commodore 64 was great and superior: it was a computer system for the masses. The computer was sold in department stores and it was affordable to everyone. The Atari ST and the NeXT cube were interesting hardware platforms for their time, and they evolved into the modern computers used today, but in the 1980s they were not cheap. The Atari ST was used by professional music studios, and the NeXT cube was a kind of ultra-futuristic device sold in low numbers. The Commodore 64 gave an outlook on what a computer revolution is: reducing the costs down to zero.

The retail price in 1983 was US$ 595, which in today’s numbers would be about US$ 1400. Later the retail price was reduced to US$ 149. This is called a low-price strategy: the idea is to manufacture the system as cheaply as possible and increase the number of sales. The result was that even people without a technical background, who were not planning to learn programming, bought the device.

Programming in 8-bit

The Commodore 64 was released more than 30 years ago. Is it possible to program such a machine with today’s knowledge? Yes and no. First of all, tutorials are available: the Commodore 64 community has produced a lot of software over the years, most of it written in 6502 assembly language. To today’s eyes it makes more sense to use the cc65 cross-compiler and program in C. But the C language is not the perfect choice for a Commodore 64. The reason is that C was developed together with the UNIX operating system, and Unix targets mainframe-class machines with large amounts of RAM and huge external disks. The problem is that it is not possible to port the UNIX operating system to the Commodore 64.
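To give an impression of the cc65 workflow, here is a minimal sketch of a C64 program using cc65’s conio routines; the file name is arbitrary, and the program is cross-compiled on a PC and then run in an emulator or on real hardware:

    /* hello.c -- a minimal C64 program for the cc65 cross-compiler.
       Build on the PC with:  cl65 -t c64 hello.c -o hello.prg       */
    #include <conio.h>

    int main(void) {
        clrscr();                    /* clear the 40x25 text screen */
        cputs("HELLO FROM CC65");    /* text output via the conio library */
        cgetc();                     /* wait for a key press before exiting */
        return 0;
    }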

But there is perhaps a language in between. Let us look at how the Forth language works on 8-bit computers. The good news is that Forth needs only a small amount of RAM, and no cross-compiler is needed. The second advantage is that Forth is first and foremost a macro-assembler: the user has assembly commands which he can structure into Forth words, and this allows (in theory) writing source code faster.

The idea behind Forth is that the user starts with the normal assembly commands available for the 6502 and builds on top of them a dedicated operating system which needs only a small amount of RAM.

Object oriented programming for the Commodore 64

Despite the existence of modern PC-based desktop computers, there is a need to explore the 8-bit home computer Commodore 64 in some detail. One problem is that not enough software has been created for that machine yet, and programming software from scratch is difficult. The main obstacle is that modern object-oriented programming languages like C++ are not available. Perhaps this challenge can be overcome with a normal C compiler: a C cross-compiler called cc65 is available for the C64. The difference between C and C++ is that C++ supports classes. But what is a class?

A class can be seen as a submodule which consists of methods and variables. The unique feature is that the methods are allowed to access all the variables. In a normal C program this can be realized with global variables at file scope. What we need is a number of C files, for example file1.c, file2.c and file3.c, where each file is equal to a class. At the top of the file some variables are declared, for example integer and string variables, and the C functions in that file have access to these variables.
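Here is a minimal sketch of this pattern, with invented names; in a real project the “class” would live in its own file (e.g. counter.c) with a small header, but everything is inlined here so it compiles stand-alone:

    /* A "counter" class simulated in plain C: the static file-scope
       variable plays the role of a member variable, the functions play
       the role of methods. All names are illustrative. */
    #include <stdio.h>

    static int count = 0;                     /* the "member variable" */

    static void counter_reset(void) { count = 0; }
    static void counter_add(int n)  { count += n; }
    static int  counter_value(void) { return count; }

    int main(void) {                          /* code from another "class" */
        counter_reset();
        counter_add(3);
        counter_add(4);
        printf("count = %d\n", counter_value());  /* prints: count = 7 */
        return 0;
    }

Because the variable is declared static, no other file can touch it; the functions form the only interface, which is exactly the encapsulation a class provides.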

It is not real object-oriented programming, but it comes close. The idea is that the programmer writes normal C code and separates the submodules of the software into different files, and this allows him to create large-scale applications. The interesting aspect is that even for today’s programmers it is very hard to fill the complete 64 KB of RAM of a Commodore 64. If we assume that 1 KB holds about 40 lines of code, then around 64 × 40 = 2560 lines of code fit in 64 KB. I would call that a large-scale program; most amateur programmers write programs smaller than 1000 lines of code.

Computer Chronicles is an accurate computer museum

Realistic information about the past of computing is usually stored in museums. The biggest one is the “Computer History Museum”, located in Silicon Valley; it consists mostly of outdated hardware. A much more vivid way to experience outdated technology is the TV show Computer Chronicles. In episode 11x07, “The Internet (1993)”, the beginnings of online communication were explained. In the show, not web browsers but gopher, telnet and ftp were used for getting access. In a side note the first Mosaic browser was mentioned, but it was too advanced for a live demonstration.

The early gopher- and ftp-based internet was mostly UNIX-dominated. That means the user had a terminal emulator on the screen, typed in commands, and then got access to remote sites.

It is interesting that an introduction to the internet itself was only possible in the early 1990s. Today such a TV show wouldn’t make much sense, because the internet is already there and only minor aspects can be discussed in the media. But in 1993 the internet as we know it didn’t exist yet. It was something that was only available in people’s minds, a vision of the future.

What was wrong with the 1970s?

From a technical perspective the 70s was a great decade. Color television was there, video cameras too, the IBM 705 mainframe worked great with 40 KB of memory, and all the latest scientific papers were published on microfilm. But there was a small problem: the price. The IBM computer wasn’t sold to a mass audience like the C64; it cost a million US$. Microfilm had great resolution, but the medium was expensive, and the same was true for color television: from a technical perspective it was comparable to today’s HDTV, but the price was huge.

What the decades after the 70s realized wasn’t new technology, and it wasn’t the internet; it was a massive cost reduction. Transferring a single bit over a telephone line became cheaper, recording a scene with a camera became affordable for everyone, and being the proud owner of a home computer became normal. The revolution had nothing to do with a new culture or a different society, but with a reduced price tag. That means the same products available in the 70s were also sold in the 80s, but they cost 100 times less. This working hypothesis helps a lot to imagine the 70s: we only have to assume that the price reduction is rolled back, so for each decade back, the price is 100x higher. Let us make an example.

The iPhone costs today around 500 US$. The same product would have cost in the 2000s 100 × 500 = 50,000 US$, in the 1990s 5 million US$, in the 80s 500 million US$, and in the 70s around 50 billion US$. I don’t know whether an iPhone was sold in the 70s, but if it had existed, it would have carried such a price.

In the 70s there was no problem with the internet, video compression or high-speed computers. The only problem in the 70s was the economics: the costs were high and nobody was able to buy consumer technology. I’m referring to this problem because the same problem is visible in the 2010s. Robots, nanotechnology, holography and fast internet connections are technologies being invented right now. For example, the Honda ASIMO robot is able to walk, run, open a bottle, and it can even fall down stairs … The only problem with the device is that it costs around 100 million US$ each. Somebody rich can buy an ASIMO service robot today, but most people cannot spend such an amount of money.

From a technological perspective a computer consists of RAM, software and a CPU. But this description leaves out the economic dimension. A computer is first of all a product: on the one hand there is a consumer, on the other side the supplier. To analyze new technology we must focus on the economic situation of the manufacturing company. The inventor of the ASIMO robot is Honda. Its profile as a technology company is comparable to IBM in the 60s: the company is very powerful, has invented lots of things, and many of them are very advanced. Now let us look at how cheaply Honda shares its knowledge. How many papers describing how to build a robot has the company made available through Google Scholar? Answer: zero. That means the company has a huge number of robots, but it isn’t ready to share the knowledge for free. The robot is available, the knowledge for programming the software is there, but for the end user the price to get one of these items is extremely high.

The consumer has the option to wait. If he waits 10 years, robots and papers about robots will become much cheaper. If he waits another 10 years, he can buy a service robot in his local store. Not because this technology will be invented 20 years from now, but because it takes that long until the price is low enough for ordinary customers.

The main problem in robotics is not to invent the technology, but to reduce the price of technology which is already there. A low price amounts to reproducing technology from the past.

The latest invention of Honda is the E2-DR disaster-response robot. From a technical point of view the device is great: it is the most advanced system ever invented and contains lots of great patents. The only problem is the price tag. Today it is unknown how much the system costs, but it would probably be more than the Honda ASIMO. So I would guess that the out-of-the-box version of the E2-DR would be sold to customers for 1 billion US$. But how would the world look if the device were sold for 100 US$? I mean the same product, only cheaper. Yes, it would amount to a revolution. It is not technology that holds back the future, but the price tag.

Since when have computers existed?

The first computer was built in the 1930s, or was it? Looking at the history of calculating machines, a very careful search turns up many clues suggesting that computers were in operation much earlier. The most prominent case is probably Ada Lovelace. For a long time it was assumed that Charles Babbage’s calculating machine either was never built or was not yet a computer. More recently, however, it has turned out that Ada Lovelace, addicted to alcohol and gambling, had already written mini-programs. It is similar with the calculating machines of Leibniz: today only vague records exist of what Leibniz built and what he didn’t. But looking a little closer into the past, one finds that punched-card technology was known even before the Jacquard loom; it was demonstrably used by a certain Jean Baptiste Falcon. It would therefore be conceivable, though very unlikely, that Leibniz had already constructed the Analytical Engine. In terms of mechanical construction it would have been possible in that era; famously, a Meccano toy kit is enough to build such a machine.

The problem is just that it basically doesn’t matter whether the computer was known earlier and then fell into oblivion again. The first real computers, built in a way that later generations could verify, came only from Konrad Zuse and later hobbyists. So it is not about the machine as such; it is about publishing the machine, and that is something Leibniz demonstrably did not do. Had he built a working computer and handed it down to posterity, it would stand in a museum today, but it doesn’t. Maybe Leibniz built one, maybe not; anyone who wants to know how computers work can only orient themselves by those models which were invented much later. But that hardly matters, because the concept is universal and works independently of epochs, inventors and traditions.

Suppose there is a Third World War and the entire knowledge of humanity is incinerated in a nuclear war. Of course all libraries and all computers would be destroyed with it. Would the idea of the computer be finished too? Hardly. It would not take long before the next generation rediscovered the concept for itself. They would first start counting on ten fingers, then invent mathematics, and a little later recognize the need to automate it. And presto, the computer would be back in the world. If anyone at all qualifies as the inventor of such a machine, it is only the Lord God himself. He thought up the concept once as a kind of game, so that people, since they can think anyway, never have to be bored but always have something to test themselves against. One can regard a computer as a kind of puzzle, comparable to a tangram or a crossword; solving it brings joy, and that is why this game has so many fans.

Shredding vintage computers

Computer museums are currently springing up like mushrooms. There are even dedicated festivals devoted to older computer hardware and software, where machines from Commodore, Atari and IBM are on display. But how about, for a change, not presenting old things in a museum but grinding them up in a metal shredder? A so-called e-waste shredder would be the climax of a real vintage computer festival, where a few particularly valuable collector’s items would be entrusted to the insatiable machine. Unfortunately there are no such videos on YouTube yet. What can be seen in some videos on the topic of “PC shredding” are shredded MS-DOS 6.22 floppy disks and boards from the 1990s still fitted with Pentium processors. Who dares to shred a golden C64 (only a few of these were ever produced), or who will shred a working DEC PDP-11 including all its magnetic tapes?

One could even drive up to a computer museum with a mobile shredder … Well, in any case there are certain similarities: in both cases profit is made from discarded electronic scrap. The museum collects entrance fees, while the e-waste recycler is after the metal raw materials.