Building a jumping robot, made easy

Regular readers of the trollheaven blog are perhaps familiar with the GUI shown in the screenshot above. It is the same one used in the previous project. Instead of a balancing ball, this time a jumping robot sits in the middle. The surprising insight is that realizing different kinds of robots has little to do with Artificial Intelligence directly, but a lot to do with the preprocessing steps. Let us go into the details.

Before an AI controller for a robot can be realized, some kind of simulation environment is needed. A typical feature of such environments is that they are manually controlled. That means the system has no autonomous mode; the user has to press keys and type in commands to move the robot. Somebody may think that the simulation environment is not a big deal and be tempted to skip this step, but in reality the environment is more important than the AI controller itself. What we see in the screenshot is a combination of Python/Tkinter, pygame, Box2D and a parser for motion primitives. All elements are important; it is not recommended to leave out the Tkinter GUI or any other part.

Such an environment can be used to realize different kinds of robots. In my first project a ball-on-beam situation was created; now a jumping robot is visible. It consists of a torso, which is the larger box, and two rotating cubes. Legs are mounted on the cubes with a linear joint. That means it is not a lifelike dog, but a simplified version. The interesting feature is that after entering the jump command, the linear joints stretch quickly, and this lets the robot jump in the Box2D engine. It looks highly realistic.

From a historical perspective, it is a copy of the famous Marc Raibert jumping robots from the early days of the MIT Leg Lab. I have seen the old videos and rebuilt the structure in my robot simulator. Let us focus on the Tk GUI on the right of the screen. This form is the core of the simulator. It shows information, for example the current frame and the angle of the legs, but it is also possible to enter text commands interactively. The concept is not new; it was used before in the Karel the Robot project, a famous education tool for teaching structured programming with Pascal and Java. The idea is that the robot has motion primitives which can be activated, and the user enters the primitives in the textbox. For example, if the user enters “rotrback”, the right leg rotates backwards by a small amount. This allows the user to interact with the robot on the screen.
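The parser for such motion primitives needs only a few lines. The following is a minimal pure-Python sketch of the idea, not code from the actual project; the second command name and the step size of 5 degrees are made up for illustration:

```python
# Minimal sketch of a motion-primitive parser: each word entered in the
# textbox maps to a function that nudges the simulated robot.
state = {"rightleg_angle": 0.0}

def rotrback():
    state["rightleg_angle"] -= 5.0   # rotate the right leg backwards a bit

def rotrforward():
    state["rightleg_angle"] += 5.0   # hypothetical opposite command

primitives = {"rotrback": rotrback, "rotrforward": rotrforward}

def parse(command):
    """Look up the entered word and execute the matching primitive."""
    action = primitives.get(command.strip())
    if action is None:
        print("unknown word:", command)
    else:
        action()

parse("rotrback")
```

An AI controller would later call the same dispatch table programmatically instead of waiting for keyboard input.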

From a programming perspective the overall simulator isn't very complicated. The project has exactly 400 lines of code so far. The GUI, the pygame loop, the Box2D world, the parser and the motion words are all included in this amount. What is missing is the automatic mode; that means the AI itself isn't available yet. Such an AI can be created by putting the words into a program, similar to the “Karel the Robot” challenge. For example, the user lets the robot jump, checks in the air whether the angle of the legs is right, and then executes another action. As mentioned before, such a routine is missing right now, but it can be constructed easily.

What I want to explain is that the robotics simulator is more important than the AI controller which runs in it. If the environment is programmed well, it is easy to build the AI engine on top. Most robotics projects fail because the environment is not working. The funny thing is that a jumping robot is very easy to build, because the jumping is not the result of the controller; the jumping is calculated by the Box2D engine. Let us take a look at how jumping was realized in the robot simulator:
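The original listing is not reproduced here, so the following is a pure-Python sketch of the same idea: pulse the leg joints' motors for a few frames, then stop them. The stand-in class only mimics the motorSpeed attribute of a Box2D PrismaticJoint; the constants and the jump() function are assumptions, not the project's actual code:

```python
# Sketch of a jump routine: drive the prismatic joints' motors for a short
# pulse, then stop. The physics engine turns the pulse into a jump.
JUMP_SPEED = 30.0    # assumed motor speed during the pulse
JUMP_FRAMES = 5      # how many frames the legs push

class FakeJoint:
    """Stand-in for a Box2D b2PrismaticJoint with a motorSpeed attribute."""
    def __init__(self):
        self.motorSpeed = 0.0

def jump(joints, frame, start_frame):
    """Stretch the leg joints fast for JUMP_FRAMES frames, then stop them."""
    active = start_frame <= frame < start_frame + JUMP_FRAMES
    for joint in joints:
        joint.motorSpeed = JUMP_SPEED if active else 0.0

legs = [FakeJoint(), FakeJoint()]
for frame in range(10):          # hypothetical main loop
    jump(legs, frame, start_frame=2)
```

In the real simulator, Box2D would integrate the motor force each timestep and lift the torso off the ground; the controller only sets the speed.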


In seven lines of code, the linear joints of the Box2D engine get a signal for a short time and then they stop. As a result, the Box2D engine calculates that the robot jumps into the air. That means it is computed by the physics engine and not by a highly developed AI. Entering the jump command in the textbox activates the routine, and in the background Box2D determines everything else.


In the Box2D vocabulary, such a jumping joint is called a PrismaticJoint. It can move back and forth in a linear fashion. The spring effect is the result of moving the servo at high speed. In mechanical engineering, such a device is called a “linear actuator with a spring return mechanism”. It produces a unidirectional force and makes every robot jump. That means the jumping is not the result of wishful thinking or of Artificial Intelligence; it is simply a mechanical feature. What can happen during the jump is that the robot loses its balance. That means the robot does not land on its legs but in a wrong way. This has to be adjusted by an AI controller.


Tutorial for using git without branches

Somebody may suggest that enough tutorials about the version control system “git” are available. Nope, there is something which is explained wrong in them. A typical git tutorial explains in detail what a branch is. Without any need, this makes things more complicated. In the following text, a simplified git tutorial is given which needs only two commands and works great for 95% of users.

We start the journey into agile development by creating a new project folder. Then we use command #1, which is “git init”. This creates the hidden folder “.git”, which can be made visible with “ls -lisa”. It holds the version history and grows quickly together with the project. Now the project folder is ready to take new files, which are created from scratch. After some example files have been created, we can use git command #2, which is the commit command, but which also adds all files to the git tree.

mkdir project
cd project/
git init
touch a.txt
touch b.txt
git add --all && git commit -m "comment"
gedit a.txt
git add --all && git commit -m "comment2"
gedit a.txt
git add --all && git commit -m "comment3"

From now on, we only need git command #2, the “git add && git commit” statement. The only thing which has to be modified is the comment at the end. The workflow consists of making some changes in the project folder, for example adding some text to a file, and then executing the git command. Other commands are not necessary.

The interesting question is what happens when the first project version is done and we want to create version 2, which is more stable. In a classical git tutorial this would be explained under the term branch. But it's not a good idea to use branches as directories. The better idea is to ignore the git features and use the normal UNIX tool for creating subfolders, which is “mkdir new-version”. That means all the files are stored in folders, and the only available git branch is the master branch.
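As a sketch, the version-2 step looks like this on the command line. The folder and file names are just examples; the two git config lines only set a throwaway identity so that commit works in a fresh environment:

```shell
mkdir project2 && cd project2
git init
git config user.email "dev@example.com"    # example identity, local only
git config user.name "dev"
echo "first draft" > notes.txt
git add --all && git commit -m "version 1"
mkdir new-version                          # a plain subfolder, not a branch
echo "stable draft" > new-version/notes.txt
git add --all && git commit -m "version 2 in its own subfolder"
git log --oneline                          # one linear history, one branch
```

The history stays a single straight line; the "versions" are ordinary directories inside it.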

The advantage is that there is no need for switching branches, merging branches and resolving conflicts. Instead, all the commits are executed by the same command given before. The resulting version log looks similar to what the Wikipedia community is using. That means all users write their changes into the same master branch and there is no need to merge anything between branches.

If somebody is working alone on a project, this is the best-practice method, and small teams will love the “single branch” idea too. Instead of creating branches, every user gets his own working directory. If somebody wants to try out new things, he executes the command “mkdir user2-prototype-may07”. That means git is used like a filesystem, not like a project management tool. The advantage is that apart from two simple commands, “git init” and “git commit”, no other actions are needed.

Programming techniques to speed up the development

The main bottleneck in software development is not the CPU speed, but the time until the source code works. The good news is that some techniques are available which can speed up the development process. Here are a few of them:

1. In the prototyping step use a scripting language like Python. It is called a throw-away prototype because after the game is working, all the source code gets deleted.

2. Use a version control system like git.

3. Always create an interactive GUI. That means that before the first sprite is shown on the screen, a textfield is there in which the developer can enter commands. Possible actions are update, printposition, or moving the sprite.

4. Use as many libraries as possible. Good examples for Python programmers are tkinter, pygame, box2d, pybullet, pybrain, tensorflow and so on.

5. Split the project into files. Each file is a class, and each file has no more than 100 lines of code. Opening up many tabs in the IDE allows switching to the right code segment without scrolling up and down through the source code.

6. For each iteration of the prototype create a new git folder, for example 2018-11-01-testA, 2018-11-03-testB and 2018-12-21-testC. Never create git branches, because the branch feature is far more complicated than simply creating new subfolders. Branching is a communication process with other developers, not a command line option for git itself.

The two most important hints are the interactive GUI and the usage of a scripting language in the prototype phase. Most programmers are focussed on high-end programming languages which are compiled and run fast, for example C++, C# or Java (which has a bytecode compiler and is sometimes faster than C++). The idea is that C++ is the best programming language, so it's a great choice for all stages of the development process. What these programmers ignore is that the workflow consists of at least two steps: prototyping and programming. In the programming step, C++ makes sense because it is indeed the fastest programming language. But in the prototyping step, other tools like LibreOffice Calc, Python, AutoIt and GIMP are better suited. The idea is to create the app twice: first for testing out the design and playing with the software itself, and then to create the production-ready software which is compiled and provides the maximum speed.

Perhaps some words about the interactive GUI. The best way of improving existing software is to use it in an interactive mode. Most games are programmed with a game loop, and during the development process there is no way of interrupting the game. The better way is to design a command line from the beginning. A command line works similar to the Linux shell, except that the user can enter commands for the current project. The developer defines some words, for example to show variables and change settings, and while the game is running he can enter these words without pausing the game. This improves the edit-compile-run cycle, which is already good in Python by default but gets better if the developer gets feedback from his own app. In contrast, a complicated pulldown menu is not needed during the development process.
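Such an in-game command line boils down to a dispatch table. The following pure-Python sketch is illustrative only; the words “printposition” and “moveright”, the sprite dictionary and the simulated input list are assumptions, not code from a real project:

```python
# Sketch of an interactive command line inside a game loop: each frame,
# pending commands are dispatched without pausing the simulation.
sprite = {"x": 0, "y": 0}
log = []

def printposition():
    log.append((sprite["x"], sprite["y"]))   # record the current position

def moveright():
    sprite["x"] += 10

commands = {"printposition": printposition, "moveright": moveright}

def dispatch(line):
    """Execute one command entered in the GUI textfield."""
    action = commands.get(line.strip())
    if action:
        action()

# simulated user input arriving over three frames of the game loop
for entered in ["moveright", "moveright", "printposition"]:
    dispatch(entered)
```

In a real Tkinter or pygame project the same dispatch function would be bound to the textfield's return key, so the loop keeps running while commands are processed.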

Does object-oriented programming make sense?

The rise of so-called object-oriented programming languages began with window-oriented operating systems. The new thing was that not the programming language itself was important, but its ability to produce a graphical user interface. Let us take a look at a classical C program. C consists of files, and in the files statements are written which are compiled into machine code. The design principle behind the C language is driven by the needs of the computer; that means C was invented with performance in mind. The question is: is it possible to write larger programs with C? From a machine-oriented perspective this question is not important. It wasn't asked in the beginning; instead, the top priority was to optimize the machine code. But in reality it is very important for programmers to think about larger programs, because otherwise they are not able to produce any software at all.

If a C programmer creates a larger project, he splits his work over many files. In file1 he writes all the subroutines for the graphics engine, in file2 all the functions for the physics engine, and in file3 he writes the GUI interface. It is possible to produce software in that style, but in the following example I can show that there are some disadvantages.

Suppose the idea is not to write a simple MS-DOS based game, but a GUI application. A GUI application consists of so-called widgets: buttons, textfields, scrollbars, tabs and menu entries. A naive option to realize these widgets is distribution over files: file1 for all the buttons, file2 for the textfields, file3 for the scrollbars and so on. Another option is to store one widget per file; that means button1 is stored in file1 and no other widgets. That all sounds a bit complicated; the better approach is object-oriented programming. The main feature is that OOP is very well suited to handle lots of widgets. A typical GUI contains 20 buttons, 10 scrollbars and 30 menu entries; the total number is 20+10+30=60. It is not practical to create 60 files, and it is also not the best idea to store everything in one file. Instead, object-oriented programming uses a different kind of structure, which is called an object. An object is some kind of virtual file which has nothing to do with the needs of a compiler, but with the needs of the designer. The result is that object-oriented languages like Modula-2 are slower and less machine-efficient than normal procedural languages. In theory, it is possible to convert any Modula-2 program into plain C code and create machine instructions from the source code. But maintaining C programs which have buttons and other GUI features is a demanding task.

What is a button?

In the object-oriented paradigm, a button has a clear definition. It is an object which contains attributes like position, size and label, but it also has methods like paint, onclick and register. But what is a button from the machine perspective? The answer is that the computer itself, the C compiler and in general a Turing machine doesn't understand the concept of a button. A CPU provides assembly commands, and no button is available from that perspective. That means, from the machine perspective, a button is not there.
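That definition can be sketched in a few lines of Python. The attribute and method names follow the list above; everything else, including the paint output, is invented for illustration:

```python
class Button:
    """A widget as an object: data and behaviour live together in one place."""
    def __init__(self, position, size, label):
        self.position = position   # (x, y) on screen
        self.size = size           # (width, height)
        self.label = label
        self.handlers = []

    def register(self, handler):
        """Attach a callback that runs when the button is clicked."""
        self.handlers.append(handler)

    def onclick(self):
        for handler in self.handlers:
            handler()

    def paint(self):
        """Return a textual stand-in for drawing the widget."""
        return f"[{self.label}] at {self.position}"

ok = Button(position=(10, 20), size=(80, 25), label="OK")
ok.register(lambda: print("clicked"))
```

Sixty widgets become sixty objects of a handful of classes, rather than sixty files or one giant file.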

What is programming?

At first glance this question is simple to answer. But the question is a bit tricky, because the term programming is often used in the historical context of writing software. In the 1980s, software programming was equal to creating things like operating systems, spreadsheet applications and games which could be started on an Intel 386 computer. Nowadays it makes sense to define programming with a slightly different meaning. Programming can be divided into programming itself and the creation of a prototype. Programming itself was done in the 1980s with assembly language, and in the 1990s with the C and C++ languages. It was equal to opening a text editor, putting the source code in it and compiling the source code into executable code. Often the next step was to bundle the compiled binary into a software package which could be distributed for one of the major operating systems like Linux and Microsoft Windows. To shorten the explanation a bit, we can say that the best programming language is C++, because it allows writing the fastest binary code. All comparisons show that source code written in C++ is translated into super-fast binary files, often faster than hand-coded assembly language. Other languages like Turbo Pascal, Perl or Java are slower than C++, so they can't be recommended for programming.

C++ is only the best language for the programming task itself, not for the preceding prototyping step. This step isn't mentioned in classical programming books; the reason is that small, easy-to-write programs don't need a prototype. If somebody creates a prime number generator, he can write the source code in C++ syntax, compile the code into a binary file, and after some minor bugfixes he gets his application. Also, many games in the 1990s were created without any prototype. Instead the programmer typed the source code directly into the editor in C++ syntax, and that was the overall project. In modern software engineering projects, which are more complex, programming alone is not enough. In most software projects, the C++ syntax is not the problem. The reason is that the programmer already knows what a for loop is, how to control the graphics card, or how to use multiple C++ classes. The problems are located somewhere else, notably in the prototyping step of the workflow.

Let us construct a standard case to define exactly what core programming is. As input, a working prototype is available. This mockup gets translated into C++ source code. Then the binary file is executed on the computer. Programming means converting the prototype into executable code which runs fast under a mainstream operating system. This produces an interesting question: who decides what the prototype will look like? Which programming language is used to create a prototype?

The interesting answer is that in the year 2018 both questions remain unanswered. The subject of software prototyping is not researched very well. Most authors assume that only core programming itself needs to be understood. If we focus on prototyping, much advice from classical programming becomes obsolete. The dominant C++ language is no longer needed; C++ is a bad choice for creating prototypes. The better choice is Python in combination with a painting program like GIMP, a documentation tool which can produce PDF documents, a mindmapping tool and scripting tools like Matlab. None of these prototyping tools is able to generate fast binary code comparable to C++ speed. But that was never the intention. A prototype is not created as a deliverable, but as an internal communication tool for the software engineering team.

Perhaps the most widespread prototyping tool is Microsoft Excel. This spreadsheet application is used worldwide at office workplaces to create tables which contain formulas. An MS-Excel file can't be executed on its own; it needs an environment to get started. Promoting an MS-Excel file as a high-quality application doesn't make much sense. In comparison to software created with C++, MS-Excel is very slow, doesn't provide a custom GUI, and only a reduced amount of information can be presented. But the comparison isn't fair, because MS-Excel is used for prototyping purposes while a C++ compiler is the best-practice method for creating a standalone application.

What I want to express is that modern software development can save a lot of time if the prototyping step gets more attention. The idea is to separate software engineering into a difficult subtask, which is prototyping, and an easy subtask, which is core programming. Easy means that the translation step from a prototype to an executable application is well understood. That means an average programmer is able to take the prototype and create from it the C++ software which can be started on all available systems. He will not encounter any major problems; all he will find are smaller issues which can be treated as simple Stack Overflow questions. One of these subproblems is, for example, how to create a GUI in C++, or how to write a for loop in C++.

The more demanding task in software engineering is the prototyping step. That means defining what the GUI will look like and which algorithms are needed for the software. This step can't be answered by programmers; it is located somewhere else. Prototyping is the translation of customer demands into a mockup. For example, the customer needs a prime number generator and the prototype provides the blueprint. It contains the important algorithm, a simple GUI and a bit of documentation. In a software engineering project, the prototyping step takes 90% of the overall time while the programming step needs 10%.

Shortcomings of prototypes

To define the concept of software prototypes better, it's important to say something about situations in which the prototype shows its disadvantages. Suppose we have created a prototype with Python, LibreOffice Calc, PHP and some PowerPoint slides. It is not a standalone app, but a folder of files which have to be started one after another and are not ready for production. The Python scripts are slow and will run only on the developer's machine, while the LibreOffice spreadsheets contain only some calculations and are not intended as the final program. Everything is wrong with such a prototype folder: it has weak performance, and the Python and PHP scripts use too much CPU power. Everybody who argues that switching the programming language to C++ is needed is right. But is the design department, which created the prototype with the mentioned technology, wrong at the same time? No, they have done everything right, because a prototype is allowed to run slowly and has the freedom to be buggy. These issues can be overcome in the next step, which is called the transition from the prototype to a final application. If Python and PHP were great choices in a production environment, they would have replaced C++. But for production servers they are not.

Is Java dead?

To answer this question we must go back to the 1990s and investigate what the idea behind Java was. The assumption was that C++ is hard to learn because it supports pointers and multiple inheritance, and that C++ is not a cross-platform language. Java was introduced as an answer to both problems. The main features of Java are that it doesn't support pointers, has no multiple inheritance, and that the Java compiler is controlled by a single company, which was Sun in the 1990s and is Oracle now.

The question can be reformulated into “Do we need an alternative to C++?”. According to Microsoft the alternative is called C#. According to the Linux community the alternative to C++ is plain C. And here we get the reason why Java is maybe dead. If the millions of Microsoft programmers prefer C# over Java, if the beginners in computer courses learn Python, and if the Linux users have a need for C/C++ but not for Java, where is the target group of people who should learn Java?

I'm very pessimistic about the future of Java. Right now, Java can't be called a dead language, because the number of published books about the subject is high, and the Java tag on Stack Overflow has around 1.5 million questions. But from a technical perspective the problem is that Java doesn't really solve a major problem. The idea that software should run on multiple operating systems was common in the 1990s, before the rise of the Internet. Today, most software is written for the internet as a server application. This is done with PHP, ASP.NET or C++. Sure, we can use Java for backend server programming too, and many business applications are doing so. That means Java is used as a better form of PHP. But the more elegant way is to use either C# or C++.

The main reason why I'm skeptical about Java is that Microsoft, as the largest software company, doesn't support the language. Instead they have their own virtual machine language, which is promoted as the successor to C++. That means Java is in a weaker position on Microsoft server systems.

On the other hand, Red Hat is known for strong Java support. Java is sometimes called an open-source version of C#, because the sources are available as open source. So in theory it makes sense to use it for server programming on RHEL clusters. But there are many reasons speaking against Java, especially in the Linux world. First, Java is not C; writing a library in Java rather than in C doesn't make much sense there. Second, Java is slower than C. And compared with C++, Java is weaker, because it is a managed language which hides most advanced features from the programmer.

I'm not sure if Java is dying right now. Many people support the language, and according to the TIOBE index, Java is used more often than C++. So it is hard to guess what the future will bring.

The operating system is the virtual machine

In the history of computing there have been some attempts to simplify programming. An early example is p-code, used by the early UCSD Pascal implementations. P-code, and the later-developed Java virtual machine, is a runtime environment which abstracts from the hardware. The idea is to write a p-code virtual machine for a new CPU; then the Pascal compiler converts high-level source code into p-code, which can run in the p-code VM.
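The principle can be illustrated with a toy stack machine in Python. This is not real p-code or JVM bytecode; the three opcodes and the run() function are invented for the sketch:

```python
# Toy stack-based virtual machine: the "compiler" emits portable opcodes,
# and only this small interpreter would have to be rewritten per CPU.
def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)         # put a constant on the stack
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)       # replace the top two values by their sum
        elif op == "PRINT":
            print(stack[-1])          # show the top of the stack
    return stack

# "p-code" for the expression 2 + 3
bytecode = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]
run(bytecode)
```

The catch discussed below is that this interpreter still relies on an operating system underneath it for I/O and everything else.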

What is wrong with this approach; why were p-code and Java not successful? The problem is that writing a virtual machine is a demanding task, because the virtual machine is equal to an operating system. Now it is easy to explain what UNIX is. UNIX is some kind of virtual machine for executing the high-level language C. The hello-world example written in C is converted into binary code, and the binary code is executed by the operating system, in some kind of virtual environment. The idea is to port the UNIX kernel to any CPU architecture.

It makes sense to compare the UCSD Pascal p-code VM with the UNIX environment. The difference is that the p-code VM is not self-sufficient: it doesn't contain a complete operating system, but needs additional drivers. The same is true for the Java environment. But why is it not possible to write a small virtual machine? In the example of the Java Runtime Environment this can be explained easily. At first we have a simple hello-world program written in Java. It uses the Swing GUI to paint some lines on the screen. The hello-world source is only 100 lines of code long. But if we want to execute the file on real hardware, a huge number of virtual machines, libraries and low-level hardware drivers are needed. It is not enough to interpret only the Java source code, because a command like “open a window with Swing” needs additional binary code which is precompiled in external libraries. That means, to run the simple hello-world program we need a Java runtime VM of around 1 GB, plus an operating system like Windows 10, which takes another 5 GB. In sum, the environment to run a small Java program is around 6 GB.

The problem with virtual machines is that the need for an operating system is ignored. In theory a virtual machine is a nice thing. The Lua engine or the Python engine are very short programs which can be implemented efficiently. But the VM needs an operating system to communicate with the hardware. That means, without Windows 10, the Lua VM doesn't work.

In the area of microcontroller programming, minimalistic operating systems have been invented which work as a virtual machine and as an operating system at the same time. The problem is that the number of features is low. If we increase the complexity, for example if the aim is to run Java on a smartphone or on a desktop PC, there is a need for a full-blown operating system which takes at least 5 GB of storage space.

Let us describe the problem from a different side: what kind of environment is needed to execute a hello-world program written in C? In the early history of the UNIX operating system, only a 100 kB assembly file was needed. That was a minimal UNIX system which was able to execute a C program on the PDP-11 computer. But later iterations grew the operating system. Today a standard Linux installation needs around 10 GB of hard disk space to hold all the drivers and libraries. The question is: can we reduce the complexity of the operating system so that a future Linux version needs only 100 kB? No, it is not possible. A short look at real operating systems like Mac OS X or Windows shows the trend that all of them contain millions of lines of code. The problem is that the operating system provides lots of features. In theory, a computer doesn't need an operating system. For example, the C-64 runs quite well without any kind of OS. Instead the programmer uses assembly language to communicate with the hardware. The problem is that this kind of programming style isn't very powerful. It is very complicated to write new software for the C-64.