How to bring Aimbots forward

In classical Aimbot forums, the AutoIt language is routinely used for creating so-called Aimbots. The reason is that AutoIt is an interpreted language with an integrated pixelsearch feature, which makes it easy to access an external program. There are also tutorials which use C# as the programming language, and perhaps C# is better suited if the program gets more complex. But first we should define what exactly an Aimbot is.

I would call it a C++ program which uses pixelsearch as input and sends keystrokes as output to interact with existing games. The important thing about Aimbots is that they are beginner friendly. In contrast to theoretical AI, and also in contrast to classical robotics competitions like Micromouse or RoboCup, an Aimbot can really be created by amateurs, that is, people who apart from one week of programming experience have no further technical skills. And the setting in which Aimbots are created is usually very relaxed, which results in a creative atmosphere.

But let us go into the problems. First, it is difficult to realize a pixelsearch-like function on the Linux operating system. This has to do with the Wayland interface, and foremost with the small number of people who use Linux for gaming, so the problem of a missing pixelsearch feature does not escalate in the support forums. Under the MS-Windows operating system, in contrast, there are many more users who need a pixelsearch-like function, either in AutoIt, in C# or in C++. But without a pixelsearch feature it is not possible to access the raw data of an existing game. So the user is left to create his own game from scratch, which without doubt increases the difficulty. For example, the Mario AI game was first programmed by the community only in order to program an Aimbot for it afterwards. Using a pixelsearch-like interface makes the workflow more flexible, because the user can decide on his own which game he wants to play with an AI. User 1 likes Tetris, user 2 prefers Sokoban, user 3 a shoot'em'up, and so forth.

In my opinion, a pixelsearch feature plus keystroke sending is the minimal requirement for being part of the Aimbot community. After that, we can discuss in detail whether C#, C++ or AutoIt is the better programming language. So the first thing to make Aimbot programming more popular is to realize a pixelsearch interface for Linux.
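
To make this concrete, here is a minimal sketch of what such an interface could look like in C++ under Linux, assuming an X11 session (under Wayland this approach is blocked, as noted above). The pixel coordinates and the space key are arbitrary examples:

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/XTest.h>
#include <X11/keysym.h>
#include <iostream>

int main() {
    Display *dpy = XOpenDisplay(NULL);           // connect to the X server
    if (!dpy) return 1;
    Window root = DefaultRootWindow(dpy);
    // pixelsearch part: read the color of the pixel at (100, 200)
    XImage *img = XGetImage(dpy, root, 100, 200, 1, 1, AllPlanes, ZPixmap);
    unsigned long color = XGetPixel(img, 0, 0);
    XDestroyImage(img);
    std::cout << "pixel color: " << std::hex << color << "\n";
    // keystroke part: press and release the space key via the XTest extension
    KeyCode key = XKeysymToKeycode(dpy, XK_space);
    XTestFakeKeyEvent(dpy, key, True, 0);        // key down
    XTestFakeKeyEvent(dpy, key, False, 0);       // key up
    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}

Compiled with g++ and linked against -lX11 -lXtst, this covers both halves of the minimal requirement: reading raw pixel data and sending input to a running game.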


Increasing the productivity of programming languages


A look into computer history shows us that today's programming paradigms are not totally different from what was done 40 years ago. But one thing has changed dramatically. It is called disintegration and means that programming languages, operating systems, libraries and companies work against each other. In the 1970s, the operating system was equal to the programming language. That means, Pascal was everything at once: the IDE, the bytecode and the language. Today's developers have invented a special tool for each purpose. That means, we have different programming languages like C++, Java and Lisp, and on the other hand we have different operating systems. So it is possible to combine a certain language with a certain OS and use yet another library.

I want to give a short example. The new C++20 standard is currently being planned. C++20 amounts to major improvements in the language. The funny thing is that most programmers are not aware of the changes, because they do not need a better C++. But that is not a problem which affects the development of C++20, because the language itself needs improvement. The new thing is that every part is developed independently. That means, after the C++20 standard is implemented, it will cause problems with the operating system, with libraries written before, and with the programmers who use it. In the 1970s this would have resulted in the failure of the project; nowadays a subject in the computing industry is driven forward even if it results in negative effects somewhere else.

The reason why this disintegration is possible is simple: better communication over the internet. If we read today's online forums, many users talk about the effect, and this guides them to bypassing the situation. What does that mean? It means that the community consists of units which work against each other in a battlefield-like environment. Or to say it more colloquially: the C++20 standard was developed only with the goal of making life harder for programmers.

But how exactly did such a disintegrated system become possible? Why is the development spread out over many instances? The answer is that “working together” brings no profit. In theory, it would be possible for the C++ community to arrange the changes with other stakeholders, for example with companies like Microsoft, with operating system developers like Red Hat, or even to ask the programmers whether they really need the new features. But in reality, the C++20 standard is established without asking anybody for comment. It is a one-man show which follows its own rules. In theory, it would be possible for somebody from outside to criticize the development, because “Concepts” (which is part of C++20) is needed by no one. But this statement isn't recognized. The C++ team will ignore it.

Working against each other

The usual assumption is that teamwork is important because it multiplies the strength, right? The problem with teamwork is that first of all a team is needed, which is an organisation with an identity shared by its members. What is the common attitude of an operating system programmer and the creator of a computer language? Right, there is no overlap; they have different goals in mind. The first one is interested in an easy-to-use system kernel, while the second one is interested in a clean language definition. The better approach is to let both teams work against each other. That means, the newly invented programming language makes system development impossible, while the new operating system uses the language in the wrong way. Instead of promoting trust in each other, the dominant feeling is paranoia about what the opposite group will do next. The demand for communication increases, and flame wars become normal. A flame war is no longer a factual discussion form, but escalates the conflict to a personal level. That is the overall direction of what has changed since the 1970s. In old-school computing everybody worked together on the same project, while today's programmers have unlearned cooperation.

The new talent is called gamification. That means, to see the development of a project as an individual task of gaining points. And increasing one's own count is only possible if somebody else loses points. If the C++ language team has done everything right, the operating system division will hate them.

Why git is great

Why git is used so often in software development is not obvious to the beginner. The core functionality of the software is to execute the “git commit” command every hour. All other useful features are not visible at first. So what's the deal? The main feature is what the programmer can do with the incrementally generated git repository. For example, he can do a fulltext search over the history with “git grep” to search for a certain pattern. Or he can manually scroll back to an old version which works better than the current one. But perhaps the most impressive feature is the gitstats tool, which uses the “.git” folder to generate lots of reports. Such information is not available without git. The project manager can see, for example, which programming languages are used in the project, how many lines of code were added each day, and what the individual programmer has done. For example, my own project currently consists of 67% C++ lines, 26% Python and only 1% Forth.
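
As a quick illustration, the commands in question look like this; the search pattern and the report directory are arbitrary examples:

git grep "collision" $(git rev-list --all)    # fulltext search over the whole history
git log --oneline                             # scroll back to older versions
gitstats . /tmp/report                        # generate HTML reports from the .git folder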

At the beginning it was a bit complicated to use git right. I can recommend the following workflow. First, a temp/ directory is used which is ignored by git. This is the place for doing some programming without any git version history, like in a normal non-git repository. If something useful was written, a normal file is created in the active directory, which is under the control of git. If the file is no longer relevant, it is moved to the old/ folder. So the general idea is to store the source code in separate files and not to reuse the same file and overwrite its content. Git is used as a simple backup tool.
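
Setting up this layout takes only a few commands; the directory names are of course just my own convention:

mkdir myproject && cd myproject
git init
mkdir temp old
echo "temp/" > .gitignore        # the scratch area stays outside the history
git add .gitignore
git commit -m "initial layout"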

Not very surprisingly, the productivity of a software project is very small. No more than 10 lines of code per day are possible. The number is constant over long time periods and is equal in all programming languages. There is no magic trick out there to improve the productivity; the only option a project manager has is to increase the number of programmers. That means, one programmer can generate 10 LoC/day, and 8 programmers can create 80 LoC/day.

The reason why the productivity is so low is twofold:

1. New code lines are not added every day; on some days nothing happens.

2. The average programmer doesn't know how to program in C, Perl, Java or C++. Most of the code he writes can be thrown away. Instead, the software project is used to learn coding itself. That is not only true for untalented programmers; it is the rule, and it is always correct.

A bit surprising was that in practice the idea of branching and merging can be ignored. That is surprising because most git tutorials explain this aspect intensely, with the idea that making a branch and bringing code from different developers together is the main feature of git. It is not, or the other possibility is that I haven't fully understood what git is ;-) But from my point of view, the core function of git is that of a backup tool. That means, we have a project folder with subdirectories. The folder is populated with source code, and every ten minutes the “git commit” command is executed, which saves all the information into the repository.
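
Such a periodic commit can be scripted as a simple autosave loop; the ten-minute interval comes from the text above, and the commit message is arbitrary:

while true; do git add -A; git commit -m "autosave"; sleep 600; done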

And I would go a step further and suggest that merging and branching have nothing to do with the git commands of the same name; they are a manual workflow. Merging only means to copy & paste source code from different sources into one file and commit the changes. If a merge error occurs, no magic git command will help; instead, the programmer has to fix the problem manually and write down a comment in the next git commit.

RRT sampling with pointers

Some novice C++ programmers ask why pointers are part of C++. Usually, pointers are not used in everyday programming, because the normal “.” access is better suited. That means the compiler optimizes what the user wants, and he can program in C++ like in Python. But there is one task for which pointers are needed. A concrete example: I want to build an RRT solver which uses a physics engine. That means, to create different nodes where every node is a Box2D engine. This is called a game tree, and it has the aim of figuring out different plans. To reach that goal, the entire class has to be copied into a temporary object, and in that copy the actions are executed. The original Box2D engine remains the same; that means after the procedure I have two physics engines in memory.

int main() {
    Physics *myphysics = new Physics();       // the original Box2D world
    Physics *temp = new Physics(*myphysics);  // second instance via deep copy
}

The first prototype is shown here. Whether this source code will work in reality, I'm not sure. I created two instances with the deep copy. The compiling works, and after pressing run no segmentation fault is shown. Inside the Physics class a normal Box2D world is used. If the code really works, the next step is to write a wrapper class, so that the game can be used with the following high-level commands:

– reset
– update
– copy

The idea is to create hundreds of instances at the same time and keep them in different states. It is a classical RRT solver, only with a physics engine as nodes.
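
A rough sketch of that loop, with the Box2D world replaced by a hypothetical toy Physics struct so the example stays self-contained:

#include <vector>
#include <cstdlib>

// hypothetical stand-in for a class that wraps a b2World
struct Physics {
    float x = 0, vx = 0;                                  // toy state instead of a Box2D world
    void reset() { x = 0; vx = 0; }
    void update(float force) { vx += force; x += vx; }    // advance one timestep
    Physics *copy() const { return new Physics(*this); }  // deep copy = new tree node
};

int main() {
    std::vector<Physics*> tree;
    tree.push_back(new Physics());                        // root node
    for (int i = 0; i < 100; i++) {
        Physics *parent = tree[std::rand() % tree.size()]; // sample a random node
        Physics *child = parent->copy();                  // duplicate the whole engine
        child->update((std::rand() % 3) - 1.0f);          // try a random action in the copy
        tree.push_back(child);                            // the original stays untouched
    }
    for (Physics *p : tree) delete p;
    return 0;
}

Every element of the tree is a full, independent engine instance in memory, addressed through its pointer.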

In the introduction I asked why pointers are needed. The answer is that for such special purposes the pointer feature in C++ is not a bug, but a really useful tool for programming advanced stuff. And I would guess that there are many more tasks which can only be solved with pointers. So my greetings to the Java and Python communities who believe that they are safe …

Bug: Mingw cross-compilation of hello world doesn't run in wine

Under Fedora 27 I installed the Mingw toolchain for creating Windows 64-bit applications:

dnf install mingw64-gcc-c++ mingw64-gcc

#include <iostream>

int main() {
    std::cout << "Hello world!\n";
    return 0;
}
After compiling the source code and running it with wine, I get the following message:

x86_64-w64-mingw32-g++ hello.cpp
wine a.exe 
002b:err:module:import_dll Library libstdc++-6.dll (which is needed by L"Z:\\tmp\\1\\a.exe") not found
002b:err:module:attach_dlls Importing dlls for L"Z:\\tmp\\1\\a.exe" failed, status c0000135

At first I thought it is not a technical problem but a question of licensing, and that Cygwin would be the better approach. But the error message says something simpler: wine cannot find libstdc++-6.dll, against which the cross-compiler links dynamically by default. Either this DLL has to be copied next to the exe, or the runtime has to be linked statically.
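
The workaround that is known to help is linking the runtime statically, so that no DLL lookup happens at all:

x86_64-w64-mingw32-g++ -static-libgcc -static-libstdc++ hello.cpp
wine a.exe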

Bug: Implementing a neural network is hard

Just for fun, I'm trying to implement my own neural network. A look at GitHub shows me that I'm not the first guy doing so. My first idea was to use the “Eigen” library to make the matrix multiplication easier. Instead, I decided to do the math in a for loop. Now the first version is working, but it is a mess. The source code is very complicated. That is surprising, because in theory a neuron only does some multiplications and sums up the values, but the problem is to get the overall network structure right.

Nevertheless, the first speed tests have shown why neural networks are interesting. If we put an input vector into the neural network, and the source code is working, it is possible to get the output fast. On my PC, I get around 1 million iterations per second with the -O2 compiler option. That means, in one second I can run the network 1M times to get the output value. It is a very small network, with only 2 input values and one hidden layer, and it contains no learning of the weights. But the speed itself is amazing. If it were possible to approximate a physics engine with a neural network, this would improve the performance. Only for the purpose of documentation, my program works with the following idea:

#include <vector>

class NN {
public:
  std::vector<int> networklayout = {2, 3, 1};  // neurons per layer (hidden size chosen arbitrarily)
  std::vector<float> weight;                   // all weights of the network in one flat vector
  float getoutput(int layerid, int neuronid, std::vector<float> layerinput) {
    // layerid: 0=inputlayer, 1=hiddenlayer, 2=outputlayer
    float result = 0;
    int offset = 0;                            // where this layer's weights start
    if (layerid == 2)                          // output layer: skip the hidden-layer weights
      offset = networklayout[0] * networklayout[1];
    int weightid = neuronid * networklayout[layerid - 1] + offset;
    for (int i = 0; i < (int)layerinput.size(); i++)
      result += layerinput[i] * weight[weightid + i];
    return result;
  }
};
The weights are stored in one long std::vector, and an output function calculates each neuron. The difficult part is to determine which weight has to be multiplied with which input. My source code is not perfect, because it supports only one hidden layer, but for deep learning more than one hidden layer is needed.
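
For completeness, a hypothetical forward pass through the class above could look like this; the uniform weight value 0.5 is only a placeholder, since the network does no learning yet:

int main() {
    NN net;
    net.weight.assign(2*3 + 3*1, 0.5f);          // 6 hidden weights + 3 output weights
    std::vector<float> input = {1.0f, 0.0f};
    std::vector<float> hidden(3);
    for (int n = 0; n < 3; n++)
        hidden[n] = net.getoutput(1, n, input);  // evaluate the hidden layer
    float out = net.getoutput(2, 0, hidden);     // evaluate the single output neuron
    return 0;
}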

Developers don’t like Linux

According to a survey on Stack Overflow, only 20% of developers use Linux for programming their PHP, Java and other applications. In most cases, proprietary software like Windows 10, Mac OS X or Windows 8 is used. The reason for the decision is simple: Linux is missing a good IDE for programming software. The Eclipse Java program has lots of bugs and crashes often, while lightweight editors like Geany are not known in the community. What developers prefer is Visual Studio on Windows and Xcode on the Mac. That is the way software for backend and frontend applications is created.

I think this survey:

a) reflects the reality
b) shows why the market share on the desktop is low

If programmers who use C++ all day are not able or motivated to use Linux as their desktop environment, who should install the penguin operating system then? The end user, perhaps? That is a joke. It makes no sense if the PHP programmer and the C++ programmer use Windows 10 and expect the end user to run a Linux machine. No, if Linux is not a success with developers, then the operating system is a failure for everything else.

The problem with Linux for developers is that apart from the open-source communities themselves (the Linux kernel, systemd, LibreOffice and Gnome), there is no development community out there which programs normal GUI applications or backend applications with a Linux IDE. So in reality we have the inner core of Linux programming around Linus Torvalds, then an empty room, and then the 1 million worldwide GUI developers and application programmers who use Windows 10.

The room between them is empty. That means, the segment of programmers who are not advanced enough to program the Linux kernel directly, but who have enough knowledge to write normal GUI applications and backend software for Linux, is missing.

But there is hope: 20% of developers do use Linux. In most cases they program under Ubuntu, but some of them are familiar with Fedora. I think that is a good starting point for further improvement. It is important to make clear that Linux is a great environment for programming software. It is stable, and all the tools are free. For example: the C++ language itself, the IDE, but also the gtkmm framework, the git software and the web browser for searching for new questions on Stack Overflow. An all-open-source desktop for software development is possible.