Ray Solomonoff vs. Lego Mindstorms

Ray Solomonoff (1926-2009) was a mathematician, computer scientist and Artificial Intelligence visionary. His working area was probability theory, which includes inductive inference. He was involved in university education and gave many lectures at conferences. For the younger generation, Solomonoff is hard to understand. His mathematics comes from a time before the first computers were invented. He was fascinated by analog computers, and most of his work was done on a theoretical basis without using any computer. Perhaps the most interesting aspect is that an alternative to Solomonoff's writing and thinking is possible. The opposite is simply called Lego Mindstorms. It also has to do with mathematics and Artificial Intelligence, but it takes a different approach. The main idea behind Mindstorms and robotics for students is that no previous knowledge about stochastics is needed; all that is required to start programming is the robot itself and a laptop to program the device.

I would call the approaches of Ray Solomonoff and Lego Mindstorms antagonistic. That means, if Solomonoff went north, Mindstorms would go south. After a while they do not meet at the same point; instead they move away from each other. From the perspective of computing history, Solomonoff and his ideas represent the past, while Lego Mindstorms and similar projects are the future. I wouldn't call Solomonoff's work wrong, it is simply no longer relevant for current education. That means, from one perspective Lego Mindstorms is the same, because it has to do with thinking machines. But at second glance, it is completely new.

But what exactly is the difference between Solomonoff's induction theory and Lego Mindstorms robotics? The answer is that from the subject of Artificial Intelligence only the part is used which has to do with games and programming, and this is applied in the context of robotics. All the other parts, which have to do with understanding mathematics itself, with Greek symbols, literature references and theoretical models, are ignored. The funny thing is that game-based AI works great. A game is a kind of sandbox in which newbies and experts alike can try out what they want, and everything that is too complicated is simply not part of the game. The results are not only games, but easy-to-understand games. And this allows non-mathematicians and non-scientists to become familiar with advanced technology.
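The "game as sandbox" claim can be made concrete with a few lines of code. As an illustrative sketch (not taken from the text above), here is a complete minimax player for Tic-Tac-Toe: the whole "AI" of such a game fits into one recursive function, with no Greek symbols required.

```python
# Minimal minimax player for Tic-Tac-Toe.
# The board is a list of 9 cells: 'X', 'O', or ' ' (empty).

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from X's perspective: +1 X wins, -1 O wins."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '  # undo the trial move
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

board = ['X', 'X', ' ',
         'O', 'O', ' ',
         ' ', ' ', ' ']
score, move = minimax(board, 'X')
print(score, move)  # 1 2 -- X wins immediately by playing cell 2
```

Everything too complicated for the sandbox (opening books, evaluation heuristics, pruning) is simply left out, and the program still plays perfectly.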


AI discussion forums online

Many AI-related discussion forums are available on the internet:

https://robotics.stackexchange.com/questions (4200 questions)

https://ai.stackexchange.com/questions (2200 questions)

https://stackoverflow.com/questions/tagged/artificial-intelligence (5000 questions)

https://stats.stackexchange.com/questions/tagged/neural-networks (4000 questions)

All of these forums are part of the Stack Exchange network, but the questions are distributed across many sites. The main one is SE.AI, but the other forums also provide lots of content. On Stack Overflow itself, AI questions are mostly downvoted as off-topic. The problem is that AI topics require a lot of academic background knowledge, and it is rare that a simple code snippet can answer the question. In the Cross Validated forum (stats.stackexchange) many AI-related questions are available; it is a kind of mainstream machine learning forum. The machine-learning tag has 11000 questions, neural-networks has 4000 and deep-learning has 1800 questions.

Overall I would estimate that around 30k-50k AI questions are available across the Stack Exchange network. That is small compared to the 16 million questions on Stack Overflow itself, but it is still a huge amount of content.

“Task allocation is entirely solvable without NLP”?

During a discussion about a multi-agent system, a major question was asked: “Is it possible to handle task allocation without natural language processing?”, https://ai.stackexchange.com/questions/7907/what-method-and-tools-should-i-use-for-ai-that-suggests-assigns-a-person-for-a-t

The idea is that Milind Tambe described in his papers a task allocation problem which sits on top of a signaling game. A signaling game is used in game theory to describe a communication game between different actors. In the SE:AI thread the problem couldn't be discussed to the end, because the original post was about a slightly different problem.
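Tambe's actual models are more elaborate, but the core claim — that task allocation needs no natural language at all — can be illustrated with a hypothetical sketch. Here agents communicate only through numeric bids (costs); all agent names, task names and numbers are invented for the example.

```python
# Illustrative auction-style task allocation with zero NLP:
# each agent submits a numeric cost per task, and a greedy rule
# hands each task to the cheapest still-available agent.

def allocate(costs):
    """costs[agent][task] -> numeric bid. Returns {task: agent}."""
    # Sort all (cost, agent, task) bids, cheapest first.
    bids = sorted(
        (cost, agent, task)
        for agent, row in costs.items()
        for task, cost in row.items()
    )
    assignment, used_agents, done_tasks = {}, set(), set()
    for cost, agent, task in bids:
        if agent not in used_agents and task not in done_tasks:
            assignment[task] = agent
            used_agents.add(agent)
            done_tasks.add(task)
    return assignment

# Invented example: two agents, two tasks.
costs = {
    "alice": {"report": 3, "testing": 1},
    "bob":   {"report": 2, "testing": 4},
}
print(allocate(costs))  # {'testing': 'alice', 'report': 'bob'}
```

The "signal" in this toy version is nothing but a number; whether such a greedy mechanism matches the problem in the linked thread is exactly the kind of modelling question the discussion there left open.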

Neural networks as a teaching tool to increase the audience

From a technical perspective, neural networks are not very powerful. They describe a computer which doesn't run a program but determines its output from neuron weights, which have to be trained by an algorithm. The problem with all neural networks is that the state space of the weights is huge, and even advanced algorithms like backpropagation combined with modern hardware are not able to search the complete state space. And yes, every neural network can be replaced by normal computer code written in C++.

But this rant against neural networks ignores their potential for lowering the entry barrier to understanding artificial intelligence. The interesting and surprising fact about neural networks is that most people like them very much. Papers with titles like "neural networks for image recognition" get a broad audience, and in discussion forums neural networks are heavily on topic. Perhaps because they need no program and can find the answer by themselves? What most people want is some kind of Artificial Intelligence light, which is about robotics and intelligent machines, but in a way they can understand easily and which doesn't need hundreds of lines of code. Neural networks satisfy this need. They have the image of a wheel of fortune, a random generator, which means we only have to press the start button and the network acts on its own.

From a technological perspective, neural networks are a dead end. They are not able to compete with normal computer programs. But what if we convert a classical problem of Artificial Intelligence into a neural network problem? Modelling a domain with a neural network is always possible, and it forces the programmer to reduce the original problem. Perhaps it makes no sense to implement neural networks on a production machine, but as a learning tool to educate people in Artificial Intelligence they are great. What we have seen in the last 4 years on Arxiv is that every problem of Artificial Intelligence, such as image recognition, path planning, robot control and speech recognition, was converted into a neural network issue. Not because neural networks are able to reduce the error rate or give better results than a handcoded solution, but because they help to lower the entry barrier for beginners in Artificial Intelligence. If speech recognition is reduced to a training corpus and a 3-layer perceptron, everybody will understand what the idea is.
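The "3-layer perceptron plus training corpus" idea really does fit in a few lines. As a minimal sketch (XOR stands in for the corpus; the 8 hidden units, learning rate 0.5 and iteration count are arbitrary choices, not from the text):

```python
import numpy as np

# A 3-layer perceptron (input -> hidden -> output) trained with plain
# backpropagation on XOR, a toy stand-in for a "training corpus".
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)        # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)      # forward pass, output layer
    # Backpropagation: gradient of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print((out > 0.5).astype(int).ravel())  # ideally [0 1 1 0]
```

Whether this is the best way to learn XOR is beside the point; the point is that the entire "thinking machine" is twenty readable lines, which is exactly the low entry barrier described above.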

Neural networks are a kind of Python for Artificial Intelligence. Python doesn't bring programming forward, and Python is less powerful than C++, but the language helps to teach programming to beginners and non-experts.

How to realize a scientific Artificial Intelligence project

Intelligent machines are without any doubt the future, and it is also clear that a scientific project is needed to understand AI in detail. But how exactly can an Artificial Intelligence project be realized in practice? What are the key steps in booting up a research laboratory? This will be answered in the following tutorial.

First, it is important to know that a science project has to do with buying something. The desired goods can be hardware, software and ideas. Let us take a small example. Suppose the idea is to start a deep learning research project. The first step is that the lab needs some infrastructure:

• fast internet connection

• workstation PCs (US$5000 each), router hardware

• 3D printer, projector for conferences, haptic input devices

• motion tracking hardware, mocap hardware

• Nvidia deep learning cluster, cloud-based server for running a wiki

Getting in touch with such technology is simple, if enough money is available. Most of the devices are available in normal computer retail stores; others (for example deep learning hardware and mocap tracking markers) are available in specialty stores. It sounds surprising, but at least 10% of a research project is about finding the right hardware and deciding which of it has to be bought.

Suppose all the hardware is there and the lab has around US$1 million less in its budget. What's next? Step 2 also has to do with buying something. Buying more hardware makes no sense, because if everything is available, at some point it makes no sense to buy the 100th workstation. The next buying decision has to do with scientific ideas. Usually each of the devices comes with a manual, and the AI literature also provides some instructions. The question is now how to use the equipment in the right way. For example, it is possible to install Windows 10 on the workstation PCs, or a Linux distribution. It is possible to focus on LSTM networks or on convolutional networks. It is possible to investigate research topic A or B. From a certain point of view, this can also be called a buying decision, because the scientist has to decide to get in touch with a certain idea and stay away from another one.

Let us take one example. It is possible to use an out-of-the-box deep learning framework like Tensorflow, or to program everything from scratch. The question is not how to use Tensorflow or how to program in C/C++; the question is only about the decision itself. That means, comparing what is better and what the researcher wants. Usually such decisions are made after reading lots of information and discussing the options with other experts. This also takes a lot of time.

Suppose the research lab has bought computer hardware and has bought some key ideas. The next step is to write down a first description of the project. That means, to talk about the buying decisions and communicate them to the outside. In most cases, the people who gave the money are interested in this kind of feedback, because they want to know what the US$1 million was spent on and which topic, in detail, the deep learning project is about. Up to this point, the project has taken around 80% of the overall time. That means, the setup and the buying decisions need most of the time. With the remaining 20% of the time, the researchers can try to realize their own ideas. In most cases they will fail, but that isn't a problem if the failure is documented well. Writing down that the attempt to boot up the workstation and do something useful with Tensorflow wasn't successful is, in case of doubt, a real science project. For beginners this sounds crazy, because nobody has done real research. All the project did was buy things, recognize what's wrong with them, and write down that it is unclear what the problem is. But that is the essence of a real AI project. If such a project takes 12 months and costs US$10 million, it can be called a great one and a template for copying the principle.

The most important discovery is that apart from buying something, there is no way to do science. If somebody takes the standpoint that he will not buy any product out there and doesn't want to buy any idea available, he is not a scientist. Apart from "going shopping" there is nothing a scientist can do. The only difference from the normal understanding of shopping is that the products are different. Buying new shoes in a deep learning project is out of scope, except if the project is about image recognition for an online store. Scientific research is a modern form of hunting. First the prey is circled, then the meat gets slaughtered. Communication with the outside and with other researchers is important because it increases the probability of success. Apart from this archaic ritual there is no other way to do science. If somebody isn't interested in this social game, he isn't part of the scientific community.

Sometimes tigers are called hunters, but they could be called customers too. They search for food the way humans go to the supermarket and look through the freezer aisle. And scientists do basically the same. Most of the time they hang around in computer stores, in libraries and at conferences, because they are searching for intellectual food. That is the currency they are focused on.

YouTube has a dedicated category under the name "shopping haul". These are videos of people who visit a store and put everything that looks really good into the basket. Usually a haul takes time, because it is not possible to buy everything, so decisions have to be made. A haul can take place in many locations: in a computer store, in a supermarket, in a clothing store and so on. In the case of science, the only difference is the store, which is mostly an academic book store or a computer store. If the scientist is a good one, he will take some time to make a careful decision, and he can explain the details.

Artificial Intelligence in Films

Hollywood has produced movies about every subject with only one exception: Artificial Intelligence is a non-topic in movies. To understand the problem in detail, let us first investigate one of the few examples and discuss whether it is really Artificial Intelligence.

The most prominent example is the film Wargames (1983), in which an AI becomes self-aware while playing Tic-Tac-Toe. The technology is introduced to the audience with a library scene in which the protagonist does some research to find out which topic a former scientist worked on. Knowing the real history, it is easy to guess which real person is portrayed here: Claude Shannon. But the total amount of screen time that has to do with AI research is very limited.

The next candidates for AI in film are blockbusters like Ex Machina (2015), A.I. Artificial Intelligence (2001) and Star Trek: The Next Generation (1987-1994), which all have a plot around a robot representing Artificial Intelligence. But in contrast to Wargames, this isn't grounded in real AI history; it is more a Hollywood version of machine intelligence, and the audience gets no details about how the problems were solved on a technical basis. The same problem occurs in I, Robot (2004), in which the plot is about robots with a "positronic brain". This has nothing to do with AI as a technology; it is about the ethical consequences of AI. That means, AI is introduced as already invented, without the details, and then the potential dangers and advantages for society get analyzed.

Other movies or TV shows with a more scientific background simply don't exist. That means Hollywood thinks the audience is not interested in the topic. That is surprising, because other subjects like crime investigation or the work in a hospital are portrayed very well by the entertainment industry. That means, somebody who has watched some episodes of Emergency Room (1994-2009) almost feels able to work in a real hospital.

So why is Artificial Intelligence a non-topic in cinema? We don't know. I would guess it is a combination of an audience that is not interested in the details and screenwriters who are not familiar with expert systems, LISP, Forth and neural networks. We can say that Artificial Intelligence in films is a kind of taboo. The only aspect which is spread out in boring detail is the previously mentioned "ethical consequences of AI", for example in the TV show Westworld (2016-), which describes over more than 20 episodes how robots live together with humans. But this has nothing to do with Artificial Intelligence. As in Star Trek or Ex Machina, no realistic details are given; instead AI is explained with a positronic wonder brain, and only the consequences of the technology are explained.

Is the topic of programming a computer to play games perhaps too hard for a naive audience? Can they handle the truth? It seems the film production industry thinks that describing AI realistically is a mistake, so they have figured out how to avoid it. From a technical perspective it would be very easy to make movies a bit more realistic. A few keywords from real research papers would be enough, even if they were used in the wrong context. But it seems there is no need for detailed descriptions of important technology, especially not in the trivial science-fiction genre.

Daily AI answers


Artificial general intelligence is not science, it is indoctrination. The idea is not to invent something, for example a robot, or to solve open problems. It is not possible to fail at an AGI project. Instead, the idea behind AGI is the [curriculum](http://goertzel.org/agi-curriculum/) itself. That means it is not possible to bring AGI forward or to debate open questions; instead the student can attend the courses and answer the questions given in the material. From a certain point of view, AGI and mathematics are the same. Like math courses at the university, AGI is a teaching-only discipline. That means, whoever is good at algebra, stochastics and algorithm theory will get the best grades in AGI too.

The math in AGI consists of information theory, linear algebra and combinatorics. A famous book is Douglas Hofstadter's "Gödel, Escher, Bach", a general reflection about math and the human brain. It contains chapters about self-referential systems, chaos theory and everything else that is meaningless.