How to realize a scientific Artificial Intelligence project

Intelligent machines are without any doubt the future, and it is also clear that a scientific project is needed to understand AI in detail. But how exactly can an Artificial Intelligence project be realized in practice? What are the key steps in booting up a research laboratory? This will be answered in the following tutorial.

First, it is important to know that a science project has to do with buying something. The desired goods can be hardware, software and ideas. But let us look at a small example. Suppose the idea is to start a deep learning research project. The first step is that the lab needs some infrastructure:

• fast internet connection

• Workstation PCs (5000 US$ each), router hardware

• 3D printer, projector for conferences, haptic devices for input

• motion tracking hardware, mocap hardware

• Nvidia deep learning cluster, cloud-based server for running a wiki

Getting access to such technology is simple if enough money is available. Most of the devices can be bought in normal computer retail stores; others (for example deep learning hardware and mocap tracking markers) are available in specialized stores. It sounds surprising, but at least 10% of a research project is about finding the right hardware and deciding which of it has to be bought.

Suppose all the hardware is there and the lab has around 1 million US$ less money. What’s next? Step 2 also has to do with buying something. Buying more hardware makes no sense, because if everything is available, at some point there is no reason to buy the 100th workstation. The next buying decision has to do with scientific ideas. Usually each of the devices comes with a manual, and the AI literature also provides some instructions. The question now is how to use the equipment in the right way. For example, it is possible to install Windows 10 on the workstation PCs or a Linux distribution. It is possible to focus on LSTM neural networks or on convolutional networks. It is possible to investigate research topic A or B. From a certain point of view, this can also be called a buying decision, because the scientist has to decide to get in touch with a certain idea and stay away from another one.

Perhaps one example. It is possible to use out-of-the-box deep learning software like Tensorflow or to program everything from scratch. The question is not how to use Tensorflow or how to program in C/C++; the question is only about the decision itself. That means comparing what is better and what the researcher wants. Usually such decisions are made after reading lots of information and discussing the market with other experts. This also takes a lot of time.

Suppose the research lab has bought computer hardware and has bought some key ideas. The next step is to write down a first description of the project. That means talking about one’s own buying decisions and communicating them to the outside. In most cases, the people who gave the money are interested in this kind of feedback, because they want to know what the 1 million US$ was spent on and which topic the deep learning project is about in detail. Until now, the project took around 80% of the overall time. That means the setup and the buying decisions need most of the time. With the remaining 20% of the time, the researchers can try to realize their own ideas. In most cases they will fail, but that isn’t a problem if the failure is documented well. Writing down that the attempt to boot up the workstation and do something useful with Tensorflow wasn’t successful is, in case of doubt, a real science project. To beginners that sounds crazy, because nobody has done real research. All the project was about was buying things, recognizing what’s wrong with them and writing down that it is unclear what the problem is. But that is the essence of a real AI project. If such a project took 12 months and cost 10 million US$, it can be called a great one and a template for copying the principle.

The most important discovery is that, apart from buying something, there is no way to do science. If somebody takes the standpoint that he does not buy any product out there and does not want to buy any available idea, he is not a scientist. Apart from “going shopping” there is nothing a scientist can do. The only difference from the normal understanding of shopping is that the products are different. Buying new shoes in a deep learning project is out of scope, except if it is about image recognition for an online store. Scientific research is a modern form of hunting. At first the prey is circled, and then the meat gets slaughtered. Communication with the outside and with other researchers is important because it increases the probability of success. Apart from this archaic ritual there is no other possibility to do science. If somebody isn’t interested in this social game, he isn’t part of the scientific community.

Sometimes tigers are called hunters, but they can be called customers too. They search for food like humans go to the supermarket and look at what’s inside the deep-freeze rack. And scientists are doing basically the same. Most of the time they are hanging around in computer stores, in libraries and at conferences, because they are searching for intellectual food. That is the currency they are focused on.

YouTube has a dedicated category under the name “shopping haul”. These are videos about people who visit a store and put everything into the basket that is really good. Usually a haul takes time, because it is not possible to buy everything, so a decision has to be made. A haul can take place in many locations: in a computer store, in a supermarket, in a clothing store and so on. In the case of science, the only difference is the store, which is mostly an academic book store or a computer store. If the scientist is a good one, he will take some time to make a careful decision and he can explain the details.


Artificial Intelligence in Films

Hollywood has produced movies about nearly any subject, with only one exception: Artificial Intelligence is a non-topic in movies. To understand the problem in detail, let us first investigate one of the few examples and discuss whether it is really about Artificial Intelligence.

The most prominent example is the film Wargames (1983), in which an AI becomes self-aware while playing TicTacToe. The technology is introduced to the audience with a library scene in which the protagonist does some research to find out which topic a former scientist worked on. Knowing the real history, it is easy to guess which real person is portrayed here: Claude Shannon. But the total amount of the film that has to do with AI research is very limited.

The next candidates for AI in films are blockbusters like Ex Machina (2015), A.I. Artificial Intelligence (2001) and Star Trek TNG (1987-1994), which all have a plot built around a robot that represents Artificial Intelligence. But in contrast to Wargames, this isn’t grounded in real AI history; it is more a Hollywood version of machine intelligence, and the audience gets no details about how the problems were solved on a technical basis. The same problem occurs in I, Robot (2004), in which the plot is about robots with a “positronic brain”. This has nothing to do with AI as a technology, but is about the ethical consequences of AI. That means AI is introduced as already invented without telling the details, and then the potential dangers and advantages for society are analyzed.

Other movies or TV shows with a more scientific background simply don’t exist. That means Hollywood thinks the audience is not interested in the topic. That is surprising, because other subjects like crime investigation or the work in a hospital are portrayed very well by the entertainment industry. That means, if somebody has watched some episodes of Emergency Room (1994-2009), he gets the impression that he could work in a real hospital.

So why is Artificial Intelligence a non-topic in cinema? We don’t know. I would guess it is a combination of an audience that is not interested in the details and screenwriters who are not familiar with expert systems, LISP, Forth and neural networks. We can say that Artificial Intelligence in films is some kind of taboo. The only aspect which is spelled out in boring detail is the previously mentioned “ethical consequences of AI”, for example in the TV show Westworld (2016-), which describes over more than 20 episodes how robots live together with humans. But this has nothing to do with Artificial Intelligence. Like in Star Trek or in Ex Machina, no realistic details are given; instead AI is explained with a positronic wonder brain, and only the consequences of this technology are explained.

Is the topic of programming a computer to play games perhaps too hard for a naive audience? Are they able to handle the truth? It seems that the film production industry thinks that describing AI realistically is a mistake, so they have figured out how to prevent it. From a technical perspective it would be very easy to make movies a bit more realistic. Some keywords from real research papers would be enough, even if they are used in the wrong context. But it seems that there is no demand for detailed descriptions of important technology, especially not in the trivial science-fiction genre.

Daily AI answers

https://ai.stackexchange.com/questions/6267/what-maths-are-required-to-study-general-ai

Artificial general intelligence is not science, it is indoctrination. The idea is not to invent something, for example a robot, or to solve open problems. It is not possible to fail at such a project. Instead, the idea behind AGI is the [curriculum](http://goertzel.org/agi-curriculum/) itself. That means it is not possible to bring AGI forward or to debate open questions; instead the student can attend the courses and answer the questions given in the material. From a certain point of view, AGI and mathematics are the same. Like math courses at the university, AGI is a teaching-only discipline. That means whoever is good at algebra, stochastics and algorithm theory will get the best grades in AGI too.

The math in AGI consists of information theory, linear algebra and combinatorics. A famous book is Douglas Hofstadter’s “Gödel, Escher, Bach”, which is a general reflection about math and the human brain. It contains chapters about self-referencing systems, chaos theory and everything else which is meaningless.

Do’s and don’ts of asking questions on AI.stackexchange

A while ago I posted my first question on AI.stackexchange. In the meantime, some people have decided to upvote my posts, so that according to the numbers I’m an expert on the forum. I want to take that role seriously and give some advice on how to post the right questions (and answers).

The minimum length of a good post is around 3 paragraphs. It is OK if it’s more, but less is a bit difficult. If the question is short and can be asked in one sentence, it is a good idea to fill the gap with some kind of introduction so that the community gets a better picture of the background. Another pitfall is the right usage of the upvote/downvote button. From a technical point of view the button can be used in two directions, for giving positive or negative feedback. The main problem in the forum is not that the quality is too low and the users must vote carefully to reduce the number of participants; the main problem is that the traffic in the forum is too low, and if somebody gets a downvote he might be discouraged from posting again. So my advice is: never, ever downvote a post. Even if the question is dumb or the answer makes no sense, we press the upvote button. An upvote means only that we have read the posting and want the user to post more content. Upvoting is not a quality judgment.

The next point is the right choice of tags. On the website many tags are available, for example machine-learning, game-ai or neural-network. The best practice is to choose only one tag. This helps to avoid the cross-posting phenomenon because the posting is categorized exactly once.

Finally, some words about possible topics. According to the name, everything which has to do with AI is on-topic. That can be neural networks (which is a very popular topic with lots of postings), but it can also be symbolic AI (GOFAI) or behavior-based robotics. What is off-topic in the forum are pure hardware-related questions like “What is the correct voltage for the Arduino board?” or “Which servo motor has the highest torque?”. Such questions fit better on https://robotics.stackexchange.com/ which is a neighbor website. Also off-topic are pure programming questions like “How to initialize the SFML game engine in C++?”; such questions are welcome on the mainstream Stackoverflow forum. They even have a dedicated tag for [SFML], which right now has around 2200 entries.

One thing is special about the https://ai.stackexchange.com/ website. The overall traffic is low, and so is the number of users. The result is that around 10 postings are enough to break some kind of record. That means even newbies will enter the high-score list of the most active users without any serious activity. Currently the top-ranked user in the forum has posted around 50 answers. Compared to the normal Stackoverflow forum this is a very low amount. The problem is that the topic of Artificial Intelligence gains very little traction worldwide. On the other hand, the forum has grown a bit in the last year, so there is hope that the situation will be better next year.

Why should a newbie post anything on AI.stackexchange? Because the visibility on Google is high. That means, after some minutes the question is searchable with the Google engine and gets maximum reach worldwide. And in most cases, Stackexchange questions are ranked at place #1, which means Google will send a lot of traffic to the content, which results in immediate attention from the Internet.

Case-based reasoning as main technique for implementing AI?

In the hope of improving “Learning from Demonstration”, I found a new term in Artificial Intelligence: case-based reasoning. As far as I understand from the documentation, the idea is that a human expert demonstrates a task and from that demonstration a database is created. If the robot must solve the task on its own, it queries the database. So LfD and CBR have much in common. The difference is that LfD leaves the retrieval process open, while CBR is the more general approach.

But let us simplify the general idea. At first, we program a system in which RC-car 2 must follow RC-car 1. That means car 1 is controlled by a human operator, and car 2 works like a line-follower robot, which means it is a simple tracking controller.

In the second step, the system is modified a bit. Both cars are controlled by humans. That means car 1 is driven by a human expert, and car 2 as well. The second car has the same goal as before: it should follow the other car. The question now is how to write the tracking controller for car 2, and here case-based reasoning comes into play. The idea is to store the gameplay in a .csv file. That means we get a list of positions of car 1 and car 2, so it is visible how human expert 2 reacts to car 1. This .csv file must be converted into a case database, which is then used to generate the tracking controller. A minimal sketch of this idea is shown below.
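The following sketch shows one possible, hypothetical interpretation of that pipeline: each CSV row becomes a case (the relative position of car 1 as the situation, the expert’s command for car 2 as the solution), and the controller simply retrieves the action of the most similar stored case. The column names (`car1_x`, `car2_x`, `steer`, `speed`) are assumptions for illustration, not part of the original description.

```python
import csv
import math

def load_cases(path):
    """Build the case database from the recorded gameplay. Each row becomes
    a case: the situation (position of car 1 relative to car 2) and the
    solution (the action the human expert gave to car 2)."""
    cases = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            dx = float(row["car1_x"]) - float(row["car2_x"])
            dy = float(row["car1_y"]) - float(row["car2_y"])
            cases.append(((dx, dy), (float(row["steer"]), float(row["speed"]))))
    return cases

def retrieve_action(cases, car1_pos, car2_pos):
    """CBR retrieval step: find the most similar stored situation
    (Euclidean distance in relative-position space) and reuse its action."""
    dx = car1_pos[0] - car2_pos[0]
    dy = car1_pos[1] - car2_pos[1]
    situation, action = min(
        cases, key=lambda c: math.hypot(c[0][0] - dx, c[0][1] - dy))
    return action

# Usage: the retrieved (steer, speed) pair is sent to car 2.
# cases = load_cases("gameplay.csv")
# steer, speed = retrieve_action(cases, car1_pos=(4.0, 2.5), car2_pos=(1.0, 2.0))
```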

How big is the Artificial Intelligence market?

Surprisingly small. According to different sources ( https://www.futuresplatform.com/blog/5-countries-leading-way-ai-artificial-intelligence-machine-learning ) around 10 billion US$ per year are invested worldwide into so-called AI startups. These are companies which are doing research in Artificial Intelligence and robotics. If the salary of an employee in the sector is around 50,000 US$ per year, then the spending is enough to keep 200,000 people employed (10 billion / 50,000 = 200,000). Most of these employees are working in the US, some in China and Europe. A deeper look into the list of AI startup companies shows that most of them are classical tech companies which are programming software and call themselves AI companies because it sounds great. Even under the assumption that the market grows in the future, only a minority of employees will be involved in Artificial Intelligence research.

Limits of neural networks

Sometimes neural networks and deep learning are regarded as voodoo magic. In the following blog post I want to bring a bit of clarity into the game. First it is important to ask what is necessary to transform a neural network into a Turing-capable machine. The simplest way of implementing a computer is not a Turing machine itself, but a more basic structure called logic gates. Logic gates are described by truth tables and offer only AND, OR and NOT operations. The interesting aspect of logic gates is that any fixed-size computation a Turing machine can perform can also be realized as a network of such gates.
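As a small illustration of this composability (my own sketch, not part of the original argument), the three basic gates are enough to build an XOR and from that a 1-bit full adder, the building block of binary addition:

```python
# The three basic gates; everything more complex is a composition of them.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    """XOR built only from AND, OR and NOT."""
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):
    """1-bit full adder: adds three bits and returns (sum, carry_out).
    Chaining such adders gives arbitrary-width binary addition, i.e. the
    kind of fixed-size computation mentioned above."""
    s = XOR(a, b)
    total = XOR(s, carry_in)
    carry_out = OR(AND(a, b), AND(s, carry_in))
    return total, carry_out

# Usage: 1 + 1 with carry-in 0 -> sum 0, carry 1 (binary "10")
print(full_adder(1, 1, 0))  # (0, 1)
```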

On YouTube there is a video which shows a prime-number generator built from logic gates only. I would guess that the diagram was created by a compiler from a high-level C program. Most importantly, the shown logic gate diagram is very large. There are at least 40 logic gates visible, perhaps more.

The reason why I’m explaining logic gates is that logic gates can be trained like a neural network. They can be seen as a real neural Turing machine. Not as an external tape which is used together with a neural network, and not as an LSTM network, but as a Turing-capable computer. So what is necessary to train a logic gate network? I have absolutely no idea, but I posted it as a question on AI.stackexchange: https://ai.stackexchange.com/questions/5460/training-of-a-logicgate-network

I would guess that it is simply not possible to train a logic gate network. In theory perhaps, but in reality the state space is too huge. So we have a Turing-capable neural network but no idea how to adjust the weights. And this gives us the answer to what the limits of neural networks are. A normal 3-layer neural network is not Turing-capable. Larger deep learning networks which are based on LSTM neurons and DeepMind’s NTM are perhaps Turing-ready, but they are not more powerful than a logic gate network. The problem is that for normal neural networks, LSTM machines, logic gates or whatever kind of apparatus, no efficient learning algorithm is known. That is the real bottleneck.
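To make the size of that state space tangible, here is a minimal sketch (my own illustration, assuming a small gate set and a simple feed-forward wiring) of what brute-force “training” of a gate network looks like; already for three gates there are thousands of candidate circuits, and the count grows exponentially with every additional gate:

```python
from itertools import product

# Gate types the search may place at each position.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
}

def evaluate(circuit, x0, x1):
    """Evaluate a feed-forward gate network. `circuit` is a sequence of
    (gate_name, input_a, input_b); the indices refer to the signal list
    [x0, x1, gate0_out, gate1_out, ...]. The last gate is the output."""
    signals = [x0, x1]
    for name, ia, ib in circuit:
        signals.append(GATES[name](signals[ia], signals[ib]))
    return signals[-1]

def brute_force(target, n_gates):
    """Enumerate every wiring of n_gates gates and return the first circuit
    whose truth table matches `target`. This is the 'training': a blind
    search through an exponentially growing space of configurations."""
    choices = []
    for g in range(n_gates):
        n_signals = 2 + g                      # inputs plus earlier gate outputs
        choices.append([(name, ia, ib)
                        for name in GATES
                        for ia in range(n_signals)
                        for ib in range(n_signals)])
    for circuit in product(*choices):
        if all(evaluate(circuit, x0, x1) == target[(x0, x1)]
               for x0, x1 in product((0, 1), repeat=2)):
            return list(circuit)
    return None

# XOR as the target truth table: with the gate set above it needs 3 gates
# (e.g. OR, NAND, AND), and the search already covers 12 * 27 * 48 = 15552 circuits.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(brute_force(xor, n_gates=3))
```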

Learning algorithms like the delta rule, backpropagation or Quickprop are a not very elegant form of searching the error landscape for a minimum. They are not able to find a minimal solution. And they won’t even find a simple weight combination for calculating prime numbers. Using a neural network as a Turing machine is a dead end.
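For reference, here is a minimal sketch of what the delta rule actually does: a gradient step on a single linear unit, i.e. a local downhill search in the squared-error landscape. The AND target is my own toy example, not taken from the post.

```python
import numpy as np

# Training data: the AND function (learnable by a single linear unit;
# XOR, by contrast, would never be solved with this setup).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(200):
    for x, t in zip(X, y):
        out = w @ x + b        # linear output of the unit
        err = t - out          # prediction error
        w += lr * err * x      # delta rule: step downhill in squared error
        b += lr * err

# Thresholding the output at 0.5 reproduces AND; nothing here resembles a
# search for a program, only a search for a minimum of the error.
print(np.round(w, 2), round(b, 2))
```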

But I wouldn’t call the deep learning movement in general a failure. Only the part of the community which is trying to move neural networks in the direction of a computing device is a failed project. Even with the fastest Nvidia cards this is not possible. Another aspect of deep learning is, in contrast, very attractive. It is called big data and means using neural networks for storing images.

What is the difference? A neural network can be seen in two ways: first as a Turing-like device with the aim of searching for a program which converts input into output. That is equivalent to a neural Turing machine or to a logic gate network. The other option is to see a neural network as a probabilistic database. Here the aim is to store, for example, 10 GB of images and then search the database for similarity. That is a technology that works. It means that it is possible to use it in practice.

I think the deep learning community should let go of the idea that their 20-layer network is some kind of trainable computer. For realizing even a simple prime-number algorithm, far more neurons (logic gates) are needed, and training such a computer is not possible with current hardware. But what makes sense is to see a neural network as a similarity search algorithm for retrieving images. The simplest form is to store 1 million dog photos as compressed JPEG files on a hard drive and use a convolutional filter to search whether a given image is similar to one of the photos. If yes, we can label the image with the name “dog”.
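A minimal sketch of this “network as similarity search” view follows. Everything here is my own assumption for illustration; the `embed` function is a crude stand-in for whatever convolutional filter is actually used.

```python
import numpy as np

def embed(image):
    """Stand-in for a convolutional feature extractor: maps an image
    (2-D array) to a fixed-length feature vector. In the view described
    above, producing such a vector is the network's only job."""
    h, w = image.shape
    bh, bw = h // 8, w // 8                       # block sizes for an 8x8 summary
    cropped = image[: bh * 8, : bw * 8]
    blocks = cropped.reshape(8, bh, 8, bw).mean(axis=(1, 3))
    return blocks.flatten()

def build_index(images, labels):
    """'Training' in this view is nothing more than embedding every stored image."""
    return np.stack([embed(img) for img in images]), list(labels)

def classify(index, query_image, k=1):
    """Similarity search: return the labels of the k stored images whose
    feature vectors are closest to the query (cosine similarity)."""
    vectors, labels = index
    q = embed(query_image)
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [labels[i] for i in np.argsort(-sims)[:k]]

# Usage (hypothetical): if the nearest stored photo is labeled "dog",
# the query image gets the label "dog".
# index = build_index(stored_photos, stored_labels)
# print(classify(index, new_photo))
```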

This special topic is currently not very well researched, and it makes sense to investigate it more deeply. I think we should use a different vocabulary to make clear what we are doing. Instead of talking about “training a neural network”, the aim is to build a similarity search algorithm which has access to an image database.

Neural Turing machines

In some papers the so-called “Neural Turing machine” is presented. Often these papers are very complicated and contain many mathematical formulas. In reality, a neural Turing machine is simply a device which has logic gates whose exact configuration is driven by a learning algorithm. This kind of neural network is called a “McCulloch-Pitts neuron” and it is Turing-capable. It is a trainable Turing machine. But it can’t be used for any purpose, because it is unclear how to train the logic gates, that means how to decide whether gate #23 is an AND gate and, if yes, which other two neurons provide its input signals.
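For reference, here is a minimal sketch of a McCulloch-Pitts threshold unit. The weights and thresholds below are chosen by hand, which is exactly the part a learning algorithm would have to discover on its own:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: outputs 1 if the weighted sum of the binary
    inputs reaches the threshold, otherwise 0."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Hand-picked weights/thresholds turn the same unit into different logic gates:
AND = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    weights=[-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```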

The reason why today’s computers are not built from application-specific logic gate circuits, but instead use the von Neumann architecture, is that on a von Neumann machine a program can be executed from a tape (memory). And creating such a program can be done in a higher-level language. Such programs could also be converted into logic gates, but that would be equal to building a different computer for every different algorithm, which is not very useful.

How deep learning works
Deep learning has nothing to do with neural networks. Instead the algorithm can be described as a similarity search with Convolutional Neural Networks. That means the CNN is used as a metric to determine whether two images are equal, and the input image is compared with a database of known images. The accuracy is higher if the database is bigger.

The misunderstanding is that most tutorials suggest that a neural network works like a computer, and that after training a certain program has been found. But in reality there is a huge difference between a Turing machine and a neural network. Instead it makes sense to call deep learning a sort of filter generation for determining whether two images are equal.

Perhaps a small example of how OCR works in this view. First we need a database of .svg files. The file size should be 10 gigabytes or more. In that database every kind of character from every possible font is stored. Now we take a new .svg file and search the database for a similar image. The similarity index is calculated with a Convolutional Neural Network, and the query is very fast. If we find a match, we know that the picture shows, for example, the character “w”. This has nothing to do with deep learning or neural networks, and also not with a logic gate or with a neural Turing machine. Instead the accuracy depends on two factors:

– size of the database
– similarity filter

It is false to store image data inside a neural network; the images can be stored in a normal database. And it is also false to search for an algorithm; instead a given algorithm is used to generate the image filter, and this results in the optical character recognition.
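A minimal sketch of this view of OCR: rasterized glyph bitmaps stand in for the .svg database, and a plain pixel distance stands in for the convolutional similarity filter. All names and shapes are assumptions for illustration, not the actual pipeline described above.

```python
import numpy as np

# Glyph database: label -> rasterized bitmap (same shape for all entries).
# In the description above this would be built from .svg files of every font.
glyph_db = {
    "I": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]]),
}

def recognize(image):
    """Return the label of the stored glyph most similar to `image`.
    Accuracy depends only on the two factors named above: how many glyphs
    are stored and how good the similarity measure is."""
    best_label, best_dist = None, float("inf")
    for label, template in glyph_db.items():
        dist = np.abs(image - template).sum()  # crude stand-in for a CNN similarity filter
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Usage: a slightly noisy "T" is still closest to the stored "T" template.
noisy_t = np.array([[1, 1, 1],
                    [0, 1, 0],
                    [0, 1, 1]])
print(recognize(noisy_t))  # "T"
```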