# Preprints and working papers

### On the rank of a random binary matrix with independently chosen rows of constant weight

In preparation.

### A linearized DPLL calculus with clause learning

October 2007, updated January 2010.

Many formal descriptions of DPLL-based SAT algorithms either omit essential proof techniques applied by modern SAT solvers or depend on particular heuristics or data structures. This complicates the analysis of the proof-theoretic properties and the search complexity of these algorithms. In this paper we address this problem by developing a nondeterministic proof calculus that models the functioning of SAT algorithms based on the DPLL calculus extended with clause learning. The calculus is independent of implementation details, yet precise enough to enable a formal analysis of realistic DPLL-based SAT algorithms.

University of Potsdam Preprint Archive (March 2009).
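For orientation, the core DPLL procedure that such a calculus extends can be sketched in a few lines of Python. This is a minimal illustration (unit propagation plus chronological backtracking, without the clause learning the paper adds); clauses are lists of signed integers, and all names are our own.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly add literals that are forced by unit clauses.

    Returns the extended assignment, or None on a conflict.
    """
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            free = [lit for lit in clause
                    if lit not in assignment and -lit not in assignment]
            if not free:
                return None  # all literals false: conflict
            if len(free) == 1:
                assignment.add(free[0])  # unit clause: literal is forced
                changed = True
    return assignment


def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable.

    Clauses are lists of nonzero integers; -v denotes the negation of v.
    """
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None  # propagation ran into a conflict
    for clause in clauses:
        if not any(lit in assignment for lit in clause):
            break  # clause not yet satisfied: branch on one of its variables
    else:
        return assignment  # every clause satisfied
    var = abs(next(lit for lit in clause
                   if lit not in assignment and -lit not in assignment))
    for branch in (var, -var):
        model = dpll(clauses, assignment | {branch})
        if model is not None:
            return model
    return None
```

A clause-learning solver replaces the blind two-way branch with conflict analysis that derives a new clause from each failed branch; the calculus in the paper models exactly that extension without fixing branching heuristics or data structures.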

# Conference papers

### Codegenerierung für anwendungsspezifische Prozessoren mit evolutionären Algorithmen

(*Code generation for application-specific processors using evolutionary algorithms*.)

*Informatiktage 2001*, 213–215, Konradin-Verlag, November 2001. Also presented at the *Fachbereichstag Informatik*, October 2002.

This paper describes an approach to code generation for application-specific processors with instruction-level parallelism and complex instructions. Examples of such architectures are digital signal processors (DSPs), multimedia processors with dedicated instruction sets for the processing of audio or video data, and other application-specific instruction-set processors (ASIPs). Our method is based on two coupled evolutionary algorithms, one performing instruction selection and the other optimizing register allocation and instruction scheduling. Both optimization processes work solely on a graph representation of the input program.

Download PDF (Only available in German.)
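The overall shape of such an evolutionary search can be illustrated with a generic, deliberately simplified loop over bit-string genomes. This is not the paper's coupled two-level method; the toy "one-max" fitness merely stands in for a real objective such as schedule length, and all names are our own.

```python
import random


def evolve(fitness, genome_len=20, pop_size=30, generations=100, seed=0):
    """Generic (mu + lambda)-style evolutionary loop over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Each parent produces one child by flipping each bit
        # with probability 1/genome_len.
        children = [[bit ^ (rng.random() < 1.0 / genome_len) for bit in parent]
                    for parent in pop]
        # Elitist survivor selection: keep the best pop_size individuals.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)


# Toy "one-max" fitness (count of ones); a real objective would instead
# score, e.g., the length of the generated instruction schedule.
best = evolve(fitness=sum)
```

In the paper's setting, two such loops run coupled: one population encodes instruction selections, the other register allocations and schedules, and the fitness of one depends on the current best of the other.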

# Research reports

### Mapping IEC 1131-3 to the ECMA-335 Common Language Infrastructure and the Real-Time Operating System QNX

(with Uwe Petermann) Research report, Leipzig University of Applied Sciences, March 2003, revised February 2005.

In this report we summarize the results and experiences of the *CommonCtrl* project. Its aim was to integrate a process control system based on the PLC programming language IEC 1131-3 with a dynamic object-oriented environment based on the ECMA-335 Common Language Infrastructure running on the real-time operating system QNX. The project was motivated by the need to combine the standardized programming language IEC 1131-3, which is familiar to many engineers and has proved to be a solid foundation for process control systems, with a platform providing a large class library, dynamically loadable components, and interoperability with popular operating systems and communication standards.

### Mapping IEC 1131-3 to the ECMA-335 Common Language Infrastructure and the Real-Time Operating System QNX

(with Uwe Petermann) Research report, BitCtrl Systems, January 2003.

This report contains the results of the *CommonCtrl* project in full detail.

(Not publicly available.)

# Theses

### A Recurrent Neural Network Model for Pattern Recognition

Master’s thesis, Max Planck Institute for Mathematics in the Sciences and Leipzig University of Applied Sciences, December 2003.

Many neural models of the visual system of mammals are essentially feed-forward models that either do not incorporate recurrent connections at all or use them only as a secondary device. This contradicts neuroanatomical facts and the results of numerous neuropsychological experiments, which give evidence that recurrent connections play a major role in the processing of visual input. In my Master’s thesis I developed a recurrent neural network model to account for these results. The model is based on the idea that, particularly in the presence of noisy data, it is beneficial not to evaluate the whole input at once, but only those features whose evaluation maximizes the expected information gain at a given point in time. The network employs its backward connections to select the features to be evaluated, which makes the recurrences a central control element and accords better with biological findings than traditional models do.
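One common way to formalize the expected information gain of evaluating a feature (a standard definition; the thesis may use a variant) is the conditional mutual information between the stimulus hypothesis $S$ and a candidate feature $F$, given the evidence $e$ gathered so far:

```latex
\[
  \mathrm{EIG}(F)
  \;=\; H(S \mid e) \;-\; \mathbb{E}_{f \sim F \mid e}\bigl[ H(S \mid e, f) \bigr]
  \;=\; I(S;\, F \mid e),
\]
```

so at each time step the network would evaluate the feature that maximizes this quantity.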

### Codegenerierung für variable Zielarchitekturen mit evolutionären Algorithmen

(*Retargetable Code Generation Using Evolutionary Algorithms*.)

Diploma thesis, Leipzig University of Applied Sciences, October 2001. In 2002 the thesis received the award of the Fachbereichstag Informatik for an outstanding diploma thesis in technical computer science.

Hardware/software codesign, the integrated design of hardware and software components of a system, is an important paradigm for the design of complex embedded systems. An essential element of every codesign system is a code generator that is able to generate code for different target architectures, where the generated code exploits special features of the respective architecture, such as instruction parallelism and complex instructions. Classical heuristic algorithms for code generation are often not suitable for this task because these features cannot be adequately mapped to their processor model. In my Diploma thesis I therefore developed a new generic code generation method, based on evolutionary algorithms, which is able to automatically generate optimized code for different target architectures.

Download PDF (Only available in German.)

# Notes

### Kolmogorov Complexity and the Incompressibility Method

October 2011.

The notion of Kolmogorov complexity yields a simple yet powerful proof technique, called the incompressibility method. The purpose of this paper is to explain the concepts on which this technique is based and, along the way, to provide a concise introduction to Kolmogorov complexity theory. The only prerequisite is a basic understanding of computable functions.
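The counting argument on which the method rests fits in one line. There are only $2^n - 1$ binary programs of length less than $n$, but $2^n$ strings of length $n$:

```latex
\[
  \bigl|\{\, p \in \{0,1\}^{*} : |p| < n \,\}\bigr|
  \;=\; \sum_{i=0}^{n-1} 2^{i}
  \;=\; 2^{n} - 1
  \;<\; 2^{n}
  \;=\; \bigl|\{0,1\}^{n}\bigr|,
\]
```

so for every $n$ there is at least one string $x$ of length $n$ with Kolmogorov complexity $C(x) \ge n$. The incompressibility method applies this observation to a suitably encoded object in order to derive lower bounds.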

### Die Unentscheidbarkeit extensionaler Eigenschaften von Turingmaschinen: der Satz von Rice

(*The undecidability of extensional properties of Turing machines: Rice’s theorem*.)

July 2008, updated January 2010.

This introductory text covers the technique of proving a language undecidable or non-enumerable by means of mapping reducibility (also called many-one reducibility). It explains how this technique can be used to reduce the acceptance problem for Turing machines, the *universal language*, to any nontrivial set of Turing machine descriptions or to the complement of this set. This result, known as Rice’s theorem, shows that every nontrivial extensional property of Turing machines (in other words, every nontrivial set of partial computable functions) is undecidable. Rice’s theorem implies that it is impossible to construct a program that automatically verifies whether an arbitrary program exhibits a certain extensional behavior.

Download PDF (Only available in German.)
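The standard textbook reduction behind the theorem (not necessarily the exact presentation in the note) runs as follows. Let $P$ be a nontrivial extensional property; without loss of generality the everywhere-diverging machine is not in $P$, and some machine $M_0$ is in $P$. An instance $\langle M, w \rangle$ of the acceptance problem is mapped to the machine

```latex
\[
  M'(x) \;=\;
  \begin{cases}
    M_0(x) & \text{if the simulation of } M \text{ on } w \text{ accepts},\\
    \uparrow & \text{otherwise } (M' \text{ diverges}).
  \end{cases}
\]
```

Thus $M'$ computes the same partial function as $M_0$ if $M$ accepts $w$, and the empty function otherwise, so $M' \in P$ if and only if $M$ accepts $w$; a decider for $P$ would therefore decide the acceptance problem.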

### Information maximization in a network of linear neurons

May 2005.

Since the work of Hubel and Wiesel it has been known that many neurons in the lower visual areas of mammalian brains have characteristic response properties. Some cells, for example, are sensitive to light-dark contrasts, whereas others are sensitive to edges of a certain orientation. In a series of papers Linsker showed experimentally that similar response properties can develop in a simple feed-forward network of linear neurons under a Hebbian learning rule, even without any structured input to the network. Later he showed theoretically that the self-organization process leading to these receptive field structures is consistent with the principle of maximum information preservation in the transfer from stimulus to response. This text provides an introduction to Linsker’s analysis.
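For a single linear unit this principle takes a closed form. Assuming, as in the standard Gaussian-channel analysis, an input $\mathbf{x}$ with covariance $C$, output $y = \mathbf{w}^{\top}\mathbf{x} + \nu$, and independent Gaussian noise $\nu$ of variance $\sigma^2$, the transmitted information is

```latex
\[
  I(\mathbf{x};\, y) \;=\; \tfrac{1}{2}\,
  \ln\!\Bigl( 1 + \frac{\mathbf{w}^{\top} C\, \mathbf{w}}{\sigma^{2}} \Bigr),
\]
```

which, under a fixed norm constraint on $\mathbf{w}$, is maximized by the principal eigenvector of $C$, the same direction to which simple Hebbian learning converges.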

### Einige Untersuchungen zu einem Neuronenmodell auf der Basis von dendritischen Spine-Clustern

(*Some investigations on a neuron model based on dendritic spine clusters*.)

June 2002.

Common abstract neuron models treat neurons as (more or less simple) integrate-and-fire units whose excitation is computed as a weighted sum of their inputs. This ignores the fact that the input paths of natural neurons actually consist of dendritic trees with a very elaborate structure: simply summing the inputs arriving at the dendrites disregards the complex interactions between the dendritic synapses. In this paper we investigate a neuron model for associative learning by Alkon, Blackwell, and Vogl. This model is derived from the assumption that the ability of natural neurons to associate incoming signals, and therefore their ability to form learning systems, is based on interactions between spatially neighboring synapses.

Download PDF (Only available in German.)