DAVID JOHN BLOWER, 70, passed away suddenly on February 26, 2018. David was born May 12, 1947 in Akron, Ohio, the eldest child of Basil and Jean Blower. He graduated from Ohio State University with a degree in Psychology. He then enlisted in the US Army, spending nearly three years in Berlin, where he met his wife, Elizabeth Claude Blower. Following his honorable discharge from the Army, David continued his academic studies at Stanford, where he received his PhD in Experimental Psychology. He then joined the US Navy, serving at NAMRL Pensacola, FL, NADC Pennsylvania, and Naval Research Center at the University of Central Florida, Orlando. He returned to NAMRL in 1988, retiring honorably from military service in 1994.

David spent his retirement years writing and publishing four books on Information Processing, the Principle of Maximum Entropy, and Bayes’s Theorem.


You can download the books below.

Information Processing: Boolean Algebra, Classical Logic, Cellular Automata, and Probability Manipulation

Volume I was corrected and revised in September 2017. This book begins the task of defining, explaining, arguing for, and, in the end, providing a rationale for information processing. Volume I is concerned with the notion that an information processor is mainly engaged in making inferences. An information processor would prefer to reach definite conclusions by using some form of deduction. Unfortunately, it is often thwarted in this desire by a fundamental lack of relevant information. Probability theory has developed as a rigorous way of dealing with the uncertainty surrounding inference. Thus, we begin by treating some of the formal manipulation rules that crop up in probability theory. To provide some foundational basis for the applications to appear in later Volumes, the topics of Boolean Algebra, Classical Logic, and Cellular Automata will make an appearance here.
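
Purely as an illustration of the kind of topic Volume I takes up, and not taken from the book itself, here is a minimal Python sketch of an elementary cellular automaton. Rule 30, the ring size, and the single seed cell are arbitrary choices made for this example.

    # Illustrative sketch only: evolve an elementary cellular automaton.
    # Rule 30 and the starting configuration are arbitrary example choices.
    def step(cells, rule=30):
        """One update: each cell's next state is the bit of `rule` indexed
        by its 3-cell neighborhood (left, self, right), with wrap-around."""
        n = len(cells)
        return [
            (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single 'on' cell and print a few generations.
    row = [0] * 15
    row[7] = 1
    for _ in range(8):
        print("".join("#" if c else "." for c in row))
        row = step(row)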

Information Processing: The Maximum Entropy Principle

How does an Information Processor assign legitimate numerical values to probabilities? One very powerful method to achieve this goal is through the Maximum Entropy Principle. Let a model insert information into a probability distribution by specifying constraint functions and their averages. Then, maximize the amount of missing information that remains after taking this step. The quantitative measure of the amount of missing information is Shannon’s information entropy. Examples are given showing how the Maximum Entropy Principle assigns numerical values to the probabilities in coin tossing, dice rolling, statistical mechanics, and other inferential scenarios. The Maximum Entropy Principle also eliminates the mystery as to the origin of the mathematical expressions underlying all probability distributions. The MEP derivation for the Gaussian and generalized Cauchy distributions is shown in detail. The MEP is also related to Fisher information and the Kullback-Leibler measure of relative entropy. The initial examples shown are a prelude to a more in-depth discussion of Information Geometry.
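
To give a concrete feel for the assignment just described, here is a short Python sketch, not taken from the book, of the classic constrained-die example: the entropy-maximizing distribution over six faces when the average face value is constrained to 4.5 instead of the uniform 3.5. The target mean and the bisection solver are illustrative choices only.

    import math

    # Illustrative sketch: maximum entropy distribution for a six-sided die
    # whose expected face value is constrained to 4.5 (an invented example).
    faces = [1, 2, 3, 4, 5, 6]
    target_mean = 4.5

    def probs(lam):
        # Maximizing entropy under a fixed mean gives p_i proportional to
        # exp(-lam * i); lam is the Lagrange multiplier for the mean constraint.
        w = [math.exp(-lam * f) for f in faces]
        z = sum(w)
        return [x / z for x in w]

    def mean(lam):
        return sum(f * p for f, p in zip(faces, probs(lam)))

    # Solve mean(lam) = target_mean by bisection; mean(lam) decreases in lam.
    lo, hi = -5.0, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid

    p = probs(0.5 * (lo + hi))
    entropy = -sum(x * math.log(x) for x in p)
    print([round(x, 4) for x in p], round(entropy, 4))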

Information Processing: An Introduction to Information Geometry

This book attempts to lay down some minimal set of coherent and consistent requirements for Advanced Information Processors (AIPs). The idea is advanced that AIPs should reason in an optimal manner by generalizing logic. Logic is generalized through the auspices of probability theory. Probability is presented from the viewpoint of formal manipulation rules such as Bayes’s Theorem and the Maximum Entropy Principle. Information Geometry is an alternative way to think about the mathematical characterization of entropy and the assignment of numerical values to probabilities. The approach emphasizes many solved numerical examples and relies upon the computational ability of Mathematica.
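
As a tiny illustration of Bayes’s Theorem as a formal manipulation rule, the Python sketch below, which is not drawn from the book, updates a uniform prior over a grid of candidate coin biases after some invented data; the grid, the coin setting, and the observed counts are all assumptions made for the example.

    # Illustrative sketch: Bayes's Theorem on a grid of candidate coin biases,
    # after an invented observation of 7 heads in 10 tosses.
    from math import comb

    grid = [i / 100 for i in range(1, 100)]      # candidate values of the bias
    prior = [1 / len(grid)] * len(grid)          # uniform prior over the grid
    heads, tosses = 7, 10

    likelihood = [comb(tosses, heads) * q**heads * (1 - q)**(tosses - heads) for q in grid]
    unnormalized = [pr * lk for pr, lk in zip(prior, likelihood)]
    evidence = sum(unnormalized)                 # the denominator in Bayes's Theorem
    posterior = [u / evidence for u in unnormalized]

    # Posterior mean of the bias; with a uniform prior this approximates (7+1)/(10+2).
    print(sum(q * p for q, p in zip(grid, posterior)))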

Information Processing: Introduction to Artificial Neural Networks

Information Processing, Volume IV is an incomplete volume. The author passed away suddenly on February 26, 2018, while working on both his Information Processing: Supplemental Exercises for Volume I (published May 30, 2019) and Volume IV of his Information Processing series, entitled ‘Introduction to Artificial Neural Networks’ (dated June 2017). He had completed about 150 pages of Volume IV. His friends and collaborators, Dr. Romke Bontekoe and Dr. Barrie Stokes, graciously offered to edit the Volume IV draft, update the Mathematica code, and bring the draft into a form ready to be published. They did not attempt to complete missing chapters or add original content. Some chapters listed in the draft table of contents were represented only by placeholders in the main text, and there was no reasonable way to fill in this missing material. The original ‘Contents’ section was amended accordingly.

A generic label, artificial neural networks (ANNs), has been bestowed on the whole concept of a biologically inspired structure of interconnected nodes and weights that might mimic in computer code how the human brain solves inferential problems. This volume will provide the interested reader a glimpse into some of the author’s ideas concerning ANNs and their application to the general problems of information processing and inference under uncertainty. One common inferential scenario is classification. This volume compares our preferred combined Bayesian and MEP attack with the less probabilistically motivated ANNs. It is profitable to compare these approaches since ANNs might offer a way out of the dilemma posed by the dreaded curse of dimensionality.

Now that Mathematica is endowed with powerful new machine learning and ANN-related functions, they can be used to tackle the typical inferential problems addressed previously in these books, and extensive numerical experiments with ANNs are possible. For appropriately simplified cases, the Mathematica ANN results are compared to those provided by Bayesian and MEP techniques.
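
The sketch below is emphatically not the author’s code and is far smaller than the Mathematica experiments described above; it only shows, in Python, the simplest ingredient of an ANN classifier: a single sigmoid neuron trained by gradient descent on an invented, linearly separable two-class data set.

    import math
    import random

    random.seed(0)
    # Invented data: class 1 clusters around (2, 2), class 0 around (0, 0).
    data = ([((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)]
            + [((random.gauss(0, 0.5), random.gauss(0, 0.5)), 0) for _ in range(50)])

    w1, w2, b = 0.0, 0.0, 0.0          # weights and bias of the single neuron
    rate = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    for _ in range(200):               # gradient descent on the cross-entropy loss
        for (x1, x2), y in data:
            p = sigmoid(w1 * x1 + w2 * x2 + b)
            err = p - y                # gradient of the loss w.r.t. the pre-activation
            w1 -= rate * err * x1
            w2 -= rate * err * x2
            b -= rate * err

    correct = sum((sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1) for (x1, x2), y in data)
    print(f"training accuracy: {correct}/{len(data)}")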

The Volume IV Mathematica Notebooks and other supplemental information are available from a public repository on github.com.

Information Processing: Supplemental Exercises for Volume I

This book contains an extensive number of fully solved problems in Boolean Algebra, Classical Logic, Cellular Automata, and formal probability manipulations to augment the already existing exercises of Volume I. Thus, the problems solved here are somewhat harder or more involved than those appearing in Volume I. I also indulge in the luxury of taking as much space as I desire to fully explicate some core concepts that could only be touched upon lightly in Volume I. For example, I devote a full eight pages and four separate exercises to explaining the combinatorial counting formulas through Feller’s classic problem of computing the probability of the number of accidents during a week. In addition, I spend some time examining the analogy between de Finetti’s representation theorem and the formal probability manipulation rule that expresses the probability for a statement as a weighted average over all models that make numerical assignments to these statements. As a final typical example, I present many solved exercises that look at the development of the hypergeometric probability distribution from the different perspective proffered by Jaynes and Jeffreys. (Published posthumously. Minor typographical and content errors may remain.)
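
As a small taste of the kind of calculation the exercises walk through, and not taken from the book itself, the Python sketch below builds the hypergeometric probability distribution directly from binomial coefficients; the urn sizes used here are invented for the example.

    # Illustrative sketch: hypergeometric probability of drawing k "successes"
    # in n draws, without replacement, from N items of which K are successes.
    from math import comb

    def hypergeometric(k, N, K, n):
        return comb(K, k) * comb(N - K, n - k) / comb(N, n)

    # Invented numbers: an urn with N = 50 balls, K = 5 of them red;
    # probability of seeing k red balls in a draw of n = 10.
    for k in range(6):
        print(k, round(hypergeometric(k, 50, 5, 10), 4))

    # Sanity check: the probabilities over all possible k sum to 1.
    print(sum(hypergeometric(k, 50, 5, 10) for k in range(6)))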