### Can a vector space based learning model discover inductive class
generalization in a symbolic environment?

by

#### Lev Goldfarb, John Abela, Virendra C. Bhavsar and Vithal N. Kamat

#### Abstract

We outline a general framework for inductive learning based on the recently
proposed *evolving transformation system* model. Mathematical foundations
of this framework include *two basic components*: a *set of operations*
(on objects) and the corresponding *geometry* defined by means of
these operations. According to the framework, to perform inductive learning in a
symbolic environment, the set of operations (class features) may need to be
dynamically updated, and this requires that the geometric component allow for
an *evolving* topology. In symbolic systems, as defined in this paper, the
geometric component allows for a dynamic change in topology, whereas
finite-dimensional numeric systems (vector spaces) can essentially have only one
natural topology. This fact should form the basis of a complete formal proof
that, in a symbolic setting, vector space based models, e.g. artificial
neural networks, cannot capture inductive generalization. Since the presented
argument indicates that the symbolic learning process is more powerful than the
numeric process, it appears that only the former should be properly called an
*inductive learning* process.
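To make the central idea concrete, the following is an illustrative sketch (not taken from the paper) of how a set of weighted operations on symbolic objects induces a geometry: a weighted edit distance on strings, where reweighting the operation set changes which objects are near one another, i.e. reshapes the induced topology. The operation names and weights are assumptions for illustration only.

```python
def edit_distance(a, b, costs):
    """Weighted Levenshtein distance between strings `a` and `b`.

    `costs` maps the operation names 'ins', 'del', 'sub' to
    non-negative weights; the distance is the minimal total weight
    of operations transforming `a` into `b`.
    """
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + costs['del']
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + costs['ins']
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else costs['sub']
            d[i][j] = min(d[i - 1][j] + costs['del'],
                          d[i][j - 1] + costs['ins'],
                          d[i - 1][j - 1] + sub)
    return d[m][n]

# Under uniform weights, "aab" and "cb" are equally far from "ab";
# making insertion cheap pulls "aab" much closer, while "cb" stays put.
uniform   = {'ins': 1.0, 'del': 1.0, 'sub': 1.0}
cheap_ins = {'ins': 0.1, 'del': 1.0, 'sub': 1.0}
```

Here the two cost dictionaries play the role of two different operation sets: updating them during learning (as the framework requires) deforms the whole distance structure on the symbolic domain, something a fixed finite-dimensional vector space metric cannot do.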

goldfarb@unb.ca

last updated: 95/12/22