Pattern representation and the future of pattern recognition: 

A program for action

 

ICPR 2004 satellite workshop

St Catharine’s College, Cambridge, UK, Aug 22, 2004

 

 

In the preface to their 1974 book “Pattern Recognition”, Vapnik and Chervonenkis wrote (our translation from the Russian):

 

     . . . To construct the theory [of pattern recognition], above all a formal scheme must be found into which one can embed the problem of pattern recognition. This is what turned out to be difficult to accomplish.

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
     In essence, different points of view on the formulation of the pattern recognition problem are determined by the answer to the question: are there any general principles adequate for describing pattern classes of various natures, or, in each case, is the development of the corresponding [pattern] description language a problem for specialists in that particular area?

     If the former is true, then the discovery of these principles must form the main research direction in pattern recognition [our italics]. [It would be the] main direction, since it would be general and fundamentally new.

     Otherwise, the pattern recognition problem is reduced to the problem of average risk minimization for a special class of decision rules, and can be considered a direction in applied statistics.

    The answer to the above question has not been found, which is why the choice of the problem formulation has been, so far, a question of faith. The majority of researchers, however, have adopted the second point of view, and the theory of pattern recognition is now understood as a theory of risk minimization for a special class of decision rules.

     In this book we also follow that view . . . .

 

At the end of the book, they remark:

 

     . . . It is interesting to note that a meaningful formulation of the pattern recognition problem appeared in 1957-58, and a formal formulation only in 1962-66. These five-to-eight years between a meaningful and a formal formulation were extremely bright years, the years of the “pattern recognition romantics”. In those days it appeared that the pattern recognition problem carried within itself the beginnings of some new idea, which was in no way based on the system of old concepts; researchers wanted to find new formulations, not to reduce the problem to already known mathematical schemes. In this sense the reduction of the pattern recognition problem to the scheme of average risk minimization rouses some disappointment. True, there are attempts to understand the problem in a more complex formulation . . . . As yet, however, such attempts are extremely rare.

 

These sentiments were shared, at that time, by many other leading specialists in pattern recognition. Now, after 30 years, it is only natural to ask again whether the role of “representational” formalisms, i.e. formalisms dealing with pattern representation, has been adequately understood in pattern recognition. The answer quite clearly is “no”, since the above attitude continues to prevail. 

 

Of course, this is not to say that there have been no sustained attempts to overcome this prevailing attitude. During the ’70s and ’80s, the newly emerged syntactic/structural subfield of pattern recognition assigned a central role to non-numeric forms of pattern representation. It was also widely expected that the future of the field would be intimately connected with the “integration” of the structural and the classical (vector-space-based) approaches. Why hasn’t this vision materialized yet? Have the prevailing approaches to “structural pattern recognition” been adequate?

 

As we see it, at this pivotal stage in the development of the field, we must become more, rather than less, judgmental about emerging pattern recognition frameworks. In contrast to the past, we must expect from these frameworks a fuller explanation of the nature of intelligent information processing. Accordingly, we must (tentatively) decide which concepts are central or fundamental to such frameworks. It should be well understood that investing substantial resources in the development of new techniques within older, “comfortable”, but fundamentally inadequate formal frameworks would, in retrospect, be considered irrational. We often forget that the “big picture” history of science is amnesic of inconsequential developments, regardless of the amount of effort devoted to them at the time. Science, more than any other human undertaking, is about the future.

 

How do we know that the “older, comfortable, . . . formal frameworks” are “fundamentally inadequate”? Mainly because all of us have been engaged in wishful thinking: we have long closed our eyes to the fact that the knowledge gained as a result of learning (under the current formal frameworks) falls far short of the longstanding “reputation” of inductive learning as the central intelligent process. Under numeric models, for example, less is known about the structure of a “learned” object than about the structure of a training object, simply because we do not know how to “generate” the former. (In general, how much knowledge of an object’s structure can be gained by “drawing” decision surfaces in a Euclidean space?) Thus, somehow, “learning” hardly improves our information about the structure of objects in the learned class. Of course, the more practical consequences of this situation manifest themselves in the undue brittleness of both the learning process and its results.

 

Among the fundamental concepts and issues in pattern recognition that have appeared on the horizon recently, perhaps the most central is the concept of an (inductive) class representation that is, roughly speaking, both generative and inductively meaningful. Generativity, of course, implies the capability to “generate” objects from the class based on the class representation. Inductive meaningfulness means that a class representation must be efficiently learnable from a “small” training set and must also be stable with respect to the various kinds of “legitimate noise” present in the object representation. As to the “reality” of the class representation, it is quite possible that it emerges simultaneously with the first objects in the class (i.e. at the time they are first formed). We note that this concept of class representation cannot be fully introduced in any of the popular formalisms, in view of their internal formal/structural limitations. In the numeric framework, the class representation is not generative, and in the formal grammar framework (including graph grammars), the concept of class representation is not inductively meaningful. The latter is not really surprising, considering that it is a computational (i.e. logical) rather than a representational formalism.

 

Is it possible to construct a desirable inductive formalism by modifying existing ones? We are quite skeptical about such research directions, and we plan to discuss our reasons at the workshop. Thus, as hinted at above by Vapnik and Chervonenkis, and as one might have expected (given the extraordinary status of induction in science and philosophy), for the first time in the history of science a radically new, representational formalism is required to facilitate the development of inductive informatics. In particular, within such a formalism the class and object representations are much “closer” to each other than in the known formalisms. In contrast, a string (considered as an example of a form of object representation) does not carry within itself enough representational information to allow one to link it reliably (during learning) with the corresponding grammar, i.e. to identify the class to which it belongs.

 

 

 

In this workshop, we plan to

 

·   briefly overview the situation in pattern recognition over the last 30 years

·   discuss the most fundamental issue to be resolved within the emerging pattern recognition (or machine learning) frameworks, i.e. the tight integration of the forms of object and class representation within a single formalism

·   focus on the monumental task associated with the resolution of the above issue: the development of the first (non-numeric) representational formalism in science (introductory paper in the proceedings)

·   discuss why sole reliance on the conventional error-rate-based evaluation methodology is inadequate

·   (new research directions) discuss the emerging formalisms for structural representation, including the evolving transformation system (ETS) model, and their potential impact on, and applications to, pattern recognition and the closely related fields of data mining, information retrieval, bioinformatics, and cheminformatics (as well as science in general).

 

 

 

 

           Workshop Chair:  Lev Goldfarb

 

           Workshop organizing committee:

 

         David Gay

         dave.gay@unb.ca

         Faculty of Computer Science

         UNB, Fredericton

         Canada

 

 

        Lev Goldfarb

        goldfarb@unb.ca

        Faculty of Computer Science

        UNB, Fredericton

        Canada

        http://www.cs.unb.ca/~goldfarb

 

         Oleg Golubitsky

         oleg.golubitsky@unb.ca

         Postdoctoral Fellow

         Computer Algebra Group

         Department of Mathematics

         University of Pisa, Italy

 

        Thore Graepel

        thoreg@microsoft.com

        Machine Learning and Perception Group

        Microsoft Research Ltd

        Cambridge, U.K.

        http://research.microsoft.com/~thoreg

 

         Dmitry Korkin

         korkin@salilab.org

         Postdoctoral Associate

         Andrej Sali Lab

         Department of Biopharmaceutical Sciences

         University of California, San Francisco

         http://www.cs.unb.ca/~dima

         Jose Ruiz-Shulcloper

         recpat@icmf.inf.cu

         Director

         Research Center on Pattern Recognition

         Ministerio de la Industria Básica (MINBAS)

         Havana, Cuba

 

 

 

 

Program

 

 

8:00 – 9:00     Registration

9:00 – 9:05     Welcome

9:05 – 10:15    Pattern representation and the future of pattern recognition (Lev Goldfarb, presentation only)

10:15 – 11:05   The ETS intelligent process: a provisional sketch (Oleg Golubitsky, presentation only)

11:05 – 11:30   Coffee break

11:30 – 12:10   ETS representation of fairy tales (Sean M. Falconer, David Gay, Lev Goldfarb)

12:10 – 12:50   The dissimilarity representation, a basis for domain-based pattern recognition? (Robert P.W. Duin, Pavel Paclík, Elżbieta Pękalska, David M.J. Tax)

12:50 – 14:00   Lunch

14:00 – 14:40   On the articulatory representation of speech within the ETS formalism (Alexander Gutkin, David Gay, Lev Goldfarb, Mirjam Wester)

14:40 – 15:20   Turing-completeness of additive transformations in the ETS formalism (Oleg Golubitsky)

15:20 – 15:40   Coffee break

15:40 – 17:30   Open discussion on the topic of the workshop: the role of pattern representation in pattern recognition

17:30 – 17:40   Concluding remarks