This is the home page for the Workshop on Constructive Induction and Change of Representation. The workshop was held on July 10, 1994 in conjunction with ML-COLT '94 (a joint conference of Machine Learning and Computational Learning Theory). All of the papers in the working notes are available from this page.
If you have comments or questions on this document, please contact email@example.com.
The past five years have seen a significant increase in the amount of work in this area. Some of the methods developed have yielded increases in classification accuracy. Others derive features similar to those previously discovered by humans. Still other systems have demonstrated impressive performance improvement through the construction of new representations.
Many issues still remain in the field of constructive induction. We do not understand the range and limitations of current methods, or the kind of representation change that real-world domains may require. The objective of this workshop is to serve as a forum for the presentation of recent work, as well as a forum in which these issues can be discussed.
Click on a paper title to retrieve a PostScript copy of the entire paper. The papers are compressed; your client must be able to uncompress them.
Two of the papers in this workshop deal with molecular biology. A good introduction to this topic for machine learning people is Craven and Shavlik's paper Machine Learning Approaches to Gene Recognition.
John Aronis and Foster Provost
Most existing inductive learning systems form concept descriptions in propositional languages from vectors of basic features. However, many concepts are characterized by the relationships of individual examples to general domain knowledge. We describe a system that constructs relational terms efficiently to augment the description language of standard inductive systems. In our approach, examples and domain knowledge are combined into an inheritance network, and a form of spreading activation is used to find relevant relational terms. Since there is an equivalence between inheritance networks and relational databases, this yields a method for exploring tables in the database and finding relevant relationships among data to characterize concepts. We also describe the implementation of a prototype system on the CM-2 parallel computer and some experiments with large data sets.
Mark W. Craven and Jude W. Shavlik
The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. A number of factors, including training-set size and the ability of the learning algorithm to perform constructive induction, can mediate the effect of an input representation on the accuracy of a learned concept description. We present experiments that evaluate the effect of input representation on generalization performance for the real-world problem of finding genes in DNA. Our experiments demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for the machine learning subfield of constructive induction because the relationship between the two representations is well known, and because conceptually, the representational shift involved in constructing the better representation should not be too imposing.
Nathalie Japkowicz and Haym Hirsh
This paper describes a "bootstrapping" approach to the engineering of appropriate training-data representations for inductive learning. The central idea is to begin with an initial set of human-created features and then generate additional features whose syntactic forms are similar to the human-engineered features. More specifically, we describe a process for the engineering of good representations for learning that takes as input a hand-generated set of features that seem to help learning, and "bootstraps" off of these features by developing and applying operators that generate new features that look syntactically like the human-generated features. Our experiments in the domain of DNA sequence identification show that an initial successful human-engineered representation for data can be expanded in this fashion to yield dramatically improved results for learning. Although our approach is currently manual, we believe that it is expandable into a constructive induction method.
Methods for constructive induction perform automatic transformations of description spaces when representational shortcomings degrade the quality of learning. In the context of concept learning and propositional representation languages, feature construction algorithms have been developed in order to improve the accuracy and to decrease the complexity of hypotheses. In particular, so-called hypothesis-driven constructive induction (HCI) algorithms construct new attributes based upon the analysis of induced hypotheses. A new method for constructive induction, CN2-MCI, is described that applies a single, new constructive induction operator (Ok) in the usual HCI framework to achieve a more fine-grained analysis of decision rules. Ok uses a clustering algorithm to map selected features into a new binary feature. Given training examples as input, CN2-MCI computes an inductive hypothesis expressed in terms of the transformed representation. Although this paper presents work in progress, early empirical findings suggest that even a single, simple constructive induction operator is able to improve induction considerably. Furthermore, preliminary experiments indicate that most of the obtained features are intelligible. Nevertheless, these results have to be confirmed by further experiments, especially by applications to real-world domains.
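The core of an operator like Ok, mapping a pair of selected features into one new binary feature via clustering, can be sketched with a tiny 2-means step. The abstract does not specify CN2-MCI's clustering algorithm, so everything below (the 2-means choice, the initialization, the function name) is an illustrative assumption.

```python
def construct_binary_feature(examples, i, j, iters=10):
    """Map the pair of selected attributes (i, j) into a single new
    binary attribute via a minimal 2-means clustering in the (i, j)
    plane; each example's new feature value is its cluster index."""
    points = [(e[i], e[j]) for e in examples]
    centers = [points[0], points[-1]]  # crude initialization
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[dists.index(min(dists))].append(p)
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else c
            for g, c in zip(groups, centers)
        ]
    new_feature = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
        new_feature.append(dists.index(min(dists)))
    return new_feature

# hypothetical data: two well-separated groups in attributes 0 and 1
data = [(0, 0), (0, 1), (9, 8), (8, 9)]
print(construct_binary_feature(data, 0, 1))  # → [0, 0, 1, 1]
```

The new binary attribute compresses an interaction between two original attributes into a single dimension that a rule learner like CN2 can test directly.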
We describe CiPF 2.0, a propositional constructive learner that copes simultaneously with noise and representation mismatch in the training examples. CiPF 2.0's abilities stem from coupling the robust selective learner C4.5 (and its production rule generator) with a sophisticated constructive induction component. An important new general constructive induction operator incorporated into CiPF 2.0 is the simplified Kramer operator, which abstracts combinations of two attributes into a single new boolean attribute. The so-called Minimum Description Length (MDL) principle acts as a powerful control heuristic guiding the search in the possibly vast representation space.
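The role of MDL as a control heuristic can be illustrated with a crude two-part cost: bits to encode a rule set plus bits to encode the training examples it misclassifies. This is a generic MDL sketch, not CiPF 2.0's actual encoding; the function name and all constants are assumptions.

```python
import math

def mdl_score(n_rules, n_conditions, n_errors, n_examples):
    """Two-part MDL-style cost of a candidate hypothesis: model bits
    (rule-set size) plus data bits (cost of listing the exceptions).
    A representation change is kept only if it lowers this total."""
    model_bits = n_rules * math.log2(1 + n_conditions)
    if 0 < n_errors < n_examples:
        p = n_errors / n_examples
        data_bits = n_examples * -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    else:
        data_bits = 0.0
    return model_bits + data_bits

# a hypothesis with fewer exceptions costs fewer bits overall
print(mdl_score(2, 4, 1, 100) < mdl_score(2, 4, 10, 100))  # → True
```

The heuristic trades hypothesis complexity against fit, so a constructed feature is accepted only when the simpler rules it enables pay for its own encoding cost.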
T.D. Ross, M.J. Noviskey, M.L. Axtell, D.A. Gadd and J.A. Goldman
This paper offers a perspective on features and pattern finding in general. This perspective is based on a robust complexity measure called Decomposed Function Cardinality. A function decomposition algorithm for minimizing this complexity measure and finding the associated features is outlined. Results from experiments with this algorithm are also summarized.
Vladimir N. Sazonov and Janusz Wnek
With most machine learning methods, if the given knowledge representation space is inadequate then the learning process will fail. This is also true with methods using neural networks as the form of the representation space. To overcome this limitation, an automatic construction method for a neural network is proposed. This paper describes the BP-HCI method for hypothesis-driven constructive induction in a neural network trained by the backpropagation algorithm. The method searches for a better representation space by analyzing the hypotheses generated in each step of an iterative learning process. The method was applied to ten problems, which include, in particular, the exclusive-or, MONK2, parity-6BIT and inverse parity-6BIT problems. All problems were successfully solved with the same initial set of parameters; the extension of the representation space was no more than necessary for each problem.
Janusz Wnek and Ryszard S. Michalski
This paper addresses a class of learning problems that require the construction of descriptions that combine both M-of-N rules and traditional Disjunctive Normal Form (DNF) rules. The presented method learns such descriptions, which we call conditional M-of-N rules, using the hypothesis-driven constructive induction approach. In this approach, the representation space is modified according to patterns discovered in the iteratively generated hypotheses. The need for the M-of-N rules is detected by observing "exclusive-or" or "equivalence" patterns in the hypotheses. These patterns indicate symmetry relations among pairs of attributes. Symmetrical attributes are combined into maximal symmetry classes. For each symmetry class, the method constructs a "counting attribute" that adds a new dimension to the representation space. The search for hypotheses in iteratively modified representation spaces is done by the standard AQ inductive rule learning algorithm. It is shown that the proposed method is capable of solving problems that would be very difficult to tackle by any of the traditional symbolic learning methods.
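The "counting attribute" idea can be sketched directly: for a class of symmetrical binary attributes, the new attribute's value is how many of them are true, which turns an M-of-N condition into a simple threshold test a rule learner can express. The attribute names and the 2-of-3 rule below are hypothetical illustrations, not from the paper.

```python
def counting_attribute(example, symmetry_class):
    """New counting attribute for a class of symmetrical binary
    attributes: its value is the number of attributes in the class
    that are true, adding one new dimension to the representation."""
    return sum(example[a] for a in symmetry_class)

# hypothetical M-of-N rule: "at least 2 of x1, x2, x3 are true"
ex = {"x1": 1, "x2": 0, "x3": 1}
count = counting_attribute(ex, ["x1", "x2", "x3"])
print(count >= 2)  # → True: the at-least-2-of-3 condition holds
```

In the transformed space, a DNF learner such as AQ only needs the single condition `count >= 2` where the original space would require an unwieldy disjunction of attribute combinations.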
Takefumi Yamazaki and Michael Pazzani
This paper presents a method for learning a semantic hierarchy. The semantic hierarchy is used to bias the learning of rules for translating Japanese verbs to English. The creation of a semantic hierarchy is a form of constructive induction, since the nodes in the hierarchy may be regarded as new predicates that enable more accurate translation rules to be learned. Our experimental results show that a semantic hierarchy learned by our method is useful in learning translation rules for Japanese verbs.