Purposeful Unprimed Real-world Robot with Predictors Using Short Segments

PURR-PUSS was invented in two stages, first PUSS in 1972 and then PURR in 1974.
PUSS was a predictor designed for my earlier STeLLA learning machine, but it was so successful that a multiple PUSS was devised to be the core of a new learning machine, named PURR-PUSS. STeLLA was built out of relays by Peter L. Joyce at the Standard Telecommunication Laboratories, U.K., in 1962-3, but the reliability of relay technology proved inadequate.

The development of PURR-PUSS is described in the
    "Man-Machine Studies" reports UC-DSE/1 (1972) to UC-DSE/40 (1991),
    Progress Reports to the Defence Scientific Establishment (New Zealand),
    ed. J.H. Andreae, ISSN 0110-1188.
These reports have been deposited in the Library of Congress, Washington D.C., USA; the National Technical Information Service, Virginia, USA; the British Lending Library, Boston Spa, UK; and many university libraries around the world.

Think of the PURR-PUSS 'brain' as a cortical sheet with a number of labelled areas, which we will here call Cortical Areas. Each Cortical Area has bundles of inputs from different sensory processors or from other Cortical Areas; it has an output bundle carrying an action or stimulus prediction or an input to another Cortical Area. [A Cortical Area is referred to as a template in John Andreae's book Associative Learning for a Robot Intelligence.]

The connections to and from a Cortical Area determine what rules it can learn, each rule being of the form:

        IF condition (= context), THEN output.

        or, in greater detail,

        IF these input events (one from each input bundle) occur together,
        THEN this event of the output bundle is likely to follow.
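As a concrete illustration (the names and representation here are my own, not taken from the original system), such a rule can be sketched as a condition tuple holding one event per input bundle, paired with the output event it predicts:

```python
# Hypothetical sketch of a PURR-PUSS-style rule: the condition is one event
# per input bundle, occurring together; the output is the event predicted
# to follow. Names and structure are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    condition: tuple  # one event per input bundle, e.g. ("light-on", "moved-left")
    output: str       # predicted action or stimulus event

    def matches(self, inputs: tuple) -> bool:
        """True when the current input events satisfy this rule's condition."""
        return self.condition == inputs

rule = Rule(condition=("light-on", "moved-left"), output="bump")
print(rule.matches(("light-on", "moved-left")))   # True
print(rule.matches(("light-off", "moved-left")))  # False
```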


The input events can be delayed versions of stimulus or action events, so a rule condition is a spatio-temporal sample of event space. A rule is an association of the output event with the input events. Rules can represent dynamic behaviour in multi-dimensional space. If the rule condition is satisfied, then the output will contribute either to the selection of an action or to the prediction of the next stimulus. When the memory for rules fills up, the rules that have not been used for the longest time are discarded. Needless to say, the collection of rules is in constant flux. As long as this change is driven by the interaction of the robot with the real, open, unpredictable world, the collection of rules executed by a low-level program remains open-ended and non-algorithmic, and therefore not a fixed program. This is how PURR-PUSS escapes the limitations of a fixed program or algorithm, as expounded by Searle in his Chinese Room argument or by Lucas and Penrose in their Gödel-theorem arguments.
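The discard policy described above, in which the rules unused for the longest time are dropped when memory fills, can be sketched as a bounded least-recently-used store. This is my own minimal illustration, not the original implementation:

```python
# Sketch (an assumption of mine, not the original program) of a bounded rule
# memory: when full, the rule that has gone unused for the longest time is
# discarded, as the text describes.

from collections import OrderedDict

class RuleMemory:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.rules = OrderedDict()  # condition -> output, ordered by recency of use

    def use(self, condition, output):
        """Store or refresh a rule; evict the longest-unused rule if over capacity."""
        if condition in self.rules:
            self.rules.move_to_end(condition)  # mark as recently used
        self.rules[condition] = output
        if len(self.rules) > self.capacity:
            self.rules.popitem(last=False)     # discard least recently used

mem = RuleMemory(capacity=2)
mem.use(("a",), "x")
mem.use(("b",), "y")
mem.use(("a",), "x")   # rule a is used again, so b is now the oldest
mem.use(("c",), "z")   # memory full: rule b is discarded
print(list(mem.rules))  # [('a',), ('c',)]
```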


The primary motivation for the activity of PURR-PUSS is novelty. Whenever a new rule is stored it is marked as a novelty goal, and a goal-seeking process called leakback computes expectations for rules leading to this and other goals in memory. The same leakback process directs PURR-PUSS towards reward goals and away from disapproval (punishment). Leakback requires parallel processing more than any other part of PURR-PUSS, and it cannot be simulated efficiently on serial computers. Novelty goals are set by PURR-PUSS itself and justify the claim that PURR-PUSS sets its own goals and has free will. This claim was made back in 1977 in John Andreae's first book Thinking with the Teachable Machine.
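The leakback idea can be caricatured, very roughly, as expectation leaking backward from goal events through the stored rules, so that rules leading towards a goal acquire higher expectation. The serial sweep below is my own minimal sketch under that assumption; the original process is parallel and richer than this:

```python
# Minimal serial caricature (my assumption, not the original parallel process)
# of leakback: expectation leaks backward from goal events through rules, so
# rules whose outputs lead towards a goal inherit a discounted expectation.

def leakback(rules, goals, decay=0.5, sweeps=10):
    """rules: list of (condition_event, output_event) pairs.
    goals: dict mapping event -> goal value (novelty or reward).
    Returns an expectation value for every event reached."""
    expect = dict(goals)
    for _ in range(sweeps):                        # repeat until values settle
        for cond, out in rules:
            leaked = decay * expect.get(out, 0.0)  # value leaks back to the condition
            if leaked > expect.get(cond, 0.0):
                expect[cond] = leaked
    return expect

rules = [("start", "mid"), ("mid", "novel")]
print(leakback(rules, {"novel": 1.0}))
# {'novel': 1.0, 'mid': 0.5, 'start': 0.25}
```

The same sweep serves both novelty and reward goals: only the entries in the goal dictionary differ, which matches the text's point that one leakback process handles both attractions.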

John Andreae can be contacted at: John Andreae

All cat line drawings © Gillian M Andreae