Some of My Stuff on AI and Neural Networks

What I understand by the term 'AI': Artificial intelligence (circa 1988)

Apart from what it has become synonymous with, artificial intelligence is the development of processes which can be considered intelligent, those that emulate the intelligence of the human brain for example.

Though the simple display of behaviour similar to that exhibited by intelligent beings or devices is not necessarily indicative of intelligence, neither is accurate replication of the mechanism of the human brain a logical prerequisite for intelligence. Any apparently intelligent device can be presumed intelligent (unless it can be shown otherwise beforehand) until it ceases to exhibit intelligent behaviour. As a case in point, intelligence is currently attributed to the human brain - a device of which we have incomplete knowledge and understanding. Thus, we do not necessarily need to understand or define intelligence in order to emulate it. It could happen by chance (as perhaps can be said for human intelligence) that an intelligent device is developed in ignorance of the processes that constitute it. However, this is not the scientific way.

In academic terms AI is the application of the scientific method to the phenomenon of intelligence: its definition, understanding and production. It embodies the reasoning that in order to produce intelligence one must define it and understand it. That the definition of intelligence may circuitously depend upon the understanding of an intelligent mechanism or that experiments may produce behaviour not fully understood, does not preclude the scientific approach. It is often the case that the accumulation of knowledge is enhanced by an iterative process of hypothesis and experimentation. The point to be emphasised is that our understanding is the primary aim and the pace of its progress will moderate that of our development of intelligent devices.

I am not aware of any conclusive definition of intelligence. There is of course that given in a dictionary, but it deals with vague, higher-order abilities attributed to the human mind. It doesn't specify the external behaviour required for a device to qualify as intelligent, but requires that we be certain of it having inherent abilities such as understanding and reasoning. It is largely a matter of confidence, then, that we can call ourselves intelligent (if we are to use this definition): because we don't sufficiently understand the human mind, we can't know if we truly understand or reason, only that we display behaviour that strongly indicates this. It is only from an individual's belief in their own intelligence and the similarity of human beings that we can presume that anyone else is intelligent. Even so, I expect another definition can be found that relies solely on external behaviour in order to qualify something as intelligent. How intelligent behaviour is achieved by a device does not determine whether it is intelligent; it only relates to our understanding of its intelligence.

I am not about to claim I can define intelligence - it would be foolhardy to do so - but I expect that an attempt would at least be interesting. I postulate that intelligence is the ability to recognise pattern in information and to generate information accordingly; to extrapolate information using patterns observed. Where: a pattern is a repeated arrangement or association of patterns, and information is a sequence of patterns. The competence of a device at this process determines its level of intelligence.
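As a toy illustration of this postulate (entirely my own hypothetical sketch, not a proposed mechanism): a routine that recognises the shortest repeated arrangement in a sequence of symbols and generates further information accordingly.

```python
# Toy sketch of "recognise pattern, extrapolate accordingly".
# The function and its behaviour are illustrative assumptions only.

def extrapolate(seq, n):
    """Find the shortest repeating pattern in seq and extend it by n items."""
    # Try candidate pattern lengths ('periods') from shortest to longest;
    # the full sequence is always a trivial repeating pattern of itself.
    for period in range(1, len(seq) + 1):
        if all(seq[i] == seq[i % period] for i in range(len(seq))):
            break
    # Continue the recognised pattern beyond the observed information.
    return [seq[i % period] for i in range(len(seq), len(seq) + n)]
```

For instance, given the sequence 1, 2, 3, 1, 2, 3, 1 the routine recognises the arrangement (1, 2, 3) and extrapolates 2, 3, 1, 2. This is of course far short of intelligence; it only makes the postulate concrete.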

If AI is simply about making a computer converse as ably as a human being (as I have once heard its goal stated) it first needs to explore the nature of intelligence at broad levels, even independently of human beings or computers, before it is decided how to approach such a task. We must either discover the mechanism of human intelligence or design one that produces intelligent behaviour. AI is concerned with the latter but does well to keep abreast of all related fields concerned with intelligence.

Why I would like to study AI

Human intelligence involves a lot of processes, many of which are apparently extremely sophisticated. I expect it will take a lot of effort to understand them. However, the understanding of intelligence is not the same as the understanding of the production of intelligent devices. I suspect that the fundamental basis to intelligence is simple. Simple enough that we need only provide a suitable medium in which intelligent processes will evolve by themselves.

 

A Vague Sketch of My Ideas on Neuron Models

Sent to 'neuron-uk@mailbase.ac.uk' and comp.ai on 4 Nov 97

I have an approach or neuron model that may be related to recent discussion concerning the part played by dendrites in governing neuron behaviour.

I did mention my idea to Prof. Stuart Sutherland (Sussex Univ.) in about 1986-7 - primarily that neurons could recognise temporal relationships between their inputs (by dint of propagation delay according to the length of each dendrite) - but as this was over a decade ago the idea was then somewhat lacking in plausibility.

I assumed that the varying lengths of a neuron's dendrites determined a propagation delay for each incoming signal and thus allowed a neuron to recognise a temporal relationship between signals, i.e. it would fire if there was a coincident 'peak' of cumulative signals above an amplitude threshold and within a duration threshold. Connections would be modified according to whether (or how well) they contributed to this coincident peak.
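The coincidence mechanism can be sketched as follows. This is a minimal toy under my own assumed parameters (delays, weights, thresholds are illustrative), not a faithful simulation of the analogue event-train packets I used:

```python
# Minimal sketch of a delay-line coincidence-detecting neuron.
# All parameter values here are illustrative assumptions.

def neuron_fires(spike_times, delays, weights, amp_threshold, window):
    """Fire if delayed, attenuated inputs form a coincident peak.

    spike_times   -- arrival time of the spike on each input
    delays        -- propagation delay per dendrite (longer dendrite,
                     larger delay)
    weights       -- connection strengths (attenuation)
    amp_threshold -- summed weight required for a 'peak'
    window        -- duration threshold for coincidence
    """
    arrivals = sorted((t + d, w)
                      for t, d, w in zip(spike_times, delays, weights))
    # Slide a window starting at each arrival; fire if the summed
    # weight of arrivals inside any window reaches the threshold.
    for start, _ in arrivals:
        total = sum(w for t, w in arrivals if start <= t <= start + window)
        if total >= amp_threshold:
            return True
    return False
```

With suitable dendrite delays, three spikes staggered in time (say at t = 0, 2 and 5) are aligned into a single peak and the neuron fires; with equal delays the same spikes never coincide within the duration threshold and it stays silent.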

From implications of the lateralisation of brain function, I deduced that it is just the duration threshold that is different from one hemisphere to the other (simplistically speaking). Hence spatial (non-temporal) faculties would have a wide duration threshold, whereas logical/sequential (temporal) faculties would have a narrower one. In simulation of such neurons, I considered signals as analogue event train packets and retarded them by the propagation delay for each neuron they were connected to (and attenuated them according to the strength of the
connection). Proto-connections were continuously formed as tendrils from a given neuron, to extend toward any other neuron in the vicinity that tended to fire coincidently (given a distance related propagation delay). A connection was made when the tendril 'met' the coincident neuron.
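The tendril rule can be sketched in one dimension as follows; this is a hypothetical toy (positions, growth speed and tolerance are my own assumptions), not the simulation described above:

```python
# Hypothetical one-dimensional sketch of the proto-connection rule:
# a tendril grows toward a coincidently firing neuron and a connection
# is made when the tendril 'meets' it.

def step_tendril(position, target, coincident, speed=0.1, tol=1e-9):
    """Advance the tendril tip toward the target neuron's position
    whenever the target fired coincidently; return (position, connected)."""
    if coincident:
        dx = target - position
        step = min(speed, abs(dx))          # never overshoot the target
        position += step if dx > 0 else -step
    return position, abs(target - position) <= tol

# Repeated coincident firing draws the tendril until a connection forms.
pos, connected = 0.0, False
while not connected:
    pos, connected = step_tendril(pos, 0.5, coincident=True)
```

In the full scheme the candidate target would of course be any neuron in the vicinity, selected by its tendency to fire coincidently given the distance-related propagation delay.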

My current hypothesis is that human intelligence results more from the topology of the human brain in facilitating feedback than from the sophistication of the neuron. Indeed, it seems that the neuron is probably largely the same as it's always been: simply a chemical embodiment of a natural selection process (only neurons that adapt to their electrochemical environment survive).

Obviously this is an off-the-cuff abstract, but it might let you know at a glance whether my idea has any merit at all.

There are many other related ideas I have, e.g. concerning possible electrochemical explanations as to how neurons can connect to each other
and thus embody learning or long-term memory/associations.


Unfortunately it's been a while since I was working in the AI field (suffice it to say my research was prematurely cut short) and I don't have
access to much material (or funds). It's only because of recently obtaining a personal Internet account that I've started reading the comp.ai newsgroup and subscribing to neuron-uk...

I'm wondering if announcements of neuron models are regarded in the same light these days as perpetual motion machines once were. I'll try to enlighten myself of course, by gradually catching up, but if anyone would care to make any suggestions...?