BICS 2006
SCIENTIFIC PROGRAM
The conference will include invited plenary talks, contributed sessions, invited sessions, workshops, tutorials, keynotes
and panel discussions.
Body models as basis for perception, cognition and subjective experience.
by: Prof. Dr. Holk Cruse, Department of Biological Cybernetics, Faculty of Biology, University of Bielefeld, Germany
An alternative to controlling behaviour in a purely reactive way is given by cognitive systems capable of planning ahead. To this end the system has to be equipped with some kind of internal world model. A sensible basis for an internal world model might be a model of the system's own body. Using specific recurrent neural networks, I show that a reactive system with the ability to control a body of complex geometry requires only a slight reorganization to form a cognitive system. This implies that the assumption that the evolution of cognitive properties requires the introduction of new, additional modules, namely internal world models, is not justified. Rather, these modules may already have existed before the system obtained cognitive properties. Furthermore, I discuss whether the occurrence of such world models may lead to systems having an internal perspective.
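The core idea, a body model that a recurrent process can "run" internally to plan a movement before executing it, can be sketched in miniature. The toy example below is not Prof. Cruse's actual network; the two-link arm, the step size and the iteration count are all illustrative assumptions. It relaxes an internal arm model until the model's predicted hand position reaches a target:

```python
import numpy as np

def forward(theta, lengths=(1.0, 1.0)):
    """End-effector position predicted by the internal model of a planar 2-link arm."""
    l1, l2 = lengths
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def relax_to_target(target, theta=None, steps=500, rate=0.1):
    """Recurrently update joint angles until the body model settles on a
    posture whose predicted hand position matches the target -- i.e.
    'planning' a movement on the internal model alone."""
    theta = np.array([0.3, 0.3]) if theta is None else np.asarray(theta, float)
    for _ in range(steps):
        err = target - forward(theta)
        # Jacobian of the forward model with respect to the joint angles
        l1, l2 = 1.0, 1.0
        s1, s12 = np.sin(theta[0]), np.sin(theta[0] + theta[1])
        c1, c12 = np.cos(theta[0]), np.cos(theta[0] + theta[1])
        J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                      [ l1 * c1 + l2 * c12,  l2 * c12]])
        theta = theta + rate * J.T @ err   # gradient-like relaxation step
    return theta

goal = np.array([1.2, 0.8])
posture = relax_to_target(goal)
print(np.linalg.norm(goal - forward(posture)))  # small residual error
```

The same relaxation loop that here solves an inverse-kinematics problem is, in spirit, what turns a reactive body controller into a planning device: the network is simply run on imagined rather than actual sensory input.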
The Haikonen models of machine consciousness, a summary and update.
by: Pentti Haikonen, Dr. (Technology), Nokia Research Center, FI-00045 NOKIA GROUP, Finland
Abstract
Folk psychology describes human consciousness as "the immaterial feeling of being here". This is accompanied by the awareness of self, surroundings, personal past, present and expected future, awareness of pain and pleasure, awareness of
one's thoughts and mental content. Consciousness is also linked to thinking and imagination, which themselves are often equated to the flow of inner speech and inner imagery. Consciousness is related to self, mind
and free will. Consciousness is also seen to allow one to act and execute motions fluently, without any apparent calculations. A seen object can be readily grasped and manipulated. The environment is seen as
possibilities for action. Folk psychology is not science, yet these properties would be useful to a robot as such even if they were not accurate descriptions of the "real" consciousness. An engineering approach
to machine consciousness may be inspired by the folk psychology properties of consciousness; in doing so useful robots may be designed and perhaps some understanding about the "real" consciousness may be achieved as
well. In order to do this the folk psychology properties of consciousness must be evaluated in the terms of cognitive psychology and neurosciences and corresponding electronic systems must be devised. The speaker
has developed a machine model for consciousness along these lines. This model utilizes neural signals as transparent carriers of information; the material machinery remains hidden from the system and only the actual
meanings matter. The neural signals operate as distributed representations in a massively parallel perceptual architecture. This architecture supports the flow of inner speech and imagery. The system is controlled
by motivational factors that arise from hard-wired and learned emotional values. The cognitive machine is supposed to have a body with possibilities for physical action. These actions are executed fluently as responses to imagination or the perceived environment, without numeric computations. Consciousness in the machine is seen as cooperative states between the numerous modalities; this leads to a large number of cross-associations and hence to the possibility of reporting the situation, to the machine itself and to others, in various ways, and of remembering it for a while; the past is connected to the present in "a stream of consciousness". The speaker foresees that the development of various models for conscious machines and robots now under way will eventually converge and lead to practical machines that appear to be conscious and
may even possess some kind of autonomous mind. These machines will find important applications in robotics and in information technology.
SPEAKER: Dr. Pentti O A Haikonen is Principal Scientist in the area of
Cognitive Technology at Nokia Research Center, Helsinki Finland. Haikonen received the M.Sc. (EE), Lic. in Tech. and Dr. Tech. degrees from the Helsinki University of Technology, Finland, in 1972, 1982 and 1999
respectively. He is the author of the book "The Cognitive Approach to Conscious Machines" (Imprint Academic, UK, 2003). Haikonen has been studying machine cognition since the mid-1990s. He has several patents (12 plus
several pending) on associative neurons and networks and digital video signal processing. Haikonen won the Philips national contest for young inventors in Finland in 1969 and has received two awards for outstanding
engineering achievements in 1989 and 1990. His research and hobby interests include machine cognition, electronic circuitry for cognition and the construction of exotic electronic gadgets.
Axiomatic models and puzzles of consciousness: animals, dreams, volition and illusions
by: Igor Aleksander, Imperial College London
When looking at mechanistic models of what it is that makes our neural
systems create a personal sense of consciousness, it is helpful to break this phenomenon down into components called 'axioms'. This recognises that consciousness is not one thing but a combination of several
phenomena each of which has specific supporting neural mechanisms. They are, at least, (1) perception of a world with one's self in it, (2) imagination of experiences or fiction, (3) attentive selection of
experience, (4) planning future action, (5) emotional evaluation of plans. The neural mechanisms implied by these axioms are discussed in order to address some major puzzles about consciousness. Are animals
conscious? It depends on the neural mechanisms that can be discovered in their brains. What are dreams? This can be answered using automata theory that flows from the axiomatic mechanisms. Current work on the
emotional valuation of plans and some visual illusions will be described.
Future Perspective of Brain Science: Towards Mathematical Neuroscience and Engineering Neurocomputing
by: Shun-ichi Amari, Director, RIKEN Brain Science Institute, Laboratory for Mathematical Neuroscience,
Hirosawa 2-1, Wako-shi, Saitama, Japan
Abstract: The brain is the most complex, highest-performing information processing machine produced by nature. More than simply a sophisticated information processor, the brain is the home of the mind and spirit. Since the brain is a biological organ, life science has studied its structure and function, ranging from the molecular, cellular, network and system levels to the behavioural level. On the other hand, the brain is an information processing system, making it a most important subject of study for information science and technology. It works in a completely different way from the modern computer. How the brain processes information associated with its higher functions, such as thinking, planning, speaking, learning and self-organizing memory, is a most interesting fundamental problem. Brain science must integrate information science and life science approaches in our efforts to understand the brain.
Theoretical or computational neuroscience focuses on models of parts of the
real brain or specific information processing functions of the brain. This approach is becoming more and more important for understanding the brain, in cooperation with experimental neuroscience, where detailed facts are revealed. Another approach is more engineering-oriented, called neurocomputing, where artificial neural networks are used to create new information technology inspired by the brain.
Mathematical neuroscience
studies the principles of information processing in the brain in abstract mathematical form. It gives a basis to both computational neuroscience and neurocomputing. Brain science, like other life sciences, is maturing to the point where its principles are represented in mathematical form together with biological facts.
The present lecture overviews brain science, its past and future, and explores the possibility of mathematical neuroscience.
Data Mining, Neural Networks and Rule Extraction
by: Dr. Jacek M. Zurada, Acting Chair, S.T. Fife Professor of Electrical & Computer Engineering
Dept. of Electrical and Computer Engineering University of Louisville, Louisville, KY 40292, USA
Abstract: The opening part of the talk introduces basic premises of data mining. It is shown how numerous
paradigms of neurocomputing prove useful and effective for data mining. These are data-driven modeling, feature extraction, dimensionality reduction, visualization, knowledge extraction and logic rule discovery.
Such modeling often involves the handling of heterogeneous, subjective, imprecise and noisy data.
The second part of the presentation outlines the concept of dimensionality reduction of input data vectors. This
technique leads to reduced models achieved through evaluation of sensitivity matrices of perceptron networks. When developing reduced models it is also useful to eliminate underutilized internal weights and also
neurons via pruning techniques. The concluding part of the talk reviews the capabilities of perceptron networks for producing understandable IF-THEN rules. Logic rule extraction via neural network evaluation is discussed and illustrated with examples.
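The sensitivity-based reduction idea can be made concrete with a small sketch. The network below uses fixed weights as a stand-in for a trained one-hidden-layer perceptron (the architecture, sizes and data are illustrative assumptions, not taken from the talk); it back-propagates through the network to estimate each input's average influence on the output, flagging the least influential input as a pruning candidate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained one-hidden-layer perceptron network.
# Column 2 of W1 is scaled near zero, i.e. input 2 is underutilized.
W1 = rng.normal(size=(4, 3))
W1[:, 2] *= 1e-3
W2 = rng.normal(size=(1, 4))

def net(x):
    return np.tanh(W2 @ np.tanh(W1 @ x))

def input_sensitivities(samples):
    """Average |dy/dx_j| over the data -- one row of the sensitivity matrix."""
    sens = np.zeros(3)
    for x in samples:
        h = np.tanh(W1 @ x)
        y = np.tanh(W2 @ h)
        # chain rule through both tanh layers: diag(1-y^2) W2 diag(1-h^2) W1
        dy_dx = ((1 - y**2) * W2) @ (np.diag(1 - h**2) @ W1)
        sens += np.abs(dy_dx).ravel()
    return sens / len(samples)

S = input_sensitivities(rng.normal(size=(200, 3)))
print(S.argmin())  # input 2 emerges as the best pruning candidate
```

In a real application the same sensitivities would be computed on a network trained on the actual data, and low-sensitivity inputs would be removed before retraining the reduced model.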
SPEAKER: Dr. Jacek M. Zurada is the S.T. Fife Alumni Professor of Electrical and Computer Engineering at the University of Louisville, Louisville, Kentucky, USA. He
is the author of the 1992 PWS text Introduction to Artificial Neural Systems, co-editor of the 1994 IEEE Press volume Computational Intelligence: Imitating Life, and of the 2000 MIT Press book Knowledge Based
Neurocomputing. He is also the author or co-author of more than 250 journal and conference papers in the area of neural networks, computational intelligence, and analog and digital VLSI circuits. Dr. Zurada has
received a number of awards for distinction in research and teaching, including the 1993 Presidential Award for Research, Scholarship and Creative Activity. From 1998 to 2003 Dr. Zurada was the Editor-in-Chief of the IEEE Transactions on Neural Networks. In 2004-05 he is serving as the IEEE Computational Intelligence Society President. He is an IEEE Fellow and an IEEE NNS Distinguished Speaker.
Computational Intelligence in Feedback Systems
by: Marios Polycarpou, Professor and Interim Dept. Head, Editor-in-Chief of the IEEE Transactions on Neural Networks
Dept. of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus
Abstract: Recent technological advances in computing hardware, communications and real-time software have provided the
infrastructure for designing intelligent decision and control systems. Based on current trends, high performance feedback systems of the future will require greater autonomy in a number of frontiers. First, they
need to be able to deal with greater levels of possibly time-varying uncertainty. Second, they need to be able to handle uncertainties in the environment, which will allow the feedback system to be more flexible
in dealing with unanticipated events such as faults, obstacles and disturbances. Finally, key advances in distributed and mobile computing will allow for exciting possibilities in distributed decision making and
control by agent-type systems. This will require feedback systems to operate in distributed environments with cooperative capabilities. One of the key tools for realizing such advances in the performance and
autonomy of feedback systems is "learning." Feedback systems with learning capabilities can potentially help reduce modeling uncertainty on-line, make feedback systems more "intelligent" in the
presence of uncertainty in the environment, and initiate design methods for cooperative feedback systems in distributed environments. During the last decade a variety of learning techniques have been developed
for feedback systems, based on structures such as neural networks, fuzzy systems, wavelets, etc. The goal of this presentation is to provide a unifying framework for designing and analyzing feedback systems with
learning capabilities. Various on-line approximation techniques and learning algorithms will be presented and illustrated, and directions for future research will be discussed.
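One standard construction from this literature can be sketched in a few lines. In the toy simulation below (the plant, the gains and the radial-basis parametrization are illustrative choices, not taken from the talk), a feedback controller learns on-line to cancel an unknown plant nonlinearity, reducing modeling uncertainty as the system runs:

```python
import numpy as np

# Unknown plant nonlinearity: the controller never sees this directly.
def f_true(x):
    return 2.0 * np.cos(x)

centers = np.linspace(-2.0, 2.0, 9)        # grid of radial basis functions
def phi(x):
    return np.exp(-(x - centers) ** 2)     # Gaussian basis vector

dt, k, gamma = 0.01, 2.0, 5.0
x = 1.5                                    # initial plant state
w = np.zeros_like(centers)                 # on-line approximator weights

for _ in range(5000):
    u = -k * x - w @ phi(x)                # stabilizing feedback + learned cancellation
    w = w + dt * gamma * x * phi(x)        # Lyapunov-based adaptive law
    x = x + dt * (f_true(x) + u)           # Euler step of the plant x' = f(x) + u

print(abs(x))  # the state is regulated close to the origin despite the unknown f
```

The adaptive law is chosen so that a Lyapunov argument guarantees boundedness: the weight update speed is proportional to the regulation error, so learning slows down exactly as the uncertainty is absorbed.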
Speaker: Prof. Marios
Polycarpou is a Professor and Interim Chair of the Department of Electrical and Computer Engineering at the University of Cyprus. He received the B.A. degree in Computer Science and the B.Sc. in Electrical
Engineering from Rice University, Houston, TX, USA in 1987, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California, Los Angeles, CA, in 1989 and 1992 respectively. In
1992, he joined the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, Ohio, USA where he reached the rank of Professor. In 2001 he joined the newly
established Department of Electrical and Computer Engineering at the University of Cyprus. His teaching and research interests are in computational intelligence, systems and automation, with emphasis on adaptive
control, intelligent systems, neural networks, fault diagnosis and cooperative control. Dr. Polycarpou is currently the Editor-in-Chief of the IEEE Transactions on Neural Networks. He has published 42 refereed
journal papers and more than 100 articles in refereed conference proceedings and edited books. He was the recipient of the William H. Middendorf Research Excellence Award at the University of Cincinnati (1997) and
was nominated by students for the Professor of the Year award (1996). He is a past Associate Editor of the IEEE Transactions on Neural Networks and the IEEE Transactions on Automatic Control, and he served as Vice
President, Conferences, of the IEEE Neural Network Society. His research has been funded by a number of agencies, including the European Commission, DARPA, US Air Force, American Water Works Association (AWWA),
NASA, Federal Highway Administration (FHWA), ONR, Ohio DOT, and the US Army.
Plenary Presentation on:
About the Neural Organization of Consciousness
by: Christoph von der Malsburg, Frankfurt Institute for Advanced Studies, and Computer Science Dept., University
of Southern California, Los Angeles
Abstract:
To create artificial consciousness we need to first understand its neural implementation in the brain. I will point to two issues that are crucial for
progress on this front: the data structure of brain state, and the mechanism by which the brain organizes the neural equivalent of algorithms. Both issues are at present grossly distorted by generally held
prejudices. We know that the neurons of our brain, or at least many of them, can be interpreted as elementary symbols. The single cell dogma has it that my brain's state is fully characterized by a vector of
positive numbers specifying which of the neurons are active in my brain or, equivalently, which of the elementary symbols are active in my mind at the present time. This raises the by now widely discussed binding
problem, which is but the tip of an iceberg that will eventually sink the single cell dogma. A more promising data structure is the dynamical graph, composed of nodes (which roughly correspond to the neuron-symbols) and links. Links represent relatedness between connected nodes, and they vary dynamically on the same time scale as node activities. In contrast to activity vectors, dynamical graphs constitute a
rich structural universe. The other prejudice, inherited from computer science and artificial intelligence, is the virtually complete disregard of the issue of how functional procedures (algorithms) are generated
and a narrow focus on nothing but their execution.
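A toy version of the dynamical-graph data structure described above can make the contrast with activity vectors concrete. The sketch below is an illustration of the general idea only, not the speaker's specific proposal; the update rule and time constants are assumptions. Link strengths are state variables that track node correlations on a fast time scale, so co-active "neuron-symbols" become bound by strong links:

```python
import numpy as np

n = 5
# A momentary activity pattern: nodes 0 and 1 are strongly co-active symbols,
# node 3 is weakly active, the rest are silent.
activity = np.array([1.0, 1.0, 0.0, 0.1, 0.0])
links = np.zeros((n, n))                 # dynamic link strengths (graph edges)

tau = 0.2                                # links evolve on the activity time scale
for _ in range(50):
    # Fast Hebbian link dynamics: links relax toward node correlations,
    # binding co-active nodes into a transient graph structure.
    links += tau * (np.outer(activity, activity) - links)
    np.fill_diagonal(links, 0.0)         # no self-links

print(links[0, 1], links[0, 3])          # bound pair vs. weakly related pair
```

The state of this system is the pair (activity, links), not the activity vector alone: which nodes are bound to which is itself information, which is exactly what a bare vector of positive numbers cannot express.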
I will claim that the essential function of consciousness is the ability to bring to bear all of the brain's knowledge and abilities on acute problems the
individual is facing. Essential for this is the ability to recognize the homomorphy of the actual situation with past ones in the light of all possible sub-systems and modalities. This ability is the central mechanism for organizing new functional capabilities, and it cannot even be formulated with vectors as the data structure. Important neuronal correlates of consciousness are the details of dynamic link implementation in the
brain.
Neuroinformatics: what can e-Science offer Neuroscience?
by: Professor Leslie S. Smith, University of Stirling, Scotland; UKRI IEEE NNS Chapter Chair: http://www.cs.stir.ac.uk/ieee-nns-ukri/
Abstract: Neuroinformatics is Informatics applied to Neuroscience. Yet Neuroscience has been using computers for decades, so what's
new? Unlike Physics or Genomics, Neuroscience data has (with some honourable exceptions) rarely been shared. In particular, hard-won data from recording neurons has often been recorded, analysed within a lab (or set
of collaborating labs) and then left spinning on a disk, unavailable for further analysis. Computational neuroscientists would like to have access to such datasets in order to develop and validate their models.
But this is not straightforward. Technical problems include supplying all the metadata (information about the recording) so that researchers using the data understand its provenance fully. Commercial problems
include ensuring that data supplied is secure while it still retains commercial value (for example, for pharmaceutical companies). Academic problems include ensuring that neuroscientists are given appropriate credit
for their work.
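The metadata problem can be made concrete with a small sketch. The record below is purely illustrative; its field names are hypothetical and do not follow any actual CARMEN or INCF schema. It shows the kind of provenance information that would have to travel with a shared recording for other researchers to use it safely:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RecordingMetadata:
    """Hypothetical provenance record for a shared neural recording."""
    lab: str                       # who recorded the data (credit attribution)
    species: str
    preparation: str               # e.g. "in vivo", "slice", "culture"
    electrode_type: str
    sampling_rate_hz: float
    recording_date: str            # ISO 8601 date
    analyses_applied: List[str] = field(default_factory=list)
    embargo_until: Optional[str] = None   # protects remaining commercial value

rec = RecordingMetadata(
    lab="Example Lab",
    species="rat",
    preparation="in vivo",
    electrode_type="tetrode",
    sampling_rate_hz=25000.0,
    recording_date="2006-03-14",
)
print(rec.lab, rec.sampling_rate_hz)
```

Fields like `embargo_until` and `lab` address the commercial and academic-credit problems directly, while the experimental fields give a modeller the provenance needed to judge whether a dataset suits a given model.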
Neuroinformatics is, of course, broader than simply sharing recordings of neural activity: it encompasses the use of Informatics in understanding neural systems all the way from ion channels
and microanatomy of spines through to analysing recordings from electroencephalograms. International interest in this area is growing: the International Neuroinformatics Coordinating Facility (INCF) has recently
been launched (currently made up of nine countries: the Czech Republic, Finland, Germany, Italy, Japan, Norway, Sweden, Switzerland and the United States). In the UK, a new e-Science project entitled CARMEN (Code
Analysis, Repository and Modelling for e-Neuroscience) starts in October 2006, and is a large collaboration between 11 UK Universities, with links to other research groups. This talk will discuss what
Neuroinformatics can bring to Cognitive and Computational Neuroscience, and what the CARMEN project hopes to achieve.
Neuroprostheses for the Blind
by: Hans H. Bothe Technical University of Denmark Lyngby, Denmark
The main aim of Artificial Vision is to restore some degree of sight to the profoundly blind. Since blindness can result from defects at many different points along the visual pathway, there is accordingly a wide variety of proposed models for an "Artificial Eye".
Synthetic Phenomenology: Exploiting Embodiment to Specify the Non-Conceptual Content of Visual Experience
by: Ron Chrisley Director, Centre for Research in Cognitive Science and
PAICS Research Group, Department of Informatics University of Sussex
An important, but relatively neglected, aspect of machine models of consciousness is the requirement for a scientific phenomenology, or
systematic means of characterizing the experiential states being modeled. In those few cases where the need for such a phenomenology is acknowledged, the default approach is usually to use language-based specifications,
such as "the visual experience of a red bicycle leaning against a white wall". Such specifications are problematic for several reasons: 1) they are not fine-grained enough to capture the full detail of the
experience being modelled; 2) they are overly conceptual, in that they can specify the experience only of subjects that possess the concepts used in the specification (e.g., bicycle or leaning); 3) they are
"cold" in that there is no essential connection between the experience so specified and affect, while many experiences are "hot", having constitutive implications for action; 4) they are
disembodied, in that no explicit reference is made in the specification to the kinds of abilities necessary for being in an experiential state with that content. What is needed, then, is an alternative means of
specifying the content of experience that overcomes some or all of these limitations. An obvious way to deal with problems 1) and 2) for the case of visual experiences is to use visual images as specifications.
However, it would be a mistake to think that even the non-conceptual experience a given robot is modelling is best specified by displaying the raw output of its video camera. For example, the current
"output" of a human retina contains gaps or blindspots that are not part of experience. Furthermore, our visual experience, as opposed to our retinal output, at any given time is stable, encompassing more
than the current region of foveation, and is coloured to the periphery. I propose a means of specification, a synthetic phenomenology, that does justice to these aspects of visual experience, by exploiting the
interdependencies of perception and action of both the robot and the theorist to whom the specification is presented. In this way, some progress is also made toward overcoming problems 3) and 4).