
Tutorials

Genetic Algorithms

Peter G. Anderson, PhD
Professor, Computer Science Department
Chief Scientist, Laboratory for Applied Computing
Room 74-1071
Rochester Institute of Technology
Rochester, New York 14623-5608
Phone: 585-475-2979 FAX: 585-475-5669
anderson@cs.rit.edu
http://www.cs.rit

Abstract

Genetic algorithms (GAs) solve problems in a manner inspired by "selective breeding." GAs start with a random population of problem solutions; iteratively, better solutions are selected and allowed to breed (parts of two or more good solutions are combined to form child solutions), inferior solutions are removed from the population, and the overall fitness of the population gradually increases until a suitable solution is discovered.
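The selective-breeding loop described above can be made concrete with a short program. The following is a minimal sketch in Python, not the workshop's own code: the fitness function (count the ones in a bit string), the tournament selection, the one-point crossover, and all parameter values are illustrative assumptions.

    import random

    # Minimal GA sketch (illustrative assumptions throughout): evolve
    # bit strings toward the all-ones string; fitness = number of ones.
    GENOME_LEN = 20
    POP_SIZE = 30
    MUTATION_RATE = 0.01

    def fitness(genome):
        return sum(genome)

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LEN)]

    def select(population):
        # Tournament selection: the fitter of two random members breeds.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    def crossover(mom, dad):
        # One-point crossover composes parts of two parents into a child.
        point = random.randrange(1, GENOME_LEN)
        return mom[:point] + dad[point:]

    def mutate(genome):
        # Flip each bit with small probability.
        return [1 - g if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(200):
        best = max(population, key=fitness)
        if fitness(best) == GENOME_LEN:
            break
        # The inferior half leaves the population; selected survivors breed.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        children = [mutate(crossover(select(survivors), select(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    print("generation", generation, "best fitness", fitness(best))

Running this typically reaches the all-ones string within a few dozen generations, illustrating how the population's overall fitness rises until a suitable solution appears.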

The Workshop will comprise an introduction to the techniques of genetic algorithms and the types of problems to which GAs can be applied. Special emphasis will be placed on problems such as scheduling.



Learning from data with generalization capability by neural networks and kernel methods



Speaker: Dr. Vera Kurkova
Institute of Computer Science
Academy of Sciences of the Czech Republic
Prague, Czech Republic
vera@cs.cas.cz

The goal of supervised learning is to adjust the parameters of a neural network so that it approximates, to a desired accuracy, a functional relationship between inputs and outputs by learning from a set of examples. The network should do so in such a way that it has a generalization capability, i.e., it can be used to process new data that were not used for learning. To guarantee generalization, one needs some global knowledge of the desired input/output relationship, such as its smoothness or a lack of high-frequency oscillations.
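In symbols (a standard formulation assumed here for illustration, not quoted from the lecture): given examples (x_i, y_i), i = 1, ..., m, learning minimizes the empirical error, while generalization concerns the expected error over the unknown data distribution ρ:

    \[
    \mathcal{E}_{\mathrm{emp}}(f) = \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i) - y_i\bigr)^2,
    \qquad
    \mathcal{E}(f) = \int \bigl(f(x) - y\bigr)^2 \, d\rho(x, y).
    \]

A network generalizes when a small empirical error on the sample also yields a small expected error on data not used for learning.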

The lecture will present various approaches to modelling learning with generalization capability, based on regularization methods. Learning as a regularized optimization problem will be studied for a special class of function spaces called reproducing kernel Hilbert spaces, in which many types of oscillation and smoothness conditions can be formally described. It will be shown how methods developed for treating inverse problems related to differential equations from physics can be used as tools in the mathematical theory of learning.
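As a sketch of the kind of optimization problem meant here (a standard Tychonov-type formulation, with the kernel K, the data (x_i, y_i), and the regularization parameter γ > 0 assumed for illustration), learning in a reproducing kernel Hilbert space H_K can be posed as

    \[
    \min_{f \in \mathcal{H}_K} \; \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i) - y_i\bigr)^2 + \gamma \,\|f\|_{K}^{2},
    \]

where the first term enforces fitting of the empirical data and the norm ||f||_K penalizes undesired properties such as high-frequency oscillations.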

Properties of, and relationships among, important types of regularization techniques will be described: Ivanov's regularization, based on a restriction of the space of input/output functions; Tychonov's, which adds to the empirical error term (enforcing fitting to the empirical data) a term penalizing undesired properties of the input/output function; and Miller's and Philips', which combine Ivanov's and Tychonov's methods. Various versions of the Representer Theorem, describing the form of the unique solution of the learning problem, will be derived. Algorithms based on such theorems will be discussed and compared with typical neural network algorithms designed for networks with limited model complexity.
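For the regularized problem sketched above, the Representer Theorem in its standard form (as commonly stated in the literature; the lecture's exact versions may differ) says that the unique solution is a finite kernel expansion over the training data:

    \[
    f^{*}(x) = \sum_{i=1}^{m} c_i \, K(x, x_i),
    \qquad
    (\mathbf{K} + \gamma m I)\,\mathbf{c} = \mathbf{y},
    \]

where \mathbf{K} is the m × m Gram matrix with entries K(x_i, x_j). The infinite-dimensional optimization thus reduces to a finite linear system, which is what kernel-based algorithms solve and what can be compared against neural network training with limited model complexity.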

