In many applications of artificial neural networks it is not clear a priori what network architecture to use, and much time can be spent searching for appropriate values of the training parameters. This paper describes a Genetic Algorithm for performing these tasks, developed and tested in the first instance on the representation of the XOR problem by back-propagation. In an earlier paper the authors showed that this approach rapidly produces an architecture and a set of training parameters which on average take 50% fewer iterations than the ‘classical’ implementation given by Rumelhart and McClelland. The present paper describes further work applying the methodology to pattern recognition problems in which the focus shifts from representation to generalisation. The results point to the need for a fitness function appropriate to the task, and suggest that, provided this can be done, a GA is an excellent approach to the problem of configuring and training the ubiquitous multi-layer perceptron.
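The following is a minimal sketch, not the authors' implementation, of the idea summarised above: a genetic algorithm whose genome encodes a hidden-layer size and a learning rate, with fitness taken as the (negated) number of back-propagation iterations needed to represent XOR. All population sizes, mutation rates, parameter ranges, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_iterations(hidden, lr, max_iters=5000, tol=0.1):
    """Train a 2-hidden-1 MLP by plain back-propagation and return the
    number of iterations needed to represent XOR (max_iters on failure)."""
    W1 = rng.normal(scale=0.5, size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for it in range(1, max_iters + 1):
        h = sigmoid(X @ W1 + b1)           # forward pass
        out = sigmoid(h @ W2 + b2)
        err = out - y
        if np.all(np.abs(err) < tol):      # XOR represented to tolerance
            return it
        d_out = err * out * (1 - out)      # backward pass
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return max_iters

def fitness(genome):
    hidden, lr = genome
    # Representation task: fewer training iterations -> higher fitness.
    return -train_iterations(hidden, lr)

# Genome: (hidden units, learning rate). GA loop: select, cross over, mutate.
pop = [(int(rng.integers(1, 9)), float(rng.uniform(0.05, 2.0)))
       for _ in range(10)]
for gen in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                      # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        pa, pb = rng.choice(len(parents), 2, replace=False)
        hidden, lr = parents[pa][0], parents[pb][1]  # one gene per parent
        if rng.random() < 0.3:             # mutation
            hidden = max(1, hidden + int(rng.integers(-1, 2)))
            lr = max(0.01, lr * float(rng.uniform(0.5, 1.5)))
        children.append((hidden, lr))
    pop = parents + children

best = max(pop, key=fitness)
print("best genome (hidden units, learning rate):", best)
```

For the generalisation-focused pattern recognition problems the abstract mentions, the fitness function above would be swapped for one measuring performance on held-out data rather than iterations to representation, consistent with the paper's point that the fitness function must be matched to the task.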