The performance of single-server queues with independent interarrival times and service demands is well understood and often analytically tractable; the M/M/1 queue, in particular, has been studied thoroughly because of its tractability. Much less is known, however, once autocorrelation is introduced into interarrival times or service demands, since analytical tractability is then generally lost. Even the simple case of an M/M/1 queue with autocorrelation does not appear to be well understood. Such autocorrelations abound in real-life systems, and, worse, simplifying independence assumptions can lead to very poor estimates of performance measures. This paper reports the results of a simulation study of the impact of autocorrelation on the performance of a FIFO queue. The study uses two computational methods for generating autocorrelated random sequences with different autocorrelation characteristics. The simulation results show that injecting autocorrelation into interarrival times, and to a lesser extent into service demands, can have a dramatic impact on performance measures. From a performance viewpoint these effects are generally deleterious, and their magnitude depends on the method used to generate the autocorrelated process. The paper discusses these empirical results and makes recommendations to practitioners of performance analysis of queuing systems.
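The abstract does not name the two sequence-generation methods, so the sketch below is illustrative only: it assumes a Gaussian AR(1) process pushed through the inverse exponential CDF (one common way to impose autocorrelation while keeping exponential marginals), uses hypothetical function names (ar1_exponential, mean_wait_fifo) and arbitrarily chosen traffic parameters, and estimates the mean waiting time of a FIFO single-server queue with the Lindley recursion so that increasing lag-1 autocorrelation in interarrival times can be compared against the uncorrelated baseline.

# Minimal illustrative sketch, not the paper's methods: autocorrelated
# exponential interarrival times via a latent Gaussian AR(1) process, fed
# into a FIFO single-server queue simulated with the Lindley recursion.
# Function names and parameter values are assumptions made for illustration.
import numpy as np
from scipy.stats import norm

def ar1_exponential(n, rho, rate, rng):
    """Exponential(rate) marginals whose autocorrelation is induced by a
    stationary Gaussian AR(1) process with lag-1 parameter rho."""
    z = np.empty(n)
    z[0] = rng.standard_normal()
    noise = rng.standard_normal(n) * np.sqrt(1.0 - rho ** 2)
    for t in range(1, n):
        z[t] = rho * z[t - 1] + noise[t]
    u = norm.cdf(z)                      # autocorrelated uniforms
    return -np.log1p(-u) / rate          # inverse exponential CDF

def mean_wait_fifo(interarrivals, services):
    """Mean waiting time from the Lindley recursion
    W[k+1] = max(0, W[k] + S[k] - A[k+1])."""
    w, total = 0.0, 0.0
    for a, s in zip(interarrivals[1:], services[:-1]):
        w = max(0.0, w + s - a)
        total += w
    return total / (len(interarrivals) - 1)

rng = np.random.default_rng(1)
n, lam, mu = 200_000, 0.7, 1.0           # arrival rate 0.7, service rate 1.0
services = rng.exponential(1.0 / mu, n)  # service demands kept independent here
for rho in (0.0, 0.5, 0.9):              # rho = 0 reproduces the uncorrelated M/M/1 case
    arrivals = ar1_exponential(n, rho, lam, rng)
    print(f"rho = {rho:.1f}: mean wait ~ {mean_wait_fifo(arrivals, services):.2f}")

Because the marginal rates, and hence the utilization, are held fixed across runs, any change in the estimated mean waiting time is attributable to the injected autocorrelation rather than to a change in offered load.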