The Free On-line Dictionary of Computing (30 December 2018):
Nyquist Theorem
A theorem stating that when an analogue
waveform is digitised, only the frequency components of the
waveform below half the sampling frequency can be captured
faithfully. In
order to reconstruct (interpolate) a signal from a sequence of
samples, sufficient samples must be recorded to capture the
peaks and troughs of the original waveform. If a component of
the waveform is sampled at less than twice its frequency, it
folds back into the reconstruction as a spurious lower
frequency, distorting the result. This phenomenon is called
"aliasing" (the high frequencies appear "under an alias").
This is why CD-quality digital audio is sampled at 44,100 Hz,
slightly more than twice the nominal 20,000 Hz upper limit of
human hearing.
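For example, a 7,000 Hz tone sampled at 10,000 Hz produces
exactly the same sample values as a 3,000 Hz tone, because
7,000 Hz lies above the 5,000 Hz Nyquist limit and folds back
to 10,000 - 7,000 = 3,000 Hz. The following Python sketch
(frequencies chosen purely for illustration) demonstrates this:

  import numpy as np

  fs = 10_000.0          # sampling frequency, Hz (illustrative)
  f_high = 7_000.0       # tone above the Nyquist limit fs/2 = 5 kHz
  f_alias = fs - f_high  # frequency it folds back to (3 kHz)

  n = np.arange(32)                        # sample indices
  t = n / fs                               # sample times
  high = np.cos(2 * np.pi * f_high * t)    # samples of the 7 kHz tone
  low = np.cos(2 * np.pi * f_alias * t)    # samples of the 3 kHz tone

  print(np.allclose(high, low))            # True: indistinguishable
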
The Nyquist Theorem is not specific to digitised signals
(represented by discrete amplitude levels) but applies to any
sampled signal (represented by discrete time values), not just
sound.
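Given a sufficient sampling rate, the original waveform can be
rebuilt from its samples alone by summing shifted sinc kernels
(Whittaker-Shannon interpolation). The sketch below (again with
arbitrary illustrative frequencies, and a finite window standing
in for the theorem's infinite sum) recovers a 3 kHz tone sampled
at 10 kHz at points between the samples:

  import numpy as np

  fs = 10_000.0             # sampling frequency, Hz (illustrative)
  f0 = 3_000.0              # signal frequency, below the Nyquist limit fs/2
  n = np.arange(-200, 200)  # sample indices (finite window of the infinite sum)
  samples = np.sin(2 * np.pi * f0 * n / fs)

  def reconstruct(t):
      # x(t) = sum over n of x[n] * sinc(fs*t - n)
      return np.sum(samples * np.sinc(fs * t - n))

  t_between = (np.arange(80) + 0.5) / (8 * fs)  # points between the samples
  rebuilt = np.array([reconstruct(t) for t in t_between])
  exact = np.sin(2 * np.pi * f0 * t_between)

  print(np.max(np.abs(rebuilt - exact)))  # small: limited only by the truncated sum
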
Nyquist
(http://geocities.com/bioelectrochemistry/nyquist.htm)
(about the man; somewhat inaccurate).
(2003-10-21)