Speech recognition is a field of research that deals with the automatic understanding of speech by machines. During the last decades, speech recognition has flourished, both for its potential applications and for the improvement of recognition systems. Current automatic speech recognition (ASR) systems are highly reliable when tested in controlled conditions (normally this implies a high signal-to-noise ratio and similar conditions in training and testing), but accuracy degrades significantly when speech is subject to sources of variability. Among the possible sources of variability, in this work we study those typically introduced by the transmission channel, and more specifically those that completely remove parts of the spectrum (we call these band-limiting distortions). Band-limiting distortions may appear, for example, in historical recordings, where, due to the limited capabilities of recording equipment and storage units, low sampling frequencies may constrain the available bandwidth below 4 or even 2 kHz. Telephone-transmitted signals are another example of band-limited speech, and similarly, signals transmitted from on-board systems (such as cars or aeroplanes) may be subject to different filters. One of the keys to success in ASR is to achieve similar conditions between the training data used to obtain reliable statistical acoustic models and the test speech. In particular, when speech is subject to distortions such as noise or filtering, models trained under different conditions (for example, on clean, full-bandwidth data) will suffer an important degradation in accuracy. An important means of robustness is a reliable signal parameterizer that extracts the information relevant to the linguistic content of speech while discarding information that is irrelevant to ASR, such as external noises, filters, etc.
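To make the notion of a band-limiting distortion concrete, the complete removal of part of the spectrum can be simulated with an ideal ("brick-wall") low-pass filter that zeroes all spectral content above a cutoff frequency. The following is a minimal illustrative sketch in Python; the function name and parameters are our own, not taken from the original work:

```python
import numpy as np

def band_limit(signal, sample_rate, cutoff_hz):
    """Simulate a band-limiting distortion by zeroing all spectral
    content above cutoff_hz (an ideal 'brick-wall' low-pass filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0  # remove the upper band entirely
    return np.fft.irfft(spectrum, n=len(signal))

# Example: one second of a 16 kHz signal containing a 1 kHz and a 6 kHz
# tone; after limiting the band to 4 kHz, only the 1 kHz tone survives.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 6000 * t)
y = band_limit(x, fs, 4000)
```

A real telephone channel is of course not an ideal filter, but this kind of artificial band-limiting is what allows controlled evaluation on full-bandwidth corpora.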
In addition, when the parameterizer itself is not able to remove all the undesired variability, it is possible to modify the acoustic models to match the conditions of the input speech or, conversely, to modify the distorted speech features to resemble the conditions of the data used to train the acoustic models. In our work we study different implementations of the latter approach and compare results to those obtained with model-side robustness. We also show that it is possible to combine both types of approaches for increased accuracy. Feature compensation may outperform model-based solutions in accuracy and usability in particular conditions. For example, when the number of distortions affecting the test data is large, it allows a single speech recognizer to remain active while speech affected by the different distortions is compensated to resemble undistorted speech. Additionally, when training data are insufficient, our algorithms have been shown to offer competitive or even superior accuracy. Also, in portable devices, where memory and computational load are important limitations, feature compensation offers a light and reliable solution: it allows the storage of a single set of acoustic models (as shown in our work, the memory required to store the corrector functions is typically one to two orders of magnitude below that of a full system) and performs ASR with only one recognition engine. In our work we propose algorithms for feature compensation whose common ground is the learning of a transformation between the distorted (band-limited in spectrum and possibly affected by additive noise as well) and undistorted (full-bandwidth and clean) feature spaces. This transformation is applied to distorted features in order to obtain pseudo-undistorted features, so that they can be used for recognition with models trained under standard conditions.
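The simplest instance of such a corrector is an affine transformation estimated by least squares from stereo data, i.e. time-aligned pairs of distorted and clean feature vectors of the same utterances. The sketch below illustrates this idea only; the thesis studies several, more elaborate variants of corrector functions, and all names here are hypothetical:

```python
import numpy as np

def learn_compensation(distorted, clean):
    """Learn an affine corrector y ~ W x + b from stereo data: rows of
    `distorted` and `clean` are time-aligned feature vectors (e.g.
    cepstra) of the same utterances. Returns the stacked [W; b] matrix."""
    X = np.hstack([distorted, np.ones((len(distorted), 1))])  # append bias
    W, *_ = np.linalg.lstsq(X, clean, rcond=None)
    return W

def compensate(distorted, W):
    """Map distorted features toward the undistorted feature space,
    yielding pseudo-undistorted features for a standard recognizer."""
    X = np.hstack([distorted, np.ones((len(distorted), 1))])
    return X @ W
```

If the distortion really were affine and invertible in the feature domain, this estimator would recover it exactly; in practice the mapping is only approximate, which motivates the richer corrector functions studied in this work.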
We propose different solutions that meet the possible constraints of real systems, such as the availability of stereo data for training (speech samples recorded simultaneously in clean and band-limited conditions), training data scarcity, memory limitations, blind classification and compensation of multiple distortions, etc. A large number of experiments have been conducted that shed light on the potential and performance of the different settings and variations of the proposed feature compensation approaches, across a variety of problems and conditions. Results are always compared to those of other possible solutions to the problem, consisting of classical robustness methods such as Cepstral Mean Normalization and model-side robustness (model adaptation and retraining). Evaluation is performed by applying artificial band-limiting filters to full-bandwidth data, as well as with real telephone data, which poses more challenging conditions due to the presence of multiple distortions (convolutional distortions and additive noise).
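Cepstral Mean Normalization, used here as a classical baseline, subtracts the per-utterance mean of each cepstral coefficient; since a stationary convolutional channel appears as a constant additive offset in the cepstral domain, removing the mean removes that channel component. A minimal sketch (illustrative, not the exact implementation used in the experiments):

```python
import numpy as np

def cmn(cepstra):
    """Cepstral Mean Normalization: subtract the per-utterance mean of
    each cepstral coefficient (rows = frames, columns = coefficients).
    A stationary convolutional channel adds a constant cepstral offset,
    which this removes."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```

Note that CMN only addresses a constant convolutional distortion; it cannot restore spectral bands that a band-limiting filter has removed entirely, which is why the feature compensation approaches above are needed.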