Automatic speech recognition converts speech into a word sequence through algorithms running on embedded systems. Recognition accuracy correlates strongly with the signal processing algorithms used to improve speech quality and intelligibility, and one way to reduce the word error rate is to apply noise attenuation algorithms. In this paper, we explore alternative setups for predicting clean speech from noisy speech through nonlinear regression with deep neural networks (DNNs). We built a multi-condition training and test database with three noise types and four main signal-to-noise ratio (SNR) values. To increase the generalization capacity of the network, we also added to the training set signals with random SNRs drawn from a uniform distribution between 0 and 15 dB, and we tested on random-SNR signals generated under the same conditions. We evaluate performance with three objective metrics and compare the results against classical algorithms such as shallow neural networks, Wiener filtering, and spectral subtraction. The results show that the DNN method outperforms the classical algorithms at all SNR values, including mismatched scenarios with random SNRs.
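The multi-condition data described above relies on mixing clean speech with noise at a controlled SNR. The following is a minimal sketch of how such mixtures, including the random SNRs drawn uniformly from 0 to 15 dB, could be generated; the function name and the use of NumPy are our own assumptions, not part of the paper's implementation.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so that clean + noise has the requested SNR in dB.

    Hypothetical helper: SNR is defined here as the ratio of average
    clean-signal power to average noise power.
    """
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Gain that brings the noise power to p_clean / 10^(snr_db / 10).
    gain = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    scaled_noise = gain * noise
    return clean + scaled_noise, scaled_noise

# Example: a synthetic "clean" tone mixed at a random SNR in [0, 15] dB,
# mirroring the uniform-distribution setup described in the abstract.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000.0)
noise = rng.standard_normal(16000)
snr_db = rng.uniform(0.0, 15.0)

noisy, scaled_noise = mix_at_snr(clean, noise, snr_db)
achieved = 10.0 * np.log10(np.mean(clean ** 2) / np.mean(scaled_noise ** 2))
```

In a real pipeline, the resulting `noisy` frames would serve as DNN inputs with the corresponding `clean` frames as regression targets.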