CarlWestman wrote:Marko - thanks for the tip, you are right, and I did notice in Audacity that it was getting clipped at the beginning. I'm going to switch back and make some manual recording level adjustments. I recently switched to the AUTO because I noticed how low it was recording, and thought AUTO might do better to get the guitar without amplifying background sounds. Guess not. Also, being close to the mic (within arm's reach) probably lends itself to being overwhelmed with a well-struck bass note.
I could be wrong, but my understanding is that auto-gain works by detecting peak sample values during analog-to-digital conversion and lowering the gain of the analog signal every time the maximum digital sample value is reached. Some auto-gain implementations also raise the gain again if a certain sample-value threshold isn't reached within a certain time. Gain (auto or manual) shouldn't have any effect on the guitar-to-background-noise ratio, unless some kind of dynamics compression or limiting is coupled with the gain. The gain setting can, however, affect the level of hiss produced in the signal path inside the recorder. The only way to eliminate background noise (short of a digital noise gate or noise-cancelling processing, which in my opinion destroy the fidelity of the recording) is to place the microphone close to the guitar: the farther it is from the guitar, the more room reflections and background noise end up in the recording. Placing it too close, on the other hand, emphasizes string noise, can make the bass response boomy, and otherwise misrepresents the sound of the instrument, so there is always a trade-off involved.
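To make the idea concrete, here is a minimal sketch of the one-way auto-gain behaviour described above: whenever an incoming sample hits full scale, the (notionally analog-domain) gain is cut and stays cut. The function name, step size, and limits are my own illustrative assumptions, not any recorder's actual firmware.

```python
FULL_SCALE = 32767   # peak magnitude of a 16-bit sample
GAIN_STEP = 0.9      # assumed multiplicative cut on each clip event
MIN_GAIN = 0.05      # assumed floor so gain never reaches zero

def auto_gain(samples, gain=1.0):
    """One-way auto-gain: reduce gain whenever a sample clips, never raise it."""
    out = []
    for s in samples:
        v = s * gain
        if abs(v) >= FULL_SCALE:                  # clipping detected
            gain = max(gain * GAIN_STEP, MIN_GAIN)
            v = max(-FULL_SCALE, min(FULL_SCALE, v))  # clamp the clipped sample
        out.append(v)
    return out, gain

# A loud transient early in the take (like the "strum a loud chord" trick
# mentioned below) pulls the gain down once, and it stays down afterwards.
_, final_gain = auto_gain([1000, 40000, 1000, 1000])
```

Note that nothing in this loop changes the ratio between the guitar and the background noise; both are scaled by the same factor, which is why gain alone can't make the recording "quieter in the background".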
A simple way to keep using auto-gain would be to strum a really loud chord right after you start recording, let it pull the gain down, and then edit the chord out of the final recording. This assumes your auto-gain works only one way (it only decreases gain).
CarlWestman wrote:MP3 is being encoded off WAV at 256 kbps, but when wav is used to overdub AVI audio in the windows movie maker, it's coming out at 192 kbps. I still thought that such differences were well out of the range of human detection. Anyway ... Thanks and I'll adjust my recorder, settings.
An uncompressed CD-quality WAV is 2 (channels) × 44100 (samples per second) × 16 (bits per sample), which is approximately 1400 kbps, and since the mp3 you posted was 128 kbps, I'm wondering what that 256 kbps refers to. Are you saying you encode the audio into a 256 kbps mp3 before using it to overdub in Windows Movie Maker?
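For reference, the CD-quality arithmetic works out like this (the variable names are just for illustration):

```python
# Bit rate of uncompressed CD-quality PCM audio.
channels = 2
sample_rate = 44100   # samples per second
bit_depth = 16        # bits per sample

bps = channels * sample_rate * bit_depth
print(bps)            # 1411200 bits per second
print(bps / 1000)     # 1411.2, i.e. roughly 1400 kbps
print(bps / 256_000)  # ~5.5x the data rate of a 256 kbps mp3
```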
I do agree with you that a 192 kbps stereo stream from any reasonable compression algorithm should be virtually indistinguishable from 256 kbps and from the original WAV. At 128 kbps some degradation starts to show in the higher frequencies (not that I could identify it in a blind test). But for someone to say that your mp3s sound better than your videos (without being able to compare the same sample), if the only difference between the two really were the audio bit rate, the bit rate in the video would have to be well under 100 kbps, possibly somewhere around 50 kbps, considering that CG recordings contain very little high-frequency content, which is the first to suffer when the bit rate goes down.
So, if there is a consistent difference between the quality of your audio-only recordings and your videos, I would guess it is caused by something other than the bit rates.