You can get 100% correct gradients if you store the screen-space X/Y derivatives of depth in your G-Buffer, then use those to reconstruct the surface position derivatives. After that you need to transform the positional derivatives into decal UV derivatives. It adds quite a bit of per-pixel computation, but it's definitely doable. I did it for my deferred texturing/decals demo and was even able to use anisotropic filtering. Start here, then follow the code through to here.
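To make the reconstruction step concrete, here's a minimal sketch of the idea in Python-as-pseudo-shader form. The helper names (`unproject`, `world_to_decal_uv`) are illustrative, not from the demo: in a real shader, `unproject` would apply your inverse view-projection matrix and `world_to_decal_uv` would apply the decal's world-to-local transform. The key trick is using the stored depth derivatives to rebuild the world positions of the two neighboring pixels, then differencing their decal UVs:

```python
def reconstruct_uv_gradients(px, py, depth, dz_dx, dz_dy,
                             unproject, world_to_decal_uv):
    """Rebuild screen-space decal UV derivatives from G-Buffer depth derivatives.

    unproject(px, py, depth)  -> world-space position of a pixel (illustrative)
    world_to_decal_uv(pos)    -> (u, v) in the decal's local space (illustrative)
    dz_dx, dz_dy              -> screen-space depth derivatives from the G-Buffer
    """
    # Reconstruct this pixel and its +X / +Y neighbors, offsetting the
    # neighbors' depth by the stored derivatives so they land on the surface.
    p   = unproject(px, py, depth)
    p_x = unproject(px + 1.0, py, depth + dz_dx)
    p_y = unproject(px, py + 1.0, depth + dz_dy)

    # Map all three positions into decal UV space and difference them to
    # get the UV derivatives along screen X and Y.
    uv = world_to_decal_uv(p)
    duv_dx = tuple(a - b for a, b in zip(world_to_decal_uv(p_x), uv))
    duv_dy = tuple(a - b for a, b in zip(world_to_decal_uv(p_y), uv))
    return duv_dx, duv_dy
```

In HLSL you would then feed `duv_dx`/`duv_dy` to `Texture2D.SampleGrad`, which gives you hardware mip selection and (with an aniso sampler) anisotropic filtering.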
For something cheaper, you can just build a function that chooses the mip level based on the distance from the camera to the pixel as well as the size and scale of the decal texture(s), then pass that mip level to SampleLevel. This will generally give you worse-quality mip selection than proper derivatives, and won't let you use anisotropic filtering. But it will let you avoid edge artifacts, and it can be pretty cheap.
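One way to build such a function is to estimate how many texels of the decal fall under one screen pixel at a given distance, then take the log2 of that ratio. This is a hedged sketch, not the demo's code, and all the parameter names are illustrative; it assumes the decal is viewed roughly head-on, which is exactly why it's cheaper but less accurate than real derivatives:

```python
import math

def decal_mip_from_distance(distance, decal_world_size, texture_res,
                            screen_height, fov_y, num_mips):
    """Approximate decal mip level from camera distance alone (illustrative).

    distance         -> camera-to-pixel distance in world units
    decal_world_size -> world-space extent the decal texture covers
    texture_res      -> decal texture resolution in texels
    screen_height    -> render target height in pixels
    fov_y            -> vertical field of view in radians
    num_mips         -> number of mips in the decal texture
    """
    # World-space size covered by one screen pixel at this distance.
    world_per_pixel = 2.0 * distance * math.tan(fov_y * 0.5) / screen_height
    # How many decal texels that pixel footprint spans.
    texels_per_pixel = (texture_res / decal_world_size) * world_per_pixel
    # Standard mip formula: log2 of the footprint, clamped to the mip chain.
    mip = math.log2(max(texels_per_pixel, 1e-8))
    return min(max(mip, 0.0), num_mips - 1.0)
```

You'd compute this per pixel in the decal pass and pass the result straight to `SampleLevel`. Because it ignores the surface orientation, it will over-blur decals seen at grazing angles, which is the quality trade-off mentioned above.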