I ran into a problem when testing my program on an AMD GPU. On Nvidia and Intel HD Graphics everything works fine, but on AMD the problem appears exactly at the point where the textures are bound. Because of it, the shader receives no shadow maps and only a black screen is visible. The texture IDs and all other parameters are loaded successfully. Below are the relevant code snippets.
Here is the problematic part of the rendering code:
#define cfgtex(texture, internalformat, format, width, height) glBindTexture(GL_TEXTURE_2D, texture); \
glTexImage2D(GL_TEXTURE_2D, 0, internalformat, width, height, 0, format, GL_FLOAT, NULL);
void render() {
    // bind shadow cube maps; startId = 10
    for (GLuint i = 0; i < count; i++) {
        glUniform1i(samplersLocations[i], startId + i);
        glActiveTexture(GL_TEXTURE0 + startId + i);
        glBindTexture(GL_TEXTURE_CUBE_MAP, texturesIds[i]);
    }

    renderer.mainPass(displayFB, rbo);

    cfgtex(colorTex, GL_RGBA16F, GL_RGBA, params.scrW, params.scrH);
    cfgtex(dofTex, GL_R16F, GL_RED, params.scrW, params.scrH);
    cfgtex(normalTex, GL_RGB16F, GL_RGB, params.scrW, params.scrH);
    cfgtex(ssrValues, GL_RG16F, GL_RG, params.scrW, params.scrH);
    cfgtex(positionTex, GL_RGB16F, GL_RGB, params.scrW, params.scrH);

    glClear(GL_COLOR_BUFFER_BIT);
    glClearBufferfv(GL_COLOR, 1, ALGINE_RED); // dof buffer

    // viewport to window size
    glViewport(0, 0, WIN_W, WIN_H);

    // update the view matrix (the camera position has changed)
    createViewMatrix();

    // send the lamps' parameters to the fragment shader
    sendLampsData();

    glEnableVertexAttribArray(cs.inPosition);
    glEnableVertexAttribArray(cs.inNormal);
    glEnableVertexAttribArray(cs.inTexCoord);

    // drawing
    //glUniform1f(ALGINE_CS_SWITCH_NORMAL_MAPPING, 1); // with mapping
    glEnableVertexAttribArray(cs.inTangent);
    glEnableVertexAttribArray(cs.inBitangent);

    for (size_t i = 0; i < MODELS_COUNT; i++) drawModel(models[i]);
    for (size_t i = 0; i < LAMPS_COUNT; i++) drawModel(lamps[i]);

    glDisableVertexAttribArray(cs.inPosition);
    glDisableVertexAttribArray(cs.inNormal);
    glDisableVertexAttribArray(cs.inTexCoord);
    glDisableVertexAttribArray(cs.inTangent);
    glDisableVertexAttribArray(cs.inBitangent);

    ...
}
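For clarity, cfgtex is just a two-statement macro (bind plus storage reallocation), so, for example, the colorTex line above expands to:

glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, params.scrW, params.scrH, 0, GL_RGBA, GL_FLOAT, NULL);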
Here is the code of renderer.mainPass:
void mainPass(GLuint displayFBO, GLuint rboBuffer) {
    glBindFramebuffer(GL_FRAMEBUFFER, displayFBO);
    glBindRenderbuffer(GL_RENDERBUFFER, rboBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, params->scrW, params->scrH);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboBuffer);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
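To rule out framebuffer problems on AMD, the completeness of displayFBO can also be checked right after the attachment call (a minimal sketch; assumes <cstdio> for printf):

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    printf("displayFBO is not complete, status = 0x%x\n", status);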
And the relevant GLSL (fragment shader):
#version 400 core
...
uniform samplerCube shadowMaps[MAX_LAMPS_COUNT];
The shaders compile without any errors. As far as I understand, the textures simply do not get bound for some reason. The depth maps themselves are rendered correctly.
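To verify where it fails, an error check can be added right after the bind calls in the loop above (a minimal sketch; note that glGetError only returns errors generated since its previous call, and the example assumes <cstdio> for printf):

glActiveTexture(GL_TEXTURE0 + startId + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, texturesIds[i]);

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("binding texturesIds[%u] failed with error 0x%x\n", i, err);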
I access the elements of the array as follows:
for (int i = 0; i < count; i++) {
    ...
    depth = texture(shadowMaps[i], fragToLight).r;
    ...
}
I also found that the black screen occurs whenever the samplerCube array is larger than the number of textures actually bound. For example, with MAX_LAMPS_COUNT = 2 and count = 1, i.e.

uniform samplerCube shadowMaps[2];

glUniform1i(samplersLocations[0], startId + 0);
glActiveTexture(GL_TEXTURE0 + startId + 0);
glBindTexture(GL_TEXTURE_CUBE_MAP, texturesIds[0]);

I get a black screen.
But if MAX_LAMPS_COUNT = 1 (uniform samplerCube shadowMaps[1]), the shadows appear, but a new problem arises, shown in the screenshot below:

[screenshot of the rendered scene]

Do not pay attention to the greenish tint; it is caused by incorrect color-correction settings for the video card.
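To check whether the unused elements of shadowMaps are optimized out or reported differently by the AMD driver, their locations could be queried one by one (a sketch; program here is assumed to be the linked shader program handle, and <string> and <cstdio> are assumed to be included):

for (int i = 0; i < MAX_LAMPS_COUNT; i++) {
    std::string name = "shadowMaps[" + std::to_string(i) + "]";
    GLint location = glGetUniformLocation(program, name.c_str());
    printf("%s -> location %d\n", name.c_str(), location); // -1 means the element is inactive
}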
Any ideas? I would be grateful for any help.