OpenGL projective texture mapping via shaders

Date: 2014-03-29 15:03:15

Tags: c++ opengl mapping textures glsl

I am trying to implement a simple projective texture mapping approach using shaders in OpenGL 3+. While there are some examples on the web, I am having trouble creating a working example with shaders.

I actually plan to use two shaders: one for normal scene drawing and another for projective texture mapping. I have a function that draws the scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), in which I switch between the shaders with glUseProgram(). Normal drawing works fine. However, it is not clear to me how I should render the projective texture on top of an already textured cube. Do I have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?

I also don't think my projective texture mapping shaders are correct, because the second time I render the cube it shows up black. Furthermore, I tried debugging with colors, and only the t component seems to be non-zero (so the cube appears green). I am overriding texColor in the fragment shader below for debugging purposes only.

VertexShader

#version 330

uniform mat4 TexGenMat;
uniform mat4 InvViewMat;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;

layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;

out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;

void main()
{
    vNormal = (N * vec4(inNormal, 0.0)).xyz;

    vec4 posEye    = MV * vec4(inPosition, 1.0);
    vec4 posWorld  = InvViewMat * posEye;
    projCoords     = TexGenMat * posWorld;

    // only needed for specular component
    // currently not used
    eyeVec = -posEye.xyz;

    gl_Position = P * MV * vec4(inPosition, 1.0);
}
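For context on why the vertex shader multiplies by InvViewMat: TexGenMat expects world-space positions, while MV produces eye-space ones, so the inverse view matrix undoes the camera transform. A minimal CPU-side sketch of this round trip, using translation-only matrices so the inverse is trivial (all names here are illustrative, not part of the original code):

```cpp
#include <array>
#include <cassert>

// 4x4 matrix in row-major order, acting on column vectors.
using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Translation matrix; its inverse is a translation by the negated offset.
Mat4 translate(double x, double y, double z) {
    return {{{1, 0, 0, x}, {0, 1, 0, y}, {0, 0, 1, z}, {0, 0, 0, 1}}};
}

// InvViewMat * (V * M * p) recovers the world-space position M * p,
// which is exactly what the vertex shader feeds into TexGenMat.
```
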

FragmentShader

#version 330

uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;

in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;

out vec4 outputColor;

struct DirectionalLight
{
    vec3 vColor;
    vec3 vDirection;
    float fAmbientIntensity;
};

uniform DirectionalLight sunLight;

void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values..why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}

Creating the TexGen matrix

biasMatrix = glm::mat4(0.5f, 0, 0, 0.5f,
                       0, 0.5f, 0, 0.5f,
                       0, 0, 0.5f, 0.5f,
                       0, 0, 0, 1);

    // 4:3 perspective with 45 fov
    projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
    projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
    projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
    projectorV = glm::lookAt(projectorOrigin, // projector origin
                                    projectorTarget,     // project on object at origin 
                                    glm::vec3(0.0f, 1.0f, 0.0f)   // Y axis is up
                                    );
    mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);
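For reference, the bias matrix's only job is to remap coordinates from the [-1, 1] clip-space range into the [0, 1] range that texture sampling expects (one thing worth double-checking in glm code: the scalar mat4 constructor fills the matrix column by column, so a literal written out in row order ends up transposed). The per-component remap it performs, as a tiny sketch:

```cpp
#include <cassert>

// Bias remap: after the perspective divide, clip-space coordinates lie
// in [-1, 1]; scaling by 0.5 and offsetting by 0.5 moves them into the
// [0, 1] range usable as texture coordinates.
double biasRemap(double ndc) {
    return 0.5 * ndc + 0.5;
}
```
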

Rendering the cube again

It is also not clear to me what the model-view of the cube should be: should it use the projector's view matrix (as it does now) or the normal camera's? Currently the cube is rendered black (or green, when debugging) in the middle of the scene view, as if seen from the projector (I made a toggle hotkey so I can see what the projector "sees"). The cube also moves with the view. How do I get the projection cast onto the cube itself?

mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);

Switching between the main scene camera and the projector

if (useMainCam)
{
    mCurrent   = glm::mat4(1.0f);
    mModelView = mModelView*mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView  = projectorV;
    mProjection = projectorP;
}

1 Answer:

Answer 0 (score: 5):

I have solved the problem. One issue I had was that I mixed up the matrices of the two camera systems (the world camera and the projective-texture camera). Now, when I set the uniforms for the projective texture mapping part, I use the correct matrices for the MVP values: the same ones I use for the world scene.

glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));

Furthermore, invViewMatrix is just the inverse of the view matrix, not of the model-view (in my case this did not change the behavior, since the model is the identity, but it was wrong). For my project I wanted to selectively render only some objects with the projective texture. To achieve this, for each such object I have to make sure the current shader program is the projective texture mapping one, using glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:

texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);

Back to the shaders: the vertex shader is correct, except that I re-added the UV texture coordinates (inCoord) for the current object and store them in texCoord.

For the fragment shader, I changed the main function to clamp the projective texture so that it doesn't repeat (I could not get it to work with the client-side GL_CLAMP_TO_EDGE), and I also fall back to the object's default texture and UV coordinates wherever the projector does not cover the object (I also removed the lighting from the projective texture, since it is not needed in my case):

void main (void)
{
    vec2 finalCoords    = projCoords.st / projCoords.q;
    vec4 vTexColor      = texture(gSampler, texCoord);
    vec4 vProjTexColor  = texture(projMap, finalCoords);
    //vec4 vProjTexColor  = textureProj(projMap, projCoords);
    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));

    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // CLAMP PROJECTIVE TEXTURE (for some reason gl_clamp did not work...)
        if(projCoords.s > 0 && projCoords.t > 0 && finalCoords.s < 1 && finalCoords.t < 1)
            //outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor*vColor;
        else
            outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
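The branch structure above boils down to a single decision: a fragment receives the projected texture only when it lies in front of the projector (q > 0) and its coordinates fall inside the projector's [0, 1] texture window; otherwise it keeps the object's own texture. A CPU-side sketch of that decision (the function name is illustrative):

```cpp
#include <cassert>

// Mirrors the fragment shader's clamping logic: true means "use the
// projected texture", false means "fall back to the object's texture".
// Given q > 0, checking s > 0 before the divide is equivalent to
// checking s / q > 0 after it (likewise for t).
bool receivesProjection(double s, double t, double q) {
    if (q <= 0.0) return false;        // reverse projection: suppress
    double fs = s / q, ft = t / q;     // perspective divide
    return s > 0.0 && t > 0.0 && fs < 1.0 && ft < 1.0;
}
```
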

If for some reason you can't get the shaders to work, you can check out an example in the "OpenGL 4.0 Shading Language Cookbook" (texture chapter); I actually missed this until I got it working on my own.

In addition to all of the above, a great aid for debugging whether the algorithm works is to draw the frustum of the projector camera (as a wireframe). I used a separate shader for this. The fragment shader just assigns a solid color, while the vertex shader is listed below with an explanation in the comments:

#version 330

// input vertex data
layout(location = 0) in vec3 vp;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
    /*The transformed clip space position c of a
    world space vertex v is obtained by transforming 
    v with the product of the projection matrix P 
    and the modelview matrix MV

    c = P MV v

    So, if we could solve for v, then we could 
    generate vertex positions by plugging in clip 
    space positions. For your frustum, one line 
    would be between the clip space positions 

    (-1,-1,near) and (-1,-1,far), 

    the lower left edge of the frustum, for example.

    NB: If you would like to mix normalized device 
    coords (x,y) and eye space coords (near,far), 
    you need an additional step here. Modify your 
    clip position as follows

    c' = (c.x * c.z, c.y * c.z, c.z, c.z)

    otherwise you would need to supply both the z 
    and w for c, which might be inconvenient. Simply 
    use c' instead of c below.


    To solve for v, multiply both sides of the equation above with 

          -1       
    (P MV) 

    This gives

          -1      
    (P MV)   c = v

    This is equivalent to

      -1  -1      
    MV   P   c = v

     -1
    P   is given by

    |(r-l)/(2n)     0         0      (r+l)/(2n) |
    |     0    (t-b)/(2n)     0      (t+b)/(2n) |
    |     0         0         0         -1      |
    |     0         0   -(f-n)/(2fn) (f+n)/(2fn)|

    where l, r, t, b, n, and f are the parameters in the glFrustum() call.

    If you don't want to fool with inverting the 
    model matrix, the info you already have can be 
    used instead: the forward, right, and up 
    vectors, in addition to the eye position.

    First, go from clip space to eye space

         -1   
    e = P   c

    Next go from eye space to world space

    v = eyePos - forward*e.z + right*e.x + up*e.y

    assuming x = right, y = up, and -z = forward.
    */
    vec4 fVp = invMV * invP * vec4(vp, 1.0);
    gl_Position = P * MV * fVp;
}
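The analytic inverse given in the shader comment can be sanity-checked numerically: build a glFrustum-style projection matrix, build its inverse from the formula, and confirm that their product is the identity. A standalone sketch (row-major matrices acting on column vectors; the frustum parameters in the test are chosen arbitrarily):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// glFrustum-style perspective matrix.
Mat4 frustum(double l, double r, double b, double t, double n, double f) {
    return {{
        {2*n/(r-l), 0,         (r+l)/(r-l),  0},
        {0,         2*n/(t-b), (t+b)/(t-b),  0},
        {0,         0,        -(f+n)/(f-n), -2*f*n/(f-n)},
        {0,         0,        -1,            0},
    }};
}

// The analytic inverse from the shader comment above.
Mat4 frustumInverse(double l, double r, double b, double t, double n, double f) {
    return {{
        {(r-l)/(2*n), 0,           0,              (r+l)/(2*n)},
        {0,           (t-b)/(2*n), 0,              (t+b)/(2*n)},
        {0,           0,           0,             -1},
        {0,           0,          -(f-n)/(2*f*n),  (f+n)/(2*f*n)},
    }};
}
```
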

The uniforms are used like this (make sure you use the correct matrices):

// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));

To get the input vertices needed by the frustum vertex shader, you can do the following to generate the coordinates (and then add them to your vertex array):

glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right

glm::vec3   frustum_coords[36] = {
    // near
    ntl, nbl, ntr, // 1 triangle
    ntr, nbl, nbr,
    // right
    nbr, ftr, ntr,
    ftr, nbr, fbr,
    // left
    nbl, ftl, ntl,
    ftl, nbl, fbl,
    // far
    ftl, fbl, fbr,
    fbr, ftr, ftl,
    //bottom
    nbl, fbr, fbl,
    fbr, nbl, nbr,
    //top
    ntl, ftr, ftl,
    ftr, ntl, ntr
};
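As a sanity check, those 36 entries should form the 12 triangles of a box whose vertices are exactly the 8 corner points defined above. A small sketch that rebuilds the same list (with arbitrary near/far values; the struct and function names are illustrative) and verifies every vertex is one of 8 distinct corners:

```cpp
#include <array>
#include <set>
#include <cassert>

struct V3 {
    double x, y, z;
    bool operator<(const V3& o) const {
        if (x != o.x) return x < o.x;
        if (y != o.y) return y < o.y;
        return z < o.z;
    }
};

// Rebuilds the 36-vertex triangle list from the answer.
std::array<V3, 36> frustumCoords(double pNear, double pFar) {
    V3 ftl{-1, +1, pFar},  fbr{+1, -1, pFar},  fbl{-1, -1, pFar},  ftr{+1, +1, pFar};
    V3 ntl{-1, +1, pNear}, nbr{+1, -1, pNear}, nbl{-1, -1, pNear}, ntr{+1, +1, pNear};
    return {{ntl, nbl, ntr, ntr, nbl, nbr,   // near
             nbr, ftr, ntr, ftr, nbr, fbr,   // right
             nbl, ftl, ntl, ftl, nbl, fbl,   // left
             ftl, fbl, fbr, fbr, ftr, ftl,   // far
             nbl, fbr, fbl, fbr, nbl, nbr,   // bottom
             ntl, ftr, ftl, ftr, ntl, ntr}}; // top
}
```
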

After all is said and done, it is nice to see how it looks:

texture projection example image

As you can see, I used two projective textures: a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partially covered by the projective texture, while the rest of it is rendered with its default texture. Finally, you can see the green frustum wireframe of the projector camera; everything looks correct.