Python Modern Opengl Perspective Projection

In this Python Modern Opengl article we are going to talk about Perspective Projection. We are reusing some code from the previous articles, especially the code from the link below.

 

 

 

You need an image texture that is 512 x 512 in your project directory. You also need to install the Pillow library with pip install pillow. I am using this image as the texture.

Python Modern Opengl Texturing
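The texture loading code itself is not shown on this page, so here is a minimal sketch of how the 512 x 512 image can be loaded with Pillow and handed to OpenGL. The file name crate.jpg and the helper name load_texture are only placeholders, and a GL context must already exist when this runs:

from OpenGL.GL import *
from PIL import Image

def load_texture(path):
    # generate and bind a texture object
    texture = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, texture)

    # wrapping and filtering parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

    # load the image with Pillow; flip it because OpenGL expects the
    # origin at the bottom left corner
    image = Image.open(path).transpose(Image.FLIP_TOP_BOTTOM)
    img_data = image.convert("RGBA").tobytes()
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.width, image.height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, img_data)
    return texture

# usage (after the GLFW window and context are created):
# texture = load_texture("crate.jpg")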

 

 

 

Python Modern Opengl Perspective Projection

Let’s create our example. This is the complete code for Python Modern Opengl Perspective Projection.
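The complete listing is not reproduced in this copy of the article, so below is only a minimal runnable GLFW skeleton showing the overall structure the program follows; the window size and title are assumptions, and the shaders, buffers, texture and projection matrix described in the rest of the article go inside main():

import glfw
from OpenGL.GL import glClear, glClearColor, glEnable, GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_DEPTH_TEST

def main():
    # initialize glfw and create the window with an OpenGL context
    if not glfw.init():
        raise Exception("glfw can not be initialized")
    window = glfw.create_window(1280, 720, "Python Modern Opengl Perspective Projection", None, None)
    if not window:
        glfw.terminate()
        raise Exception("glfw window can not be created")
    glfw.make_context_current(window)

    # compile the shaders, create the buffers and load the texture here
    glEnable(GL_DEPTH_TEST)
    glClearColor(0.0, 0.1, 0.1, 1.0)

    # the render loop
    while not glfw.window_should_close(window):
        glfw.poll_events()
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        # rotate and draw the textured cube here
        glfw.swap_buffers(window)

    glfw.terminate()

if __name__ == "__main__":
    main()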

 

I am not going to describe every part of the above code, because I have already covered most of it in the previous articles; if you want to learn more you can check those articles through the links above. But I will describe the most important parts here.

 

 

 

These are the vertex and fragment shaders
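The shader source is not visible in this copy of the article, so the block below is only a sketch that matches the description that follows. The attribute names position, color and InTexCoords come from the fix posted in the comments below, while newColor, OutTexCoords, samplerTex and the matrix uniforms model, view and proj are assumptions:

vertex_src = """
#version 330
in vec3 position;
in vec3 color;
in vec2 InTexCoords;

out vec3 newColor;
out vec2 OutTexCoords;

uniform mat4 proj;
uniform mat4 view;
uniform mat4 model;

void main()
{
    gl_Position = proj * view * model * vec4(position, 1.0);
    newColor = color;
    OutTexCoords = InTexCoords;
}
"""

fragment_src = """
#version 330
in vec3 newColor;
in vec2 OutTexCoords;

out vec4 outColor;
uniform sampler2D samplerTex;

void main()
{
    outColor = texture(samplerTex, OutTexCoords);
}
"""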

At the top of the vertex shader we have the version for the shader, then three input values for our position, color and texture coordinates, and two output values for the color and the texture coordinates. In the fragment shader we have two input values, the new color and the texture coordinates, along with one output value for the final color and a uniform sampler variable.

 

 

 

 

What is a Uniform Variable?

A uniform is a global shader variable declared with the “uniform” storage qualifier. These act as parameters that the user of a shader program can pass to that program. Their values are stored in a program object. Uniforms are so named because they do not change from one shader invocation to the next within a particular rendering call. This makes them unlike shader stage inputs and outputs, which are often different for each invocation of a shader stage.
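As a short illustration, this is roughly how a uniform is set from Python with PyOpenGL. It assumes shader is the linked program object from the complete code and that the vertex shader has a mat4 uniform called model, so treat the names as placeholders:

import glfw
import pyrr
from OpenGL.GL import glGetUniformLocation, glUniformMatrix4fv, GL_FALSE

# look up the uniform by name in the linked program, then upload a 4x4 matrix;
# every vertex processed by the next draw call sees this same value,
# which is exactly what makes it a "uniform"
rotation = pyrr.matrix44.create_from_y_rotation(glfw.get_time())
model_loc = glGetUniformLocation(shader, "model")   # shader = program from compileProgram
glUniformMatrix4fv(model_loc, 1, GL_FALSE, rotation)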

 

 

What Are Shaders?

Shaders are little programs that rest on the GPU. Each of these programs is run for a specific stage of the graphics pipeline. In a basic sense, shaders are nothing more than programs transforming inputs to outputs. Shaders are also very isolated programs.

Vertex Shader

The vertex shader is a program on the graphics card that processes each vertex and its attributes as they appear in the vertex array. Its duty is to output the final vertex position in device coordinates and to pass on any data the fragment shader requires. That’s why the 3D transformation should take place here. Attributes the fragment shader depends on, like the color and texture coordinates, are usually passed from input to output without any calculations. In our case the vertex shader also has to apply the projection and other matrices before writing the final position, so it is no longer completely bare bones.

 

 

 

Fragment Shader 

The output from the vertex shader is interpolated over all the pixels on the screen covered by a primitive. These pixels are called fragments, and this is what the fragment shader operates on. Just like the vertex shader, it has one mandatory output: the final color of a fragment. It’s up to you to write the code for computing this color from vertex colors, texture coordinates and any other data coming from the vertex shader.

 

 

OK, now we come to the important point of this article: Perspective Projection. But before that, we are going to talk about Orthographic Projection.

 

 

 

Orthographic Projection

An orthographic projection matrix defines a cube-like frustum box that determines the clipping space; each vertex outside this box is clipped. When creating an orthographic projection matrix we specify the width, height and length of the visible frustum. All the coordinates that end up inside this frustum after being transformed to clip space with the orthographic projection matrix won’t be clipped. The frustum looks a bit like a container:

The frustum defines the visible coordinates and is specified by a width, a height and a near and far plane. Any coordinate in front of the near plane is clipped, and the same applies to coordinates behind the far plane. The orthographic frustum directly maps all coordinates inside the frustum to normalized device coordinates, since the w component of each vector is left untouched; if the w component is equal to 1.0, the perspective division doesn’t change the coordinates.
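With pyrr, which these tutorials use for the matrix math, such an orthographic projection matrix can be built roughly like this; the plane values here are only example numbers:

import pyrr

# left, right, bottom, top, near, far: everything inside this box survives
# clipping, everything outside is discarded, and w stays 1.0 so the
# perspective division leaves the coordinates unchanged
ortho_projection = pyrr.matrix44.create_orthogonal_projection_matrix(
    -2.0, 2.0,    # left, right
    -2.0, 2.0,    # bottom, top
    0.1, 100.0)   # near, far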

Python Modern Opengl Orthographic Projection

 

 

 

 

Perspective Projection

If you ever were to enjoy the graphics that real life has to offer, you’ll notice that objects that are farther away appear much smaller. This odd effect is what we call perspective. Perspective is especially noticeable when looking down the end of an infinite motorway or railway, as seen in the following image:

 

 

Perspective Projection

 

 

 

 

So now, here is the code for the Perspective Projection.
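The code block itself is missing from this copy of the article, so the following is a sketch of what the projection setup typically looks like with pyrr. The field of view, aspect ratio, near/far planes and the uniform names proj and view are assumptions, and shader is the linked program object from the complete code:

import pyrr
from OpenGL.GL import glGetUniformLocation, glUniformMatrix4fv, GL_FALSE

# a 45 degree field of view, the window's aspect ratio and the near/far planes;
# with this matrix, objects farther from the camera come out smaller
# after the perspective division
projection = pyrr.matrix44.create_perspective_projection_matrix(45.0, 1280 / 720, 0.1, 100.0)

# move the scene a little way down the negative z axis so the cube
# sits inside the frustum
view = pyrr.matrix44.create_from_translation(pyrr.Vector3([0.0, 0.0, -3.0]))

# upload both matrices to the shader before drawing
glUniformMatrix4fv(glGetUniformLocation(shader, "proj"), 1, GL_FALSE, projection)
glUniformMatrix4fv(glGetUniformLocation(shader, "view"), 1, GL_FALSE, view)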

 

 

 

 

 

So run the complete code and this will be the result

Python Modern Opengl Perspective Projection


 

4 thoughts on “Python Modern Opengl Perspective Projection”

  1. After adding the missing glfw import the code breaks
    OpenGL.error.GLError: GLError(
    err = 1281,
    description = b'invalid value',
    baseOperation = glVertexAttribPointer,
    pyArgs = (
    -1,
    3,
    GL_FLOAT,
    GL_FALSE,
    32,
    c_void_p(12),
    ),
    cArgs = (
    -1,
    3,
    GL_FLOAT,
    GL_FALSE,
    32,
    c_void_p(12),
    ),
    cArguments = (
    -1,
    3,
    GL_FLOAT,
    GL_FALSE,
    32,
    c_void_p(12),
    )
    )

    • I copied the code directly from these tutorials but I had this error in the last 3 tutorials: texturing rectangle, texturing cube and projection cube.

      I searched on the internet and the problem is that glGetAttribLocation (the last valid call) is returning -1 instead of the expected 1 when getting the color from the shader, because it considers the attribute not active. I found two solutions:

      The simplest: delete/comment lines 133 to 135
      The better: replace lines 127 to 140 by:

      # set the position of shader
      position = 0
      glBindAttribLocation(shader, position, 'position')
      glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, cube.itemsize * 8, ctypes.c_void_p(0))
      glEnableVertexAttribArray(position)

      # set the color of shader
      color = 1
      glBindAttribLocation(shader, color, 'color')
      glVertexAttribPointer(color, 3, GL_FLOAT, GL_FALSE, cube.itemsize * 8, ctypes.c_void_p(12))
      glEnableVertexAttribArray(color)

      # set the texCoords of shader
      texCoords = 2
      glBindAttribLocation(shader, texCoords, "InTexCoords")
      glVertexAttribPointer(texCoords, 2, GL_FLOAT, GL_FALSE, cube.itemsize * 8, ctypes.c_void_p(24))
      glEnableVertexAttribArray(texCoords)

      If you look carefully you will notice that there is only one change (the same change for the three attributes): instead of getting the attribute location you are binding it, because you already know the order of the vectors: first position (location 0), second color (location 1), third texture (location 2).

      The glVertexAttribPointer and glEnableVertexAttribArray calls stay the same.

      PS. The same solution applies to the three tutorials: glBindAttribLocation instead of glGetAttribLocation.

