Understanding Shaders 101

Shaders are perhaps the spookiest area of game programming, especially if you come from a non-mathematical, non-computing background and want to create super-realistic games. I am neither a shader ninja nor an expert, so correct me wherever necessary.😇


Not everyone can write shaders!

Not trying to kill motivation here 😀


I can’t promise that you will be a shader guru after you are done with this series of posts, but at least I can give you a starting point. I will share everything that I have learned about shaders in this series of posts – Understanding Shaders 10x. At the end of each post in this series, I will share important links for further study and practice. For a better understanding, go through the links mentioned under – More Info.

This post is about the history of shaders and what motivated programmers to use them. We will not get into code, at least not in this post. Just a warm-up!!

In the beginning, there were only ‘Fixed Functions’ and it was good

We will not spend too much time on fixed functions, as we have a lot of ground to cover in this post. The fixed-function pipeline refers to the older-generation pipeline that was not really controllable: the exact method by which geometry was transformed, and how fragments (pixels) acquired depth and color values, was built into the hardware and could not be changed. Then the programmable pipeline came along and gave the programmer a lot of flexibility. In the fixed pipeline, a lot of work had to be done on the CPU. With programmable GPUs, you can move that work to the video card’s processors (which are specially designed for that kind of work, and thus VERY quick at it). For example, consider animating a character with a skeleton. Earlier, we needed to do all the transformations on the CPU. Now we can do them in a vertex shader. That saves time on the CPU, which you can use for other (non-graphical) stuff, like physics or AI.
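To make the skeleton example concrete, here is a minimal sketch (plain Python, purely for illustration; a real vertex shader would be GLSL or HLSL) of the per-vertex work that moved from the CPU to the vertex shader: multiplying each vertex by a bone matrix. The matrix and vertex values are made up for the example.

```python
# Illustrative bone matrix: a 90-degree rotation about the Z axis,
# as a 4x4 matrix in homogeneous coordinates. In a real engine this
# would come from the animation system each frame.
bone = [
    [0.0, -1.0, 0.0, 0.0],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
]

def transform(matrix, v):
    """Multiply a 4x4 matrix by a 4-component vertex (x, y, z, w)."""
    return tuple(sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4))

vertex = (1.0, 0.0, 0.0, 1.0)   # one vertex of the character's mesh
print(transform(bone, vertex))  # (0.0, 1.0, 0.0, 1.0) -- rotated onto the Y axis
```

On a GPU, this multiply runs once per vertex, on thousands of vertices at the same time; that is the work the CPU no longer has to do.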


So what exactly are Shaders?


In the field of computer graphics, a shader is a computer program that is used to do shading: the production of appropriate levels of light, darkness, and color within an image, or, in the modern era, also to produce special effects or do video post-processing.


Ok, that’s it!! now you are on your own 😉.



Let us dive in

Before DirectX 8 and the OpenGL ARB assembly language, GPUs had a fixed way of transforming pixels and vertices, called “the fixed pipeline” (mentioned above). This made it impossible for developers to change how pixels and vertices were transformed and processed after passing them to the GPU, and it made games look quite similar graphics-wise.

But before we start, I want to mention something about 3D meshes.

So all the 3D models that you see are made up of triangles (Why? Because any polygon can be broken into triangles, and a triangle’s three vertices are always co-planar), with each triangle having 3 vertices and a face. Matrices play a very important role in representing and transforming this geometry. (More info)
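Here is what that looks like in data, sketched in Python with made-up coordinates: a mesh is just a list of vertex positions plus a list of faces, where each face is three indices into the vertex list.

```python
# A minimal mesh: a unit square split into two triangles.
# Vertices are (x, y, z) positions; each face indexes 3 vertices.
vertices = [
    (0.0, 0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0, 0.0),  # 1: bottom-right
    (1.0, 1.0, 0.0),  # 2: top-right
    (0.0, 1.0, 0.0),  # 3: top-left
]
faces = [
    (0, 1, 2),  # first triangle
    (0, 2, 3),  # second triangle
]
# The two triangles share vertices 0 and 2, so 4 vertices describe 2 faces.
print(len(vertices), len(faces))  # 4 2
```

Sharing vertices between faces is why index lists are used everywhere in real mesh formats: it avoids storing the same position twice.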

Programs that we write, whether in Java/C/C++/C#, are processed on the CPU; in the same way, shaders are processed on the GPU. It is worth noting that GPUs have a massively parallel architecture, whereas CPUs process data largely sequentially. (More Info)
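The reason shader work parallelizes so well is that each vertex or pixel is processed independently of all the others. A toy sketch (Python threads standing in for GPU cores; the `shade` function is made up):

```python
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    """A toy 'fragment shader': halve a grayscale value.
    Each pixel is processed independently of every other pixel."""
    return pixel // 2

framebuffer = list(range(0, 256, 16))  # 16 fake grayscale pixels

# Sequential, CPU-style processing:
cpu_result = [shade(p) for p in framebuffer]

# Parallel processing -- because no pixel depends on another,
# the work can be split across workers with no coordination:
with ThreadPoolExecutor(max_workers=4) as pool:
    gpu_result = list(pool.map(shade, framebuffer))

print(cpu_result == gpu_result)  # True: same answer either way
```

A GPU takes this idea to the extreme, running the same small program on thousands of pixels or vertices at once.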


Recap – All 3D geometry is made up of a primitive, the triangle, which has vertices and faces. Before shaders, we had to rely on the fixed-function pipeline, in which all the heavy lifting was done by the CPU and many customizations of fragments and depth could not be done. Then came shaders, which are also programs, but they are executed on GPUs, which have a massively parallel architecture.


Enough! Now seriously let us do shaders…😜

Rendering a 3D mesh on a 2D screen is not that simple: we have to calculate depth, blending, transparency, lighting, and many more things. So the shader pipeline is divided into certain parts, or types, with each having something to contribute to the final output.

  1. Vertex Shader
  2. Geometry Shader
  3. Tessellation/HULL Shader
  4. Fragment/Pixel Shader
  5. Compute Shader
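The two stages we will focus on can be sketched as a toy pipeline in Python. Everything here is illustrative: the names, the trivial orthographic “projection” (just dropping z), and the flat red color are all made up for the example.

```python
def vertex_shader(v):
    """Vertex stage: project a 3D point onto the 2D screen plane.
    Here that is just dropping z (a trivial orthographic projection)."""
    x, y, z = v
    return (x, y)

def fragment_shader(xy):
    """Fragment stage: decide the color of one fragment.
    Here every fragment gets a flat red (R, G, B)."""
    return (255, 0, 0)

triangle = [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.0, 5.0)]
projected = [vertex_shader(v) for v in triangle]  # runs once per vertex
colors = [fragment_shader(p) for p in projected]  # in reality: once per pixel
print(projected, colors[0])
```

A real pipeline inserts rasterization between these two stages (turning projected triangles into fragments), which is why the fragment shader actually runs per pixel, not per vertex.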

We will be focusing on Vertex and Fragment shaders in this series. But before we touch them, here is a brief word about the remaining three shader types.

Tessellation/Hull Shader – As the name suggests, it does all the tessellation-related work in the pipeline, i.e. breaking big surfaces into smaller surfaces – triangles. It is called after the Vertex Shader. In the Unity rendering pipeline it acts as a fixed-function stage, which means it is not programmable and is embedded in the hardware.

Geometry Shader – The Geometry Shader takes a whole primitive as input, i.e. a vertex or set of vertices, and is called after the Vertex Shader or the fixed-function vertex post-processing stage. It can manipulate or alter the geometry with the help of the inputs it has received. A point worth noting is that the Vertex Shader takes only a single vertex as input, whereas the Geometry Shader can take one vertex (a point), two vertices (a line), or three vertices (a triangle).

Compute Shader – This is a general-purpose shader, which is used outside the rendering pipeline. It is not used to draw a primitive or shade a pixel. Its main role is to accelerate parts of game rendering with arbitrary parallel computation.



I know the flowchart above is a bit too much, but that is exactly what happens behind the scenes. Don’t worry, I have a simpler version which we will be focusing on in this series.😇



I would love to discuss the Vertex Shader and Fragment Shader, but before jumping into these shader types, we have to learn some important concepts like the rasterizer, the lighting path, transformations, and the basics of vector mathematics. So bear with me; in the next post, we will dissect these topics.

Helpful Links – Triangle And Pixels

I hope this post has helped you understand the basics of shaders and their core components. We saw how a 3D mesh is rendered on a screen with the help of shaders and what stages are involved, why we don’t use the Fixed Function Pipeline anymore, and how that very reason – customization over pixels and light – motivated graphics programmers to write shaders.


If you are curious about shaders and want to know more about them, then leave a comment or email me at contact@nipundavid.com

If you are interested in knowing more about the work I have done, then do the same as above 😀

