---
title: The Graphics Pipeline
date: "April 20 - 2025"
---
<script>
import Image from "../Image.svelte"
</script>
Ever wondered how games put all that gore on your display? All that beauty is brought to life by
a process called **rendering**, and at the heart of it is the **graphics pipeline**.
In this article, we'll dive deep into the intricate details of this beast.
Like any pipeline, the **graphics pipeline** is composed
of several **stages**, each of which can be a pipeline in itself or even parallelized.
Each stage takes some input (data and configuration) and generates some output data for the next stage.
We can coarsely divide the pipeline into **4 stages**:
```math
\texttt{Application} \rightarrow {\color{#fabd2f}\texttt{Geometry Processing}} \rightarrow \texttt{Rasterization} \rightarrow \texttt{Pixel Processing}
```
The pipeline will then serve the output of the **pixel processing** stage, which is a **rendered image**,
to your pretty eyes using your display.
But to avoid drowning you in overviews, let's jump right into the gory details of the **geometry processing**
stage and have a recap afterwards to demystify this 4-stage division.
## Surfaces
Ever been jump-scared by this sight in an FPS? Why are things rendered like that?
<Image
paths={["/images/boo.png"]}
/>
In order to display a scene (like a murder scene),
we need a way of **representing** the **surface** of the objects composing it (like corpses) in computer memory.
We only care about the **surface** since we won't be seeing the insides anyway---not that we want to.
At this stage, we only care about the **shape** or the **geometry** of the **surface**.
Texturing, lighting, and all the sweet gory details come at a much later stage once all the **geometry** has been processed.
But how do we represent surfaces in computer memory?
## Vertices
There are several ways to **represent** the surfaces of 3D objects for a computer to understand.
For instance, **NURBS surfaces** are great for representing **curves** with the
**high precision** needed for **CAD**. We could also do **ray-tracing** using fancy equations to
render **photo-realistic** images.
These are all great, if we ignore the fact that they would take an eternity to process...
But what we need is a **performant** approach that can handle an entire scene with
hundreds of thousands of objects (like a lot of corpses) in a small fraction of a second. What we need is **polygonal modeling**.
**Polygonal modeling** enables us to do an exciting thing called **real-time rendering**. The idea is that we only need an
**approximation** of a surface to render it **realistically enough** for us to have some fun killing time!
We can achieve this approximation using a collection of **triangles**, **lines**, and **dots** (primitives),
which themselves are composed of a series of **vertices** (points in space).
<Image
paths={["/images/polygon_sphere.webp"]}
/>
A **vertex** is simply a point in space.
Once we get enough of these **points**, we can connect them to form **primitives** such as **triangles**, **lines**, and **dots**.
And once we connect enough of these **primitives** together, they form a **model** or a **mesh** (that we need for our corpse).
With some interesting models put together, we can compose a **scene** (like a murder scene :D).
<Image
paths={["/images/bunny.jpg"]}
/>
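To make that a bit more concrete, here is a minimal C++ sketch of these building blocks. The names and
layout are purely illustrative; real engines pack far more attributes into a vertex and rarely store
triangles this naively.
```cpp
#include <vector>

// A vertex is just a point in space (more attributes such as normals,
// texture coordinates, and colors join the party in later stages).
struct Vertex
{
    float x, y, z;
};

// A primitive connects vertices; the one we care about most is the triangle.
struct Triangle
{
    Vertex a, b, c;
};

// Enough triangles stitched together form a mesh (our corpse).
struct Mesh
{
    std::vector<Triangle> triangles;
};

// A few meshes placed together compose a scene (our murder scene).
struct Scene
{
    std::vector<Mesh> meshes;
};
```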
But let's not get ahead of ourselves. The primary type of **primitive** that we care about during **polygonal modeling**
is a **triangle**. But why not squares or polygons with a variable number of edges?
## Why Triangles?
In **Euclidean geometry**, triangles are always **planar** (they exist only in one plane),
while any polygon composed of more than 3 points may break this rule. But why is it so important
to us that a polygon resides in a single plane?
<Image
paths={["/images/planar.jpg", "/images/non_planar_1.jpg", "/images/non_planar_2.png"]}
/>
When a polygon exists only in one plane, we can safely assume that **only one face** of it can be visible
at any one time; this enables us to utilize a huge optimization technique called **back-face culling**,
which means we avoid wasting a ton of **precious processing time** on the polygons that
we know won't be visible to us. We can safely **cull** the **back-faces** since we won't
be seeing the **back** of a polygon when it's part of a closed-off model.
We figure this out by simply using the **winding order** of the triangle to determine whether we're looking at its
back or its front.
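As a rough illustration, here's a tiny C++ sketch of the winding-order test in 2D screen space, assuming
counter-clockwise triangles are treated as front-facing and a y-up coordinate system (both are
conventions, not laws):
```cpp
struct Point2D
{
    float x, y;
};

// Twice the signed area of triangle (a, b, c): the z-component of the
// cross product of the edge vectors (b - a) and (c - a).
// Positive => counter-clockwise winding, negative => clockwise
// (in a y-up coordinate system; a y-down screen space flips the sign).
float signed_area_2x(Point2D a, Point2D b, Point2D c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// With counter-clockwise triangles as front faces, clockwise ones are
// back-faces and can be culled (skipped entirely).
bool is_back_face(Point2D a, Point2D b, Point2D c)
{
    return signed_area_2x(a, b, c) < 0.0f;
}
```
In practice you rarely write this test yourself; graphics APIs expose culling as pipeline state
(e.g. `glEnable(GL_CULL_FACE)`, `glFrontFace`, and `glCullFace` in OpenGL).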
Triangles also have a very small **memory footprint**; for instance, when using the **triangle-strip** topology (more on this very soon), each additional triangle after the first one requires only **one extra vertex**.
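To see why, here's a back-of-the-envelope sketch (the helper names are hypothetical, only there to show
the arithmetic):
```cpp
#include <cstddef>

// In a triangle strip, every new vertex after the first two completes
// a new triangle with the two vertices that came before it:
// (v0, v1, v2), (v1, v2, v3), (v2, v3, v4), ...
std::size_t strip_vertex_count(std::size_t triangle_count)
{
    return triangle_count == 0 ? 0 : triangle_count + 2;
}

// A plain triangle list stores every triangle's vertices separately:
// 1000 triangles need 3000 vertices here, versus 1002 in a single strip.
std::size_t list_vertex_count(std::size_t triangle_count)
{
    return 3 * triangle_count;
}
```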
The most important attribute, in my opinion, is the **algorithmic simplicity**.
Any polygon or shape can be composed of a **set of triangles**; for instance, a rectangle is
simply **two coplanar triangles** (sketched below).
It's also common practice in computer science to break hard problems down
into simpler, smaller ones. This will be more convincing when we cover the **rasterization** stage :)
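Here's that rectangle, reusing the illustrative structs from the earlier sketch:
```cpp
struct Vertex { float x, y, z; };
struct Triangle { Vertex a, b, c; };

// A unit rectangle in the xy-plane: two coplanar triangles sharing the
// diagonal from (0, 0, 0) to (1, 1, 0), both wound counter-clockwise.
const Triangle rectangle[2] = {
    { {0.0f, 0.0f, 0.0f}, {1.0f, 0.0f, 0.0f}, {1.0f, 1.0f, 0.0f} },
    { {1.0f, 1.0f, 0.0f}, {0.0f, 1.0f, 0.0f}, {0.0f, 0.0f, 0.0f} },
};
```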
Bonus point: present-day **hardware** and **algorithms** have become **extremely efficient** at processing
triangles (sorting, rendering, etc.) after eons of evolving around them.
## Primitive Topology
So, we've got our set of triangles, but how do we make a model out of them?
## Indices
## Input Assembler
## Coordinate System -- Local Space
## Coordinate System -- World Space
## Coordinate System -- View Space
## Coordinate System -- Clip Space
## Coordinate System -- Screen Space
## Vertex Shader
## Tessellation & Geometry Shaders
## Let's Recap!
## Rasterizer
## Pixel Shader
## Output Merger
## The Future
## Conclusion
## Sources
[Tomas Akenine-Möller - Real-Time Rendering, 4th Edition](https://www.realtimerendering.com/intro.html)
<br/>
[LearnOpenGL - Hello Triangle](https://learnopengl.com/Getting-started/Hello-Triangle)
<br/>
[LearnOpenGL - Face Culling](https://learnopengl.com/Advanced-OpenGL/Face-culling)
<br/>
[Wikipedia - Polygonal Modeling](https://en.wikipedia.org/wiki/Polygonal_modeling)
<br/>
[Wikipedia - Non-uniform Rational B-spline Surfaces](https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline)
<br/>
[Wikipedia - Computer-Aided Design (CAD)](https://en.wikipedia.org/wiki/Computer-aided_design)
<br/>
[Stack Overflow - Why do 3D engines primarily use triangles to draw surfaces?](https://stackoverflow.com/questions/6100528/why-do-3d-engines-primarily-use-triangles-to-draw-surfaces)