feat: add note

This commit is contained in:
light7734 2025-05-12 15:25:01 +03:30
parent 2d579757b8
commit 8dcf3ad973
Signed by: light7734
GPG key ID: B76EEFFAED52D359
5 changed files with 142 additions and 15 deletions


@@ -53,7 +53,9 @@
]
},
"dependencies": {
"@lucide/svelte": "^0.509.0",
"katex": "^0.16.22",
"lucide": "^0.509.0",
"mermaid": "^11.6.0",
"playwright": "^1.52.0",
"rehype-katex": "^5.0.0",

pnpm-lock.yaml generated

@@ -8,9 +8,15 @@ importers:
.:
dependencies:
'@lucide/svelte':
specifier: ^0.509.0
version: 0.509.0(svelte@5.28.2)
katex:
specifier: ^0.16.22
version: 0.16.22
lucide:
specifier: ^0.509.0
version: 0.509.0
mermaid:
specifier: ^11.6.0
version: 11.6.0
@@ -442,6 +448,11 @@ packages:
'@jridgewell/trace-mapping@0.3.25':
resolution: {integrity: sha512-vNk6aEwybGtawWmy/PzwnGDOjCkLWSD2wqvjGGAgOAwCGWySYXfYoxt00IJkTF+8Lb57DwOb3Aa0o9CApepiYQ==}
'@lucide/svelte@0.509.0':
resolution: {integrity: sha512-rG/oNz5HiuG+9xZawNxD08x5sIgvfIcFVDx7J2YAOHNK9+zDAro/DAhWLOdmE0eag5CYzxwAWh8LVowXpZvW/g==}
peerDependencies:
svelte: ^5
'@mermaid-js/parser@0.4.0':
resolution: {integrity: sha512-wla8XOWvQAwuqy+gxiZqY+c7FokraOTHRWMsbB4AgRx9Sy7zKslNyejy7E+a77qHfey5GXw/ik3IXv/NHMJgaA==}
@@ -1885,6 +1896,9 @@ packages:
lru-cache@10.4.3:
resolution: {integrity: sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==}
lucide@0.509.0:
resolution: {integrity: sha512-+PUi/uilEjgcBuaLXq4y4fh/AHLgFuFyNrMbezuhofkiqRZbbj6cf5E29VndCuFcGsGpo9WIKocmfBB2GTvS7w==}
lz-string@1.5.0:
resolution: {integrity: sha512-h5bgJWpxJNswbU7qCrV0tIKQCaS3blPDrqKWx+QxzuzL1zGUzij9XCWLrSLsJPu5t+eWA/ycetzYAO5IOMcWAQ==}
hasBin: true
@@ -2943,6 +2957,10 @@ snapshots:
'@jridgewell/resolve-uri': 3.1.2
'@jridgewell/sourcemap-codec': 1.5.0
'@lucide/svelte@0.509.0(svelte@5.28.2)':
dependencies:
svelte: 5.28.2
'@mermaid-js/parser@0.4.0':
dependencies:
langium: 3.3.1
@@ -4446,6 +4464,8 @@ snapshots:
lru-cache@10.4.3: {}
lucide@0.509.0: {}
lz-string@1.5.0: {}
magic-string@0.30.17:


@@ -0,0 +1,84 @@
<script lang="ts">
import { Info } from '@lucide/svelte';
export let title: string;
</script>
<div class="note">
<div class="head">
<div class="icon">
<Info />
</div>
<div class="horiz_line"></div>
</div>
<div class="content">
<div class="line"></div>
<div class="slot">
<p>{title}</p>
<slot />
</div>
</div>
</div>
<style>
@import url('https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:ital,wght@0,100;0,200;0,300;0,400;0,500;0,600;0,700;1,100;1,200;1,300;1,400;1,500;1,600;1,700&display=swap');
.note {
display: block;
margin: 1em 0 1em 0;
border-radius: 2px;
}
.head {
display: flex;
flex-wrap: wrap;
width: 100%;
text-align: left;
}
.content {
display: flex;
margin-bottom: -1em;
padding: 0;
margin-right: auto;
margin-left: 0;
}
.line {
background-color: #8ec07c;
width: 0.1em;
margin-left: 0.85em;
margin-bottom: 1em;
margin-top: 0.5em;
}
.horiz_line {
background-color: #8ec07c;
height: 0.1em;
margin-left: 0.5em;
margin-top: 0.7em;
display: inline-block;
flex: 1;
}
.slot {
margin-left: 1em;
width: 100%;
}
.slot > p {
flex: 1;
font-family: 'IBM Plex Mono', monospace;
display: inline-block;
margin: 0;
font-weight: bolder;
}
</style>


@@ -33,11 +33,6 @@ h2 {
}
.katex {
background-color: #1d2021;
padding: 1em;
border: 1px solid #928374;
border-radius: 2px;
width: max-content;
margin-left: auto;
margin-right: auto;
margin-left: 0;
}


@@ -5,6 +5,7 @@ date: "April 20 - 2025"
<script>
import Image from "../Image.svelte"
import Note from "../Note.svelte"
</script>
Ever wondered how games put all that gore on your display? All that beauty is brought to life by
@@ -14,13 +15,34 @@ In this article we'll dive deep into the intricate details of this beast.
Like any pipeline, the **graphics pipeline** is composed
of several **stages**, each of which can be a pipeline in itself or even be parallelized.
Each stage takes some input (data and configuration) to generate some output data for the next stage.
We can coarsely divide the pipeline into **4 stages**:
```math
\texttt{Application} \rightarrow \color{#fabd2f}{\texttt{GeometryProcessing}}\color{none} \rightarrow \texttt{Rasterization} \rightarrow \texttt{PixelProcessing}
```
<Note title="A coarse division of the graphics pipeline">
Application --> Geometry Processing --> Rasterization --> Pixel Processing --> Presentation
</Note>
Before the heavy rendering work starts on the **GPU**, we simulate the world through systems
like the physics engine, game logic, networking, etc., during the **Application** stage.
This stage mostly runs on the **CPU**, which makes it well suited to executing
**sequentially dependent** logic.
<Note title="Sequentially dependent logic">
A type of execution flow where the operations depend on the results of previous steps, limiting parallel execution.
In other words, **CPUs** are great at executing **branch-heavy** code, and **GPUs** are geared
towards executing **branch-less** or **branch-light** code.
</Note>
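To make the distinction concrete, here is a small TypeScript sketch (the function names are illustrative, not from any real engine): the running-sum loop is sequentially dependent, while the per-element squaring is trivially parallelizable.

```typescript
// Sequentially dependent: each iteration needs the previous partial sum,
// so the loop cannot be parallelized as written -- CPU-friendly work.
function prefixSum(values: number[]): number[] {
  const sums: number[] = [];
  let running = 0;
  for (const v of values) {
    running += v; // depends on the previous iteration's result
    sums.push(running);
  }
  return sums;
}

// Independent per element: a GPU could process every entry at once.
function squareAll(values: number[]): number[] {
  return values.map((v) => v * v);
}

console.log(prefixSum([1, 2, 3, 4])); // [1, 3, 6, 10]
console.log(squareAll([1, 2, 3, 4])); // [1, 4, 9, 16]
```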
The updated scene data is then prepped and fed to the GPU for **Geometry Processing**; this is
where we figure out where everything ends up on our screen.
The final geometric data is then converted into **pixels** and prepped for the pixel processing stage via a process called **Rasterization**.
The **Pixel Processing** stage then uses the rasterized geometry to do **lighting**, **texturing**, and all the sweet gory details.
The pipeline will then serve (present) the output of the **pixel processing** stage, which is a **rendered image**,
to your pretty eyes using your display.
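The stages described above can be sketched as a per-frame loop. This is a purely conceptual TypeScript sketch; every type and function here is illustrative and not part of any real graphics API:

```typescript
type Scene = { objects: string[] };
type Geometry = { triangles: number };
type Fragments = { pixels: number };
type RenderedImage = { width: number; height: number };

// Application stage: CPU-side simulation (physics, game logic, networking, ...).
function application(): Scene {
  return { objects: ["player", "terrain"] };
}

// Geometry processing: figure out where everything ends up on screen.
function geometryProcessing(scene: Scene): Geometry {
  return { triangles: scene.objects.length * 1000 };
}

// Rasterization: convert the final geometry into pixel-sized fragments.
function rasterize(geometry: Geometry): Fragments {
  return { pixels: geometry.triangles * 10 };
}

// Pixel processing: lighting, texturing, etc., producing the rendered image.
function pixelProcessing(_fragments: Fragments): RenderedImage {
  return { width: 1920, height: 1080 };
}

// One frame: each stage feeds its output to the next.
function renderFrame(): RenderedImage {
  return pixelProcessing(rasterize(geometryProcessing(application())));
}
```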
But to avoid drowning you in overviews, let's jump right into the gory details of the **geometry processing**
stage and have a recap afterwards to demystify this 4-stage division.
@@ -74,6 +96,11 @@ With some interesting models put together, we can compose a **scene** (like a mu
But let's not get ahead of ourselves. The primary type of **primitive** that we care about during **polygonal modeling**
is a **triangle**. But why not squares or polygons with a variable number of edges?
<Note title="Neque porro quisquam est qui dolorem">
Lorem ipsum dolor sit **amet**, consectetur adipiscing elit. **Fusce** rhoncus eleifend elementum. Mauris quis **arcu justo**. Proin pellentesque eleifend sapien, quis dictum lacus sodales eget. Pellentesque congue dapibus libero, nec finibus est tempus varius. Duis tincidunt arcu nulla, **ultrices** malesuada tellus convallis a. Aenean pulvinar ligula arcu, **vitae** cursus mi maximus sed. Morbi iaculis efficitur suscipit. Cras **in** vehicula est, ac molestie tortor. Donec sed **quam** pulvinar, pulvinar nulla vel, aliquet enim. Curabitur **tempus**, nisi quis posuere lacinia, sapien est maximus libero, vehicula hendrerit nisi **elit** sed ligula. Pellentesque habitant morbi tristique senectus et netus et malesuada fames **ac** turpis egestas. Praesent auctor velit eu justo suscipit, quis lobortis **elit** placerat.
</Note>
## Why Triangles?
In **Euclidean geometry**, triangles are always **planar** (they exist only in one plane),
@@ -98,10 +125,9 @@ Normal surface
Triangles also have a very small **memory footprint**; for instance, when using the **triangle-strip** topology (more on this very soon), for each additional triangle after the first one, only **one extra vertex** is needed.
The most important attribute, in my opinion, is the **algorithmic simplicity**.
Any polygon or shape can be composed from a **set of triangles**; for instance, a rectangle is
simply **two coplanar triangles**.
Also, it is becoming a common practice in computer science to break down
hard problems into simpler, smaller problems. This will be more convincing when we cover the **rasterization** stage :)
Any polygon or shape can be composed from a **set of triangles**; for instance, a rectangle is simply **two coplanar triangles**.
Also, it is a common practice in computer science to break down hard problems into simpler, smaller problems.
This will be a lot more convincing when we cover the **rasterization** stage :)
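As a concrete illustration (the buffers below are hypothetical and not tied to any real graphics API), a rectangle can be stored as four vertices plus an index buffer describing two coplanar triangles, and a **triangle-strip** encodes one extra triangle per additional vertex after the first three:

```typescript
// Four corner vertices of a unit rectangle, as (x, y) pairs.
const vertices: [number, number][] = [
  [0, 0], // 0: bottom-left
  [1, 0], // 1: bottom-right
  [1, 1], // 2: top-right
  [0, 1], // 3: top-left
];

// Triangle-list topology: two triangles, three indices each.
const triangleList = [0, 1, 2, /* second triangle: */ 0, 2, 3];

// Triangle-strip topology: after the first triangle, every extra vertex
// adds one triangle, so n strip vertices encode n - 2 triangles.
const triangleStrip = [0, 1, 3, 2]; // triangles (0,1,3) and (1,3,2)
const stripTriangleCount = triangleStrip.length - 2; // 2
```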
Bonus point: present-day **hardware** and **algorithms** have become **extremely efficient** at processing