Merge pull request #74 from suny-am/fix-typos-2
Fix typos #2
eliemichel authored Feb 8, 2025
2 parents be924c4 + 69929b8 commit 3c28d65
Showing 13 changed files with 18 additions and 18 deletions.
2 changes: 1 addition & 1 deletion basic-3d-rendering/3d-meshes/a-simple-example.md
@@ -207,7 +207,7 @@ out.position = vec4f(position.x, position.y * ratio, 0.0, 1.0);
Congratulations, you have learned most of what there is to know about **trigonometry** for computer graphics!

```{hint}
- **If you cannot remember** which one is the $cos$ and which one is the $sin$ among `alpha` and `beta` (don't worry it happens to us all the time), **just take an example** of very simple rotation: `angle = 0`. In such a case, we need `alpha = 1` and `beta = 0`. If you look at a plot of the $sin$ and $cos$ functions you'll quickly see that $cos(0) = 1$ and $sin(0) = 0$
+ **If you cannot remember** which one is the $cos$ and which one is the $sin$ among `alpha` and `beta` (don't worry! It happens to everyone), **just take an example** of a very simple rotation: `angle = 0`. In such a case, we need `alpha = 1` and `beta = 0`. If you look at a plot of the $sin$ and $cos$ functions you'll quickly see that $cos(0) = 1$ and $sin(0) = 0$.
```

```{important}
2 changes: 1 addition & 1 deletion basic-3d-rendering/3d-meshes/depth-buffer.md
@@ -30,7 +30,7 @@ The **Z-Buffer algorithm** is what the GPU's render pipeline uses to solve the v
As a result, only the fragment with the lowest depth is visible in the resulting image. The depth value for each pixel is stored in a special **texture** called the **Z-buffer**. This is the only memory overhead required by the Z-buffer algorithm, making it a good fit for real time rendering.

```{topic} About transparency
- The fact that only the fragment with the lowest depth is visible is **not guaranteed** when fragments have **opacity values that are neither 0 or 1** (and alpha-blending is used). Even worst: the order in which fragments are emitted has an impact on the result (because blending a fragment **A** and then a fragment **B** is different than blending **B** then **A**).
+ The fact that only the fragment with the lowest depth is visible is **not guaranteed** when fragments have **opacity values that are neither 0 nor 1** (and alpha-blending is used). Even worse: the order in which fragments are emitted has an impact on the result (because blending a fragment **A** and then a fragment **B** is different than blending **B** then **A**).
Long story short: **transparent objects are always a bit tricky** to handle in a Z-buffer pipeline. A simple solution is to limit the number of transparent objects, and dynamically sort them wrt. their distance to the view point. More advanced schemes exist such as [Order-independent transparency](https://en.wikipedia.org/wiki/Order-independent_transparency) techniques.
```
2 changes: 1 addition & 1 deletion basic-3d-rendering/3d-meshes/projection-matrices.md
@@ -647,7 +647,7 @@ l & = \frac{1}{\tan(\alpha/2)} = \cot\frac{\alpha}{2} \\
\end{align}
$$

- Most probably you will use either fov or focal length and stick to it so there will be no need for conversion! We can still check that our formula gives again the same result:
+ Most probably you will use either fov or focal length and stick to it, so there will be no need for conversion! We can still verify that our formula gives the same result:

```C++
float fov = 2 * glm::atan(1 / focalLength);
2 changes: 1 addition & 1 deletion basic-3d-rendering/3d-meshes/transformation-matrices.md
@@ -186,7 +186,7 @@ position = M * vec4f(position, 1.0);
Mathematically, the code above makes sense: a non-square 3x4 matrix takes an input vector of size 4 and returns an output of size 3. However, **WGSL only supports square matrices** (and so do other shading languages).
```

- There would anyway be only little use of non-square matrices, because this prevents us from **chaining transforms**. Instead of returning a vector $(x, y, z)$, we would rather return the vector $(x, y, z, 1.0)$ so that we may apply again another transform. This should be easy:
+ There would be little use for non-square matrices anyway, because this prevents us from **chaining transforms**. Instead of returning a vector $(x, y, z)$, we would rather return the vector $(x, y, z, 1.0)$ so that we may apply yet another transform. This should be easy:

```rust
// Option A
2 changes: 1 addition & 1 deletion basic-3d-rendering/input-geometry/index-buffer.md
@@ -99,7 +99,7 @@ std::vector<uint16_t> indexData = {
Using the index buffer adds an **overhead** of `6 * sizeof(uint16_t)` = 12 bytes **but also saves** `2 * 5 * sizeof(float)` = 40 bytes, so even on this very simple example it is worth using.
````

- This split of data reorganizes a bit our buffer initialization method:
+ This split of data slightly reorganizes our buffer initialization method:

```{lit} C++, InitializeBuffers method (replace, also for tangle root "Vanilla")
void Application::InitializeBuffers() {
8 changes: 4 additions & 4 deletions basic-3d-rendering/input-geometry/multiple-attributes.md
@@ -37,7 +37,7 @@ fn vs_main(@location(0) in_position: vec2f, @location(1) in_color: vec3f) -> /*
}
```

- This works, but when the **number of input attribute grows**, we will prefer take instead a **single argument** whose type is a **custom struct** labeled with locations:
+ This works, but when the **number of input attributes grows**, we will instead prefer to take a **single argument** whose type is a **custom struct** labeled with locations:

```{lit} rust, Define VertexInput struct (also for tangle root "Vanilla")
/**
@@ -84,7 +84,7 @@ const char* shaderSource = R"(
Nope. The vertex attributes are only provided to the vertex shader. However, **the fragment shader can receive whatever the vertex shader returns!** This is where the structure-based approach becomes handy.

- First of all, we change again the signature of `vs_main` to **return a custom struct** (instead of `@builtin(position) vec4f`):
+ First of all, we once again change the signature of `vs_main` to **return a custom struct** (instead of `@builtin(position) vec4f`):

```{lit} rust, Vertex shader (replace, also for tangle root "Vanilla")
fn vs_main(in: VertexInput) -> VertexOutput {
@@ -125,7 +125,7 @@ fn fs_main(@location(0) color: vec3f) -> @location(0) vec4f {
}
```

- Or we can use a custom struct whose fields are labeled... like the `VertexOutput` itself. It could be a different one, as long as we stay consistent regarding `@location` indices.
+ Or, we can use a custom struct whose fields are labeled... like the `VertexOutput` itself! It could be a different one, as long as we stay consistent regarding `@location` indices.

```{lit} rust, Fragment shader (replace, also for tangle root "Vanilla")
// Or we can use a custom struct whose fields are labeled
@@ -263,7 +263,7 @@ vertexBufferLayout.attributes = vertexAttribs.data();
```
````
- The first thing we can remark is that now the **byte stride** of our position attribute $(x,y)$ has changed from `2 * sizeof(float)` to `5 * sizeof(float)`:
+ The first thing to notice is that the **byte stride** of our position attribute $(x,y)$ has now changed from `2 * sizeof(float)` to `5 * sizeof(float)`:
````{tab} With webgpu.hpp
```{lit} C++, Describe buffer stride and step mode (replace)
2 changes: 1 addition & 1 deletion basic-3d-rendering/lighting-and-material/specular.md
@@ -128,7 +128,7 @@ requiredLimits.limits.maxInterStageShaderComponents = 11;
````

```{caution}
- We normalize `V` **in the fragment shader** because even if all `out.viewDirection` are normalized, their interpolation after the rasterization is in general no longer perfectly normalized.
+ We normalize `V` **in the fragment shader** because even if all `out.viewDirection` values are normalized, their interpolation after the rasterization is in general no longer perfectly normalized.
```

So how do we compute the `viewDirection` in the vertex shader exactly? We can split the line that populates `out.position` in order to get the **world space coordinates** of the current vertex, prior to projecting it:
2 changes: 1 addition & 1 deletion basic-3d-rendering/shader-uniforms/a-first-uniform.md
@@ -69,7 +69,7 @@ requiredLimits.limits.maxUniformBufferBindingSize = 16 * 4;
Shader side
-----------

- In order to animate our scene, we create a uniform called `uTime` that we update each frame with the current time, expressed in second (as provided by `glfwGetTime()`).
+ In order to animate our scene, we create a uniform called `uTime` that we update each frame with the current time, expressed in seconds (as provided by `glfwGetTime()`).

```{note}
I usually **prefix** uniform variables with a 'u' so that it is easy to figure out when reading a long shader **when a variable is a uniform** rather than a local variable.
6 changes: 3 additions & 3 deletions basic-3d-rendering/some-interaction/camera-control.md
@@ -70,7 +70,7 @@ struct CameraState {
};
```
- We add such a state to our `Application` class.
+ We then add this struct as a state in our `Application` class.
```C++
// In the declaration of class Application
@@ -211,7 +211,7 @@ struct DragState {
// Inertia
vec2 velocity = {0.0, 0.0};
vec2 previousDelta;
- float intertia = 0.9f;
+ float inertia = 0.9f;
};
```

@@ -257,7 +257,7 @@ void Application::updateDragInertia() {
m_cameraState.angles.y = glm::clamp(m_cameraState.angles.y, -PI / 2 + 1e-5f, PI / 2 - 1e-5f);
// Dampen the velocity so that it decreases exponentially and stops
// after a few frames.
- m_drag.velocity *= m_drag.intertia;
+ m_drag.velocity *= m_drag.inertia;
updateViewMatrix();
}
}
2 changes: 1 addition & 1 deletion basic-3d-rendering/some-interaction/resizing-window.md
@@ -100,7 +100,7 @@ bool Application::initWindowAndDevice() {
```{important}
I did not show the lambda version right away because it is slightly misleading: **it is tempting** to use the **capturing context** of the lambda (the `[]` before the lambda's arguments) to provide `this` to the callback.
- However, **only non-capturing lambdas** may be casted to the raw function pointer that GLFW expects for a callback. This remark goes for any C API by the way.
+ However, **only non-capturing lambdas** may be cast to the raw function pointer that GLFW expects for a callback. In fact, this is true for any C API.
```

Resize event handler
2 changes: 1 addition & 1 deletion basic-3d-rendering/texturing/loading-from-file.md
@@ -188,7 +188,7 @@ static void writeMipMaps(
Before dealing with mip maps, we'd like to test our `loadTexture` for mip level 0. But for this we still miss one part: the **texture view**.
- To create the texture view that is used by the sampler, we need the **mip level count** and **format**. We can modify the `loadTexture` either to return these information, or as I do here to create an appropriate texture view and return it.
+ To create the texture view that is used by the sampler, we need the **mip level count** and **format**. We can modify `loadTexture` either to return this information or, as in this example, have it create an appropriate texture view and return it.
This is made **optional** by passing the returned view through a pointer, which is ignored if null.
2 changes: 1 addition & 1 deletion basic-3d-rendering/texturing/sampler.md
@@ -494,7 +494,7 @@ for (uint32_t i = 0; i < mipLevelSize.width; ++i) {
You should now see a **gradient** depending on the distance of the points to the camera. Each color of the gradient corresponds to texels sampled from a different mip level.
- Again, **the sampler automatically figures out** which level to sample. It does so based on the difference of UV coordinate between two neighbor pixels.
+ Again, **the sampler automatically figures out** which level to sample. It does so based on the difference of UV coordinates between two neighboring pixels.
```{image} /images/min-pyramid-light.svg
:align: center
2 changes: 1 addition & 1 deletion basic-3d-rendering/texturing/texture-mapping.md
@@ -52,7 +52,7 @@ fn fs_main(in: VertexOutput) -> @location(0) vec4f {
```

```{important}
- It is important that the conversion to integers (`vec2i`) is done in the fragment shader rather than in the vertex shader, because integer vertex output do not get interpolated by the rasterizer.
+ It is important that the conversion to integers (`vec2i`) is done in the fragment shader rather than in the vertex shader, because integer vertex output does not get interpolated by the rasterizer.
```

````{note}
