
Commit

update original
funkill committed Jan 22, 2025
1 parent 45bc4db commit 7119ae1
Showing 11 changed files with 1,067 additions and 1,003 deletions.
3 changes: 3 additions & 0 deletions rustbook-en/ci/dictionary.txt
@@ -253,6 +253,7 @@ interoperate
IntoFuture
IntoIterator
intra
intratask
InvalidDigit
invariants
ioerror
@@ -360,6 +361,7 @@ nondeterministic
nonequality
nongeneric
noplayground
NoStarch
NotFound
nsprust
null's
@@ -523,6 +525,7 @@ suboptimal
subpath
subslices
substring
subtasks
subteams
subtree
subtyping
8 changes: 4 additions & 4 deletions rustbook-en/src/SUMMARY.md
@@ -101,12 +101,12 @@
- [Shared-State Concurrency](ch16-03-shared-state.md)
- [Extensible Concurrency with the `Sync` and `Send` Traits](ch16-04-extensible-concurrency-sync-and-send.md)

- [Async and Await](ch17-00-async-await.md)
- [Fundamentals of Asynchronous Programming: Async, Await, Futures, and Streams](ch17-00-async-await.md)
- [Futures and the Async Syntax](ch17-01-futures-and-syntax.md)
- [Concurrency With Async](ch17-02-concurrency-with-async.md)
- [Applying Concurrency with Async](ch17-02-concurrency-with-async.md)
- [Working With Any Number of Futures](ch17-03-more-futures.md)
- [Streams](ch17-04-streams.md)
- [Digging Into the Traits for Async](ch17-05-traits-for-async.md)
- [Streams: Futures in Sequence](ch17-04-streams.md)
- [A Closer Look at the Traits for Async](ch17-05-traits-for-async.md)
- [Futures, Tasks, and Threads](ch17-06-futures-tasks-threads.md)

- [Object Oriented Programming Features of Rust](ch18-00-oop.md)
4 changes: 1 addition & 3 deletions rustbook-en/src/appendix-01-keywords.md
@@ -69,9 +69,7 @@ Rust for potential future use.
- `box`
- `do`
- `final`

* `gen`

- `gen`
- `macro`
- `override`
- `priv`
194 changes: 102 additions & 92 deletions rustbook-en/src/ch17-00-async-await.md
@@ -1,122 +1,139 @@
# Async and Await

Many operations we ask the computer to do can take a while to finish. For
example, if you used a video editor to create a video of a family celebration,
exporting it could take anywhere from minutes to hours. Similarly, downloading a
video shared by someone in your family might take a long time. It would be nice
if we could do something else while we are waiting for those long-running
processes to complete.

The video export will use as much CPU and GPU power as it can. If you only had
one CPU core, and your operating system never paused that export until it
completed, you couldn’t do anything else on your computer while it was running.
That would be a pretty frustrating experience, though. Instead, your computer’s
operating system can—and does!—invisibly interrupt the export often enough to
let you get other work done along the way.

The file download is different. It does not take up very much CPU time. Instead,
the CPU needs to wait on data to arrive from the network. While you can start
reading the data once some of it is present, it might take a while for the rest
to show up. Even once the data is all present, a video can be quite large, so it
might take some time to load it all. Maybe it only takes a second or two—but
that’s a very long time for a modern processor, which can do billions of
operations every second. It would be nice to be able to put the CPU to use for
other work while waiting for the network call to finish—so, again, your
operating system will invisibly interrupt your program so other things can
happen while the network operation is still ongoing.

> Note: The video export is the kind of operation which is often described as
> “CPU-bound” or “compute-bound”. It’s limited by the speed of the computer’s
> ability to process data within the _CPU_ or _GPU_, and how much of that speed
> it can use. The video download is the kind of operation which is often
> described as “IO-bound,” because it’s limited by the speed of the computer’s
> _input and output_. It can only go as fast as the data can be sent across the
> network.
# Fundamentals of Asynchronous Programming: Async, Await, Futures, and Streams

Many operations we ask the computer to do can take a while to finish. It would
be nice if we could do something else while we are waiting for those
long-running processes to complete. Modern computers offer two techniques for
working on more than one operation at a time: parallelism and concurrency. Once
we start writing programs that involve parallel or concurrent operations,
though, we quickly encounter new challenges inherent to *asynchronous
programming*, where operations may not finish sequentially in the order they
were started. This chapter builds on Chapter 16’s use of threads for parallelism
and concurrency by introducing an alternative approach to asynchronous
programming: Rust’s Futures, Streams, the `async` and `await` syntax that
supports them, and the tools for managing and coordinating between asynchronous
operations.

Let’s consider an example. Say you’re exporting a video you’ve created of a
family celebration, an operation that could take anywhere from minutes to hours.
The video export will use as much CPU and GPU power as it can. If you had only
one CPU core and your operating system didn’t pause that export until it
completed—that is, if it executed the export _synchronously_—you couldn’t do
anything else on your computer while that task was running. That would be a
pretty frustrating experience. Fortunately, your computer’s operating system
can, and does, invisibly interrupt the export often enough to let you get other
work done simultaneously.

Now say you’re downloading a video shared by someone else, which can also take a
while but does not take up as much CPU time. In this case, the CPU has to wait
for data to arrive from the network. While you can start reading the data once
it starts to arrive, it might take some time for all of it to show up. Even once
the data is all present, if the video is quite large, it could take at least a
second or two to load it all. That might not sound like much, but it’s a very
long time for a modern processor, which can perform billions of operations every
second. Again, your operating system will invisibly interrupt your program to
allow the CPU to perform other work while waiting for the network call to
finish.

The video export is an example of a _CPU-bound_ or _compute-bound_ operation.
It’s limited by the computer’s potential data processing speed within the CPU or
GPU, and how much of that speed it can dedicate to the operation. The video
download is an example of an _IO-bound_ operation, because it’s limited by the
speed of the computer’s _input and output_; it can only go as fast as the data
can be sent across the network.
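To make the distinction concrete, here is a small sketch (not from the book) that simulates each kind of work: a tight summation loop standing in for compute-bound work, and a `sleep` standing in for an IO-bound wait on the network.

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // Compute-bound: the CPU is busy for the entire duration.
    let start = Instant::now();
    let sum: u64 = (0..10_000_000u64).sum();
    println!("compute-bound: sum = {sum}, took {:?}", start.elapsed());

    // IO-bound (simulated): the CPU mostly waits. The sleep stands in
    // for waiting on data to arrive from the network.
    let start = Instant::now();
    thread::sleep(Duration::from_millis(50));
    println!("io-bound: waited {:?}", start.elapsed());
}
```

During the sleep, the CPU is free to do other work, which is exactly the opportunity the operating system (and, later in this chapter, an async runtime) exploits.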

In both of these examples, the operating system’s invisible interrupts provide a
form of concurrency. That concurrency only happens at the level of a whole
form of concurrency. That concurrency happens only at the level of the entire
program, though: the operating system interrupts one program to let other
programs get work done. In many cases, because we understand our programs at a
much more granular level than the operating system does, we can spot lots of
opportunities for concurrency that the operating system cannot see.
much more granular level than the operating system does, we can spot
opportunities for concurrency that the operating system can’t see.

For example, if we’re building a tool to manage file downloads, we should be
able to write our program in such a way that starting one download does not lock
up the UI, and users should be able to start multiple downloads at the same
time. Many operating system APIs for interacting with the network are
_blocking_, though. That is, these APIs block the program’s progress until the
data that they are processing is completely ready.

> Note: This is how _most_ function calls work, if you think about it! However,
> we normally reserve the term “blocking” for function calls which interact with
able to write our program so that starting one download won’t lock up the UI,
and users should be able to start multiple downloads at the same time. Many
operating system APIs for interacting with the network are _blocking_, though;
that is, they block the program’s progress until the data they’re processing is
completely ready.

> Note: This is how _most_ function calls work, if you think about it. However,
> the term _blocking_ is usually reserved for function calls that interact with
> files, the network, or other resources on the computer, because those are the
> places where an individual program would benefit from the operation being
> cases where an individual program would benefit from the operation being
> _non_-blocking.
We could avoid blocking our main thread by spawning a dedicated thread to
download each file. However, we would eventually find that the overhead of those
threads was a problem. It would also be nicer if the call were not blocking in
the first place. Last but not least, it would be better if we could write in the
same direct style we use in blocking code. Something similar to this:
download each file. However, the overhead of those threads would eventually
become a problem. It would be preferable if the call didn’t block in the first
place. It would also be better if we could write in the same direct style we use
in blocking code, similar to this:

```rust,ignore,does_not_compile
let data = fetch_data_from(url).await;
println!("{data}");
```
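An `.await` like the one above only runs when something polls the future to completion. As a hedged illustration of what that entails, here is a minimal hand-rolled executor using only the standard library, with `fetch_data_from` as a hypothetical stand-in (no real networking); real programs use a full async runtime instead, as discussed shortly.

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A minimal executor: repeatedly poll the future with a no-op waker
// until it is ready. A real runtime would sleep until woken.
fn block_on<F: Future>(future: F) -> F::Output {
    fn noop_raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            noop_raw_waker()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut future = pin!(future);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => std::thread::yield_now(), // busy-wait; a real runtime parks
        }
    }
}

// Hypothetical stand-in for a network call: resolves immediately.
async fn fetch_data_from(url: &str) -> String {
    format!("data from {url}")
}

fn main() {
    let data = block_on(fetch_data_from("https://example.com"));
    println!("{data}");
}
```

The point of the sketch is only that `async` code does nothing on its own; some executor must drive it, which is why the chapter soon turns to async runtimes.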

That is exactly what Rust’s async abstraction gives us. Before we see how this
works in practice, though, we need to take a short detour into the differences
between parallelism and concurrency.
That is exactly what Rust’s _async_ (short for _asynchronous_) abstraction gives
us. In this chapter, you’ll learn all about async as we cover the following
topics:

- How to use Rust’s `async` and `await` syntax
- How to use the async model to solve some of the same challenges we looked at
in Chapter 16
- How multithreading and async provide complementary solutions that you can
  combine in many cases

Before we see how async works in practice, though, we need to take a short
detour to discuss the differences between parallelism and concurrency.

### Parallelism and Concurrency

In the previous chapter, we treated parallelism and concurrency as mostly
interchangeable. Now we need to distinguish between them more precisely, because
the differences will show up as we start working.
We’ve treated parallelism and concurrency as mostly interchangeable so far. Now
we need to distinguish between them more precisely, because the differences will
show up as we start working.

Consider the different ways a team could split up work on a software project. We
could assign a single individual multiple tasks, or we could assign one task per
team member, or we could do a mix of both approaches.
Consider the different ways a team could split up work on a software project.
You could assign a single member multiple tasks, assign each member one task, or
use a mix of the two approaches.

When an individual works on several different tasks before any of them is
complete, this is _concurrency_. Maybe you have two different projects checked
out on your computer, and when you get bored or stuck on one project, you switch
to the other. You’re just one person, so you can’t make progress on both tasks
at the exact same time—but you can multi-task, making progress on multiple
tasks by switching between them.
at the exact same time, but you can multi-task, making progress on one at a time
by switching between them (see Figure 17-1).

<figure>

<img alt="Concurrent work flow" src="img/trpl17-01.svg" class="center" />
<img src="img/trpl17-01.svg" class="center" alt="A diagram with boxes labeled Task A and Task B, with diamonds in them representing subtasks. There are arrows pointing from A1 to B1, B1 to A2, A2 to B2, B2 to A3, A3 to A4, and A4 to B3. The arrows between the subtasks cross the boxes between Task A and Task B." />

<figcaption>Figure 17-1: A concurrent workflow, switching between Task A and Task B.</figcaption>
<figcaption>Figure 17-1: A concurrent workflow, switching between Task A and Task B</figcaption>

</figure>

When you agree to split up a group of tasks between the people on the team, with
each person taking one task and working on it alone, this is _parallelism_. Each
person on the team can make progress at the exact same time.
When the team splits up a group of tasks by having each member take one task and
work on it alone, this is _parallelism_. Each person on the team can make
progress at the exact same time (see Figure 17-2).

<figure>

<img alt="Concurrent work flow" src="img/trpl17-02.svg" class="center" />
<img src="img/trpl17-02.svg" class="center" alt="A diagram with boxes labeled Task A and Task B, with diamonds in them representing subtasks. There are arrows pointing from A1 to A2, A2 to A3, A3 to A4, B1 to B2, and B2 to B3. No arrows cross between the boxes for Task A and Task B." />

<figcaption>Figure 17-2: A parallel workflow, where work happens on Task A and Task B independently.</figcaption>
<figcaption>Figure 17-2: A parallel workflow, where work happens on Task A and Task B independently</figcaption>

</figure>

With both of these situations, you might have to coordinate between different
tasks. Maybe you _thought_ the task that one person was working on was totally
independent from everyone else’s work, but it actually needs something finished
by another person on the team. Some of the work could be done in parallel, but
some of it was actually _serial_: it could only happen in a series, one thing
after the other, as in Figure 17-3.
In both of these workflows, you might have to coordinate between different
tasks. Maybe you _thought_ the task assigned to one person was totally
independent from everyone else’s work, but it actually requires another person
on the team to finish their task first. Some of the work could be done in
parallel, but some of it was actually _serial_: it could only happen in a
series, one task after the other, as in Figure 17-3.

<figure>

<img alt="Concurrent work flow" src="img/trpl17-03.svg" class="center" />
<img src="img/trpl17-03.svg" class="center" alt="A diagram with boxes labeled Task A and Task B, with diamonds in them representing subtasks. There are arrows pointing from A1 to A2, A2 to a pair of thick vertical lines like a “pause” symbol, from that symbol to A3, B1 to B2, B2 to B3, which is below that symbol, B3 to A3, and B3 to B4." />

<figcaption>Figure 17-3: A partially parallel workflow, where work happens on Task A and Task B independently until task A3 is blocked on the results of task B3.</figcaption>
<figcaption>Figure 17-3: A partially parallel workflow, where work happens on Task A and Task B independently until Task A3 is blocked on the results of Task B3.</figcaption>

</figure>

@@ -130,24 +147,17 @@ coworker are no longer able to work in parallel, and you’re also no longer able
to work concurrently on your own tasks.

The same basic dynamics come into play with software and hardware. On a machine
with a single CPU core, the CPU can only do one operation at a time, but it can
still work concurrently. Using tools such as threads, processes, and async, the
computer can pause one activity and switch to others before eventually cycling
back to that first activity again. On a machine with multiple CPU cores, it can
also do work in parallel. One core can be doing one thing while another core
does something completely unrelated, and those actually happen at the same
time.
with a single CPU core, the CPU can perform only one operation at a time, but it
can still work concurrently. Using tools such as threads, processes, and async,
the computer can pause one activity and switch to others before eventually
cycling back to that first activity again. On a machine with multiple CPU cores,
it can also do work in parallel. One core can be performing one task while
another core performs a completely unrelated one, and those operations actually
happen at the same time.
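As a sketch of that thread-based approach (with `download` as a hypothetical stand-in for real network code), spawning one OS thread per task lets the operating system run them concurrently on one core or in parallel across several:

```rust
use std::thread;

// Hypothetical stand-in for a real download; a real implementation
// would perform network IO here.
fn download(url: &str) -> String {
    format!("contents of {url}")
}

fn main() {
    let urls = ["https://example.com/a", "https://example.com/b"];
    let handles: Vec<_> = urls
        .iter()
        .map(|url| {
            let url = url.to_string();
            // Each task gets its own OS thread, so none blocks the others.
            thread::spawn(move || download(&url))
        })
        .collect();
    // Wait for every thread and collect its result.
    for handle in handles {
        println!("{}", handle.join().unwrap());
    }
}
```

Whether these threads actually run in parallel depends on the number of cores available; the concurrency is there either way, which is the distinction this section draws.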

When working with async in Rust, we’re always dealing with concurrency.
Depending on the hardware, the operating system, and the async runtime we are
using—more on async runtimes shortly!—that concurrency may also use parallelism
using (more on async runtimes shortly), that concurrency may also use parallelism
under the hood.

Now, let’s dive into how async programming in Rust actually works! In the rest
of this chapter, we will:

- see how to use Rust’s `async` and `await` syntax
- explore how to use the async model to solve some of the same challenges we
looked at in Chapter 16
- look at how multithreading and async provide complementary solutions, which
you can even use together in many cases
Now, let’s dive into how async programming in Rust actually works.
