Review of Functional Programming section #148

Open · wants to merge 8 commits into main
@@ -1,5 +1,5 @@
---
name: Higher Order Functions
name: Higher-Order Functions
dependsOn: [software_architecture_and_design.functional.side_effects_cpp]
tags: [cpp]
attribution:
@@ -36,7 +36,7 @@ auto hello_world = []() {
hello_world();
```

The `auto`{.Cpp} keyword allows the compiler to determine the correct type for
The `auto` keyword allows the compiler to determine the correct type for
the lambda, rather than requiring you to declare it manually (which is impossible
for lambda functions). You can call or execute a lambda as you would any other function.

@@ -145,13 +145,13 @@ std::cout << result << std::endl; // prints 6

`std::function` is an example of _type erasure_.
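
To see roughly what that means, here is a minimal sketch (the helper `add_one` is purely illustrative): callables of completely different concrete types, such as a plain function and several lambdas, can all be stored behind the same `std::function` type.

```cpp
#include <functional>
#include <iostream>

int add_one(int x) { return x + 1; }

int main() {
  // a plain function, a lambda, and a capturing lambda all have different
  // concrete types, but each can be stored in a std::function<int(int)>
  std::function<int(int)> f = add_one;
  std::cout << f(2) << std::endl; // prints 3

  f = [](int x) { return x * 2; };
  std::cout << f(2) << std::endl; // prints 4

  int offset = 10;
  f = [offset](int x) { return x + offset; };
  std::cout << f(2) << std::endl; // prints 12
}
```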

## Higher Order Functions
## Higher-Order Functions

One of the main uses of lambda functions is to create temporary functions to
pass into higher order functions. A higher order function is simply a function
that has other functions as one of its arguments.
Higher-order functions are functions that take another function as an argument
or that return a function. One of the main uses of lambda functions is to create
temporary functions to pass into higher-order functions.
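
This section concentrates on passing functions in as arguments, but as a brief sketch of the other direction, a function can also build and return a new function (the name `make_multiplier` below is purely illustrative):

```cpp
#include <iostream>

// returns a new function that multiplies its argument by `factor`
auto make_multiplier(int factor) {
  return [factor](int x) { return factor * x; };
}

int main() {
  auto triple = make_multiplier(3);
  std::cout << triple(7) << std::endl; // prints 21
}
```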

To illustrate the benefits of higher order functions, let us define two
To illustrate the benefits of higher-order functions, let us define two
functions, one that calculates the sum of a `std::vector<int>`, the other
which calculates the maximum value of the same vector type.

@@ -173,8 +173,8 @@ int maximum(const std::vector<int>& data) {
```

We notice that these are really exactly the same algorithm, except that we
change the binary operation done on the rhs of the statement in the loop, we
therefore decide to combine these functions into one higher order function.
change the binary operation done on the RHS of the statement in the loop; we
therefore decide to combine these functions into one higher-order function.

```cpp
int reduce(const std::vector<int>& data, std::function<int(int, int)> bin_op) {
@@ -202,7 +202,7 @@ C++ actually has a `std::reduce`, which is part of the _algorithms_ standard lib
### The Algorithms Library

The [algorithms library](https://en.cppreference.com/w/cpp/algorithm) is a
collection of higher order functions implementing many common algorithms. These
collection of higher-order functions implementing many common algorithms. These
are typically algorithms that you write over and over again, often without
recognising their conceptual similarities. Using the algorithms library means:

@@ -267,9 +267,10 @@ Notice here we are breaking out the inner algorithm of determining if an `int` i
or not, from the outer algorithm of looping through a collection of numbers and
filtering them according to a function (as opposed to writing them together in a
standard loop). This division makes each algorithm clearer, and we also have a nice
self-contained `is_prime` function we can potentially reuse.
self-contained `is_prime` function we can potentially reuse. (Though in practice you
might want to use a more efficient algorithm for testing primality.)
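
As a rough sketch of that division of labour (the example in the lesson may differ in its details; `is_prime` here is assumed to be a simple trial-division test):

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

// inner algorithm: decide whether a single number is prime
bool is_prime(int n) {
  if (n < 2) return false;
  for (int d = 2; d * d <= n; ++d) {
    if (n % d == 0) return false;
  }
  return true;
}

int main() {
  std::vector<int> numbers = {4, 5, 6, 7, 8, 9, 10, 11};
  std::vector<int> primes;
  // outer algorithm: std::copy_if loops over the collection and keeps
  // only the elements for which the predicate returns true
  std::copy_if(numbers.begin(), numbers.end(),
               std::back_inserter(primes), is_prime);
  for (int p : primes) {
    std::cout << p << " "; // prints 5 7 11
  }
  std::cout << std::endl;
}
```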

Finally, the reduce, or `std::reduce`, which we will use to calculate the min and
Finally, there is the reduce, or `std::reduce`, which we will use to calculate the minimum and
maximum elements of a vector. At the same time we introduce another algorithm
`std::generate`, which assigns values to a range based on a generator function, and some
of the random number generation options in the standard library.
@@ -289,7 +290,7 @@ int main() {
std::random_device rd;
std::mt19937 gen(rd());
std::normal_distribution<double> dist(5, 2);
auto gen_random = [&]() { return dist(gen);};
auto gen_random = [&]() { return dist(gen); };

std::vector<double> data(1000);
std::generate(data.begin(), data.end(), gen_random);
@@ -300,14 +301,14 @@ int main() {
max = std::max(max, x);
return std::make_tuple(min, max);
};
auto [min, max] = std::accumulate(data.begin(), data.end(), std::make_tuple(0., 0.), calc_min_max);
auto [min, max] = std::reduce(std::next(data.begin()), data.end(), std::make_tuple(data[0], data[0]), calc_min_max);
std::cout << "min is "<< min << " max is "<< max << std::endl;
}
```

::::challenge{id=sum_squares title="Sum of Squares"}

Use `std::accumulate` to write a function that calculates the sum of the squares of the values in a vector.
Use `std::reduce` to write a function that calculates the sum of the squares of the values in a vector.
Your function should behave as below:

```cpp
@@ -330,7 +331,7 @@ std::cout << sum_of_squares({1, 3, -2}) << std::endl;

int sum_of_squares(const std::vector<int>& data) {
auto sum_squares = [](int sum, int x) { return sum + std::pow(x, 2); };
return std::accumulate(data.begin(), data.end(), 0, sum_squares);
return std::reduce(data.begin(), data.end(), 0, sum_squares);
}
```

@@ -1,5 +1,5 @@
---
name: Higher Order Functions
name: Higher-Order Functions
dependsOn: [software_architecture_and_design.functional.side_effects_python]
tags: [python]
attribution:
@@ -67,13 +67,13 @@ For example, see [Lambda Expressions](https://en.cppreference.com/w/cpp/language
Finally, there's another common use case of lambda functions that we'll come back to later when we see **closures**.
Due to their simplicity, it can be useful to have a lambda function as the inner function in a closure.
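
As a small preview (a minimal sketch; the name `make_adder` is purely illustrative), the inner lambda here "closes over" the variable `n` from the enclosing function:

```python
def make_adder(n):
    # the returned lambda "closes over" n from the enclosing scope
    return lambda x: x + n

add_three = make_adder(3)
print(add_three(10))  # prints 13
```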

## Higher Order Functions
## Higher-Order Functions

One of the main uses of lambda functions is to create temporary functions to
pass into higher order functions. A higher order function is simply a function
that has other functions as one of its arguments.
Higher-order functions are functions that take another function as an argument
or that return a function. One of the main uses of lambda functions is to create
temporary functions to pass into higher-order functions.

To illustrate the benifits of higher order functions, let us define two
To illustrate the benefits of higher-order functions, let us define two
functions, one that calculates the sum of a list of values, the other
which calculates the maximum value of the list.

@@ -92,8 +92,8 @@ def maximum(data):
```

We notice that these are really exactly the same algorithm, except that we
change the binary operation done on the rhs of the statement in the loop, we
therefore decide to combine these functions into one higher order function.
change the binary operation done on the RHS of the statement in the loop; we
therefore decide to combine these functions into one higher-order function.

```python
def reduce(data, bin_op):
@@ -120,9 +120,17 @@ print(reduce(data, lambda a, b: min(a, b)))
Excellent! We have reduced the amount of code we need to write, reducing the
number of possible bugs and making the code easier to maintain in the future.

Notice, though, that `max` and `min` are already binary functions, so we can
pass them directly to `reduce` without having to wrap them in lambdas:

```python
print(reduce(data, max))
print(reduce(data, min))
```

## Map, Filter, Reduce

Python has a number of higher order functions built in, including `map`,
Python has a number of higher-order functions built in, including `map`,
`filter` and `reduce`. Note that the `map` and `filter` functions in Python use
**lazy evaluation**. This means that values in an iterable collection are not
actually calculated until you need them. We'll explain some of the implications
@@ -143,7 +151,7 @@ l = [1, 2, 3]
def add_one(x):
return x + 1

# Returns a <map object> so need to cast to list
# Returns a <map object> so need to convert to list
print(list(map(add_one, l)))
print(list(map(lambda x: x + 1, l)))
```
@@ -161,7 +169,7 @@ l = [1, 2, 3]
def is_gt_one(x):
return x > 1

# Returns a <filter object> so need to cast to list
# Returns a <filter object> so need to convert to list
print(list(filter(is_gt_one, l)))
print(list(filter(lambda x: x > 1, l)))
```
@@ -382,20 +390,19 @@ print({i: 2 * i for i in range(5)})

## Why No Tuple Comprehension

Raymond Hettinger, one of the Python core developers, said in 2013:
Raymond Hettinger, one of the Python core developers, [said in 2013](https://x.com/raymondh/status/324664257004322817):

```text
Generally, lists are for looping; tuples for structs. Lists are homogeneous; tuples heterogeneous. Lists for variable length.
```
> Generally, lists are for looping; tuples for structs. Lists are homogeneous; tuples heterogeneous. Lists for variable length.

Since tuples aren't intended to represent sequences, there's no need for them to have a comprehension structure.
:::

## Generators

**Generator expressions** look exactly like you might expect a tuple
*comprehension (which don't exist) to look, and behaves a little differently
*from the other comprehensions.
comprehension (which doesn't exist) to look, and they behave a little differently
from the other comprehensions.

What happens if we try to use them in the same way as we did list
comprehensions?

@@ -417,7 +424,7 @@ for i in (2 * i for i in range(5)):

## Decorators

Decorators are higher order functions that take a function as an argument, modify it, and return it.
Decorators are higher-order functions that take a function as an argument, modify it, and return it.
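
As a minimal sketch of the idea (this simple timing decorator is an assumption for illustration, not necessarily the example used in the lesson):

```python
import functools
import time

def timed(func):
    # a decorator: takes a function and returns a new function that wraps it
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"Took {time.time() - start} seconds")
        return result
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))

slow_sum(10_000_000)  # prints something like "Took 0.12... seconds"
```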

Let's look at the following code to see some of the ways we can "decorate" functions.

@@ -550,8 +557,8 @@ Took 0.124199753 seconds

- _First-Class Functions_: functions that can be passed as arguments to other functions, returned from functions, or assigned to variables.
- _Lambda Functions_: small, nameless functions defined in the normal flow of the program with a keyword lambda.
- _Higher-Order Functions_: a function that has other functions as one of its arguments.
- _Map, Filter and Reduce_: built-in higher order functions in Python that use lazy evaluation.
- _Higher-Order Functions_: a function that has other functions as one of its arguments or that returns another function.
- _Map, Filter and Reduce_: built-in higher-order functions in Python that use lazy evaluation.
- _Comprehensions_: a more Pythonic way to structure map and filter operations.
- _Generators_: similar to list comprehensions, but they behave differently and are not evaluated until you iterate over them.
- _Decorators_: higher-order functions that take a function as an argument, modify it, and return it.
@@ -65,6 +65,11 @@ def factorial(n):
return n * factorial(n-1) # recursive call to the same function
```

Note that Python is limited in the depth of recursion it supports. If you
call `factorial(1000)`, for example, you will get a `RecursionError`. In
cases where you might exceed this limit, it's better to stick to iteration.
Still, as we're about to see, recursion is sometimes the best solution.
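
As a rough sketch of the iterative alternative (Python's default recursion limit is roughly 1000 stack frames, so a naive recursive `factorial(1000)` fails while an explicit loop does not):

```python
import math

def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# an explicit loop has no recursion depth to worry about, even for large n
print(factorial_iterative(1000) == math.factorial(1000))  # prints True
```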

::::challenge{id="recursion_on_trees" title="Recursion on trees"}

Recursion is a powerful tool for traversing tree data structures. Consider a
19 changes: 7 additions & 12 deletions software_architecture_and_design/functional/side_effects_python.md
@@ -54,7 +54,7 @@ my_cool_function(x, y)
Now imagine we have a global variable defined elsewhere that is updated by
`my_cool_function`. This variable is not even passed into the function, so it is even
less clear that it is being updated. The global variable and function might
even be declared in a separate file and brought in via an `import`
even be declared in a separate file and brought in via an `import`.

```python
z = 3
@@ -85,10 +85,9 @@ line = myfile.readline() # Same call to readline, but result is different!

The main downside of having a state that is constantly updated is that it makes
it harder for us to _reason_ about our code, to work out what it is doing.
However, the upside is that we can use state to store temporary data to make
calculations more efficient and store temporary data. For example an iteration
loop that keeps track of a running total is a common pattern in procedural
programming:
However, the upside is that we can use state to store temporary data and make
calculations more efficient. For example, an iteration loop that keeps track of
a running total is a common pattern in procedural programming:

```python nolint
result = 0
@@ -98,8 +97,8 @@ for x in data:

## Side Effects and Pure Functions

By considering how we use state in our programs, we can improve our programming by
making it more predictable, reliable, and testable. One way to achieve this is by
By considering how we use state in our programs, we can make our programming
more predictable, reliable, and testable. One way to achieve this is by
adopting functional programming principles, which promote the use of pure functions that
do not modify any external state and rely only on their input parameters to produce
their output. Pure functions are easier to reason about and test, and they enable
@@ -160,7 +159,7 @@ refactor a Python program that implements Conway's Game of Life. The basic rules
2. Any live cell with two or three live neighbors lives on to the next generation.
3. Any live cell with more than three live neighbors dies, as if by overpopulation.

The code has two bugs, one related to the improper management of the program
The code has a bug related to the improper management of the program
state, which you will fix. Refactor the code so that the `step`
function is a pure function.

@@ -187,7 +186,6 @@ def get_neighbors(grid, i, j):
(i+1, j-1), (i+1, j), (i+1, j+1)])
valid_indices = (indices[:, 0] >= 0) & (indices[:, 0] < rows) & \
(indices[:, 1] >= 0) & (indices[:, 1] < cols)
valid_indices[4] = False # exclude current cell
return grid[indices[valid_indices][:, 0], indices[valid_indices][:, 1]]

# Test
@@ -213,9 +211,6 @@ design often involves trade-offs like this: if efficiency is important we can
sacrifice purity, but if we want to be able to reason about our code and test it
easily, we should strive for purity.

The other bug is that we didn't actually include the current cell in
`valid_indices`, so we need to remove the line that excludes it.

```python
import numpy as np
