Gradient of a log-linear function with input (features), binary output value (gold label), and logistic loss
Note: This repository re-uses our code from Exercise 3, such as ScalarNode and the efficient implementation of backpropagation with caching.
We renamed the unit test file test_nodes.py from Exercise 3 to test_tasks_ex3.py. These unit tests must keep working!
Changes from ex03: I renamed arguments in ScalarNode to children. We will use not only the arguments of a function (e.g., the inputs x_1, ..., x_n) as children of a node, but also its parameters.
Implement a new scalar node (named ParameterNode) which is almost identical to a ConstantNode, except that its value can be changed.
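As a minimal sketch (the class and method names here are assumptions for illustration, not the repository's actual API), a ParameterNode can simply add a setter that a ConstantNode lacks:

```python
class ConstantNode:
    """A node whose scalar value is fixed at construction time (illustrative sketch)."""

    def __init__(self, value):
        self._value = float(value)

    def get_value(self):
        return self._value


class ParameterNode(ConstantNode):
    """Like a ConstantNode, but the value may be overwritten,
    e.g. by a gradient-descent update step."""

    def set_value(self, value):
        self._value = float(value)
```

In the exercise, the real ParameterNode would additionally plug into the ScalarNode hierarchy so it can participate in backpropagation.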
Implement a linear function node (compute the output value)
Recall: the linear function is z = w_1*x_1 + ... + w_n*x_n + b.
I extended ScalarNode and created a LinearNode. The arguments x_1, ..., x_n of LinearNode will again be just a list of other nodes. However, we will also pass a list of parameters w_1, ..., w_n and b, which should be created using ParameterNode.
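The forward computation of the linear node reduces to a weighted sum plus a bias. A plain-float sketch (in the exercise, xs and ws would be lists of nodes whose values are read first):

```python
def linear_value(xs, ws, b):
    """Forward value of a linear node: z = w_1*x_1 + ... + w_n*x_n + b."""
    assert len(xs) == len(ws), "one weight per input"
    return sum(w * x for w, x in zip(ws, xs)) + b
```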
Implement the rest of the linear function node, namely the partial derivatives.
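For z = w_1*x_1 + ... + w_n*x_n + b, the partial derivatives are dz/dx_i = w_i, dz/dw_i = x_i, and dz/db = 1. A plain-float sketch (function name and return shape are illustrative assumptions):

```python
def linear_partials(xs, ws):
    """Partial derivatives of z = sum_i w_i*x_i + b with respect to each child.
    Returns (dz/dx as a list, dz/dw as a list, dz/db)."""
    dz_dx = list(ws)  # dz/dx_i = w_i
    dz_dw = list(xs)  # dz/dw_i = x_i
    dz_db = 1.0       # dz/db = 1
    return dz_dx, dz_dw, dz_db
```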
Implement a sigmoid node
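The sigmoid node computes sigma(z) = 1 / (1 + exp(-z)); its derivative has the convenient form sigma(z) * (1 - sigma(z)), which lets the node reuse its cached forward value. A plain-float sketch:

```python
import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    """d sigma / dz = sigma(z) * (1 - sigma(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)
```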
Implement a per-example binary logistic loss (cross-entropy loss)
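For a predicted probability p = sigma(z) and a gold label y in {0, 1}, the per-example loss is L = -(y*log(p) + (1-y)*log(1-p)), with dL/dp = -y/p + (1-y)/(1-p). A plain-float sketch (the function names are illustrative assumptions):

```python
import math

def binary_logistic_loss(p, y):
    """Per-example binary cross-entropy: L = -(y*log(p) + (1-y)*log(1-p))."""
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def binary_logistic_loss_dp(p, y):
    """Partial derivative of the loss w.r.t. the probability: dL/dp = -y/p + (1-y)/(1-p)."""
    return -y / p + (1.0 - y) / (1.0 - p)
```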
Implement updating the parameters by taking the step determined by the gradient
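A gradient-descent step moves each parameter against its gradient: w <- w - lr * dL/dw. A plain-float sketch; in the exercise this would instead call the setter of each ParameterNode with gradients obtained from backpropagation (learning_rate is an illustrative name):

```python
def sgd_step(params, grads, learning_rate=0.1):
    """One gradient-descent update: w <- w - learning_rate * dL/dw per parameter."""
    return [w - learning_rate * g for w, g in zip(params, grads)]
```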
Will be included in the codebase in the next exercise.
Create a virtual environment
$ virtualenv venv
$ source venv/bin/activate
Run the unit tests from the command line
$ python -m unittest