Multiple README updates #60

Open · wants to merge 1 commit into base: master
2 changes: 1 addition & 1 deletion README.md
@@ -72,7 +72,7 @@ and engineering tips, tricks and best practices to build and train the neural ne
 <a href="https://github.com/conversationai/conversationai-models/tree/master/attention-tutorial">code</a><br/><br/></div>
 <p>Advanced RNN architectures for natural language processing. Word embeddings, text classification,
 bidirectional models, sequence to sequence models for translation. Attention mechanisms. This session also explores
-Tensorflow's powerful seq2seq API. Applications: toxic comment detection and langauge translation.
+Tensorflow's powerful seq2seq API. Applications: toxic comment detection and language translation.
 Co-author: Nithum Thain. Duration: 55 min</p></td>
 </tr>
 <tr>
2 changes: 1 addition & 1 deletion tensorflow-mnist-tutorial/README_BATCHNORM.md
@@ -13,7 +13,7 @@ Tensorflow has both a low-level and a high-level implementation for batch normal
 #### Low-level Tensorflow
 The low-level tf.nn.batch_normalization function takes your inputs, subtracts the average and divides by the variance
 that you pass in. It is up to you to compute both the batch statistics (average
-and variance of neuron outputs across a batch) and their moving averages across multiple batches and use them apropriately at trainig and
+and variance of neuron outputs across a batch) and their moving averages across multiple batches and use them appropriately at training and
 test time. It is also up to you to compute your batch statistics
 correctly depending on whether you are in a dense or a convolutional
 layer. Sample code is available in [mnist_4.2_batchnorm_convolutional.py](mnist_4.2_batchnorm_convolutional.py)
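The bookkeeping this hunk describes — computing batch statistics yourself, maintaining their moving averages, and normalizing with `tf.nn.batch_normalization` — can be sketched in plain NumPy. This is an illustrative sketch of the math only, not the repository's code; the `batch_norm` helper is hypothetical and mirrors what `tf.nn.batch_normalization(x, mean, variance, offset, scale, eps)` computes:

```python
import numpy as np

def batch_norm(x, mean, var, offset, scale, eps=1e-5):
    # What tf.nn.batch_normalization computes: normalize, then scale and shift.
    return scale * (x - mean) / np.sqrt(var + eps) + offset

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=(128, 64))  # dense layer: (batch, neurons)

# Batch statistics: one mean/variance per neuron, averaged over the batch axis.
# In a convolutional layer you would instead average over batch AND both
# spatial axes, giving one mean/variance per channel.
mean, var = x.mean(axis=0), x.var(axis=0)

# Moving averages, updated at every training step and used at test time
# in place of the batch statistics.
decay = 0.99
moving_mean, moving_var = np.zeros(64), np.ones(64)
moving_mean = decay * moving_mean + (1 - decay) * mean
moving_var = decay * moving_var + (1 - decay) * var

# Training-time normalization uses the batch statistics.
y = batch_norm(x, mean, var, offset=0.0, scale=1.0)
```

At test time the same `batch_norm` call is made with `moving_mean` and `moving_var` instead of `mean` and `var`, which is exactly the train/test distinction the paragraph above warns about.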
2 changes: 1 addition & 1 deletion tensorflow-mnist-tutorial/mlengine/README.md
@@ -63,7 +63,7 @@ gcloud ml-engine local predict --model-dir checkpoints/export/Servo/XXXXX --json
 You can read more about [batch norm here](../README_BATCHNORM.md).
 
 If you want to experiment with TF Records, the standard Tensorflow
-data format, you can run this script ((availble in the tensorflow distribution)
+data format, you can run this script ((available in the tensorflow distribution)
 to reformat the MNIST dataset into TF Records. It is not necessary for this sample though.
 
 ```bash