diff --git a/translations/es-ES/README.md b/translations/es-ES/README.md index ee00370..c4fd564 100644 --- a/translations/es-ES/README.md +++ b/translations/es-ES/README.md @@ -17,11 +17,11 @@ Secciones - **Python scripts and their tests**: existen algunos scripts python que representan prácticas simple sobre TensorFlow. Estos serán migrados prontamente para ordenarlos en una sección específica. - Experiencias explicadas en artículos Medium: - + - “custom_model_object_detection” donde se propone la generación de un modelo personalizado para la detección de jugadores de fútbol, en concreto el artículo muestra la experiencia con Lionel Messi. - - “tie_dominant_color” esta experiencia utiliza un modelo de object detection y recorta elementos para luego analizar su color y entregar opciones al desarrollador. + - “tie_dominant_color” esta experiencia utiliza un modelo de object detection y recorta elementos para luego analizar su color y entregar opciones al desarrollador. ## Lista de Idiomas - - [English](/README.md) - - [Español](/translations/es-ES/README.md) + - [English](/README.md) + - [Español](/translations/es-ES/README.md) \ No newline at end of file diff --git a/translations/it-IT/CODE_OF_CONDUCT.md b/translations/it-IT/CODE_OF_CONDUCT.md new file mode 100644 index 0000000..c7bec97 --- /dev/null +++ b/translations/it-IT/CODE_OF_CONDUCT.md @@ -0,0 +1,43 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. + +## Our Standards + +Examples of behavior that contributes to creating a positive environment include: + +* Using welcoming and inclusive language +* Being respectful of differing viewpoints and experiences +* Gracefully accepting constructive criticism +* Focusing on what is best for the community +* Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +* The use of sexualized language or imagery and unwelcome sexual attention or advances +* Trolling, insulting/derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or electronic address, without explicit permission +* Other conduct which could reasonably be considered inappropriate in a professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. 
Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at nbortolotti@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.4, available at [http://contributor-covenant.org/version/1/4](http://contributor-covenant.org/version/1/4/)
\ No newline at end of file
diff --git a/translations/it-IT/README.md b/translations/it-IT/README.md
new file mode 100644
index 0000000..606172c
--- /dev/null
+++ b/translations/it-IT/README.md
@@ -0,0 +1,27 @@
+# TensorFlow Experiences
+
+[![Build Status](https://travis-ci.org/nbortolotti/tensorflow-experiences.svg?branch=master)](https://travis-ci.org/nbortolotti/tensorflow-experiences) [![Crowdin](https://d322cqt584bo4o.cloudfront.net/tensorflow-experiences/localized.svg)](https://crowdin.com/project/tensorflow-experiences) [![Slack](https://img.shields.io/badge/slack--channel-green.svg?logo=slack&longCache=true)](http://tensorflowexperiences.slack.com/)
+
+## Overview
+
+Within this repository you will find several sections that capture my development experiences with TensorFlow.
+
+Sections
+
+- **Colaboratory**: here you can find some experiences represented in Colaboratory; the simplicity and flexibility of the tool make it very attractive for developing examples and tests directly in the cloud environment proposed by Google.
+
+- **Jupyter**: here you can find some examples in pure Jupyter format; these examples often need libraries and elements that are not fully compatible with Colaboratory and therefore require a traditional local environment.
+
+- **experiences**: these experiences are directly related to an ebook in which I am also including academic content on many of the TensorFlow and Machine Learning concepts involved.
+
+- **Python scripts and their tests**: there are some Python scripts that represent simple TensorFlow practices. These will soon be migrated and organized into a specific section.
+
+- Experiences explained in Medium articles:
+
+  - "custom_model_object_detection", which proposes building a custom model for detecting soccer players; in particular, the article shows the experience with Lionel Messi.
+  - "tie_dominant_color": this experience uses an object detection model, crops the detected elements, and then analyzes their color to offer options to the developer (a rough sketch of this idea follows below).
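+
+A minimal, illustrative sketch of the dominant-color step behind "tie_dominant_color" is shown below. It is not the repository's actual script: the crop path `images/tie_crop.jpg` is a hypothetical output of the object detection stage, and the 32-step channel quantization is an assumption made only for this example.
+
+    import numpy as np
+    from PIL import Image
+
+    def dominant_color(image_path, resize=64):
+        """Return the most frequent (quantized) RGB color of an image crop."""
+        img = Image.open(image_path).convert('RGB').resize((resize, resize))
+        pixels = np.asarray(img).reshape(-1, 3)
+        # Group similar shades together by quantizing each channel to 32-value steps.
+        quantized = (pixels // 32) * 32
+        colors, counts = np.unique(quantized, axis=0, return_counts=True)
+        return tuple(colors[counts.argmax()])
+
+    # Hypothetical crop produced by the detector for a detected tie.
+    print(dominant_color('images/tie_crop.jpg'))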
+ +## Translations + + - [English](/README.md) + - [Español](/translations/es-ES/README.md) \ No newline at end of file diff --git a/translations/it-IT/colaboratory/README.md b/translations/it-IT/colaboratory/README.md new file mode 100644 index 0000000..de89733 --- /dev/null +++ b/translations/it-IT/colaboratory/README.md @@ -0,0 +1,6 @@ +Colaboratory Overview + +## Translations + +- [English](/colaboratory/README.md) +- [Español](/translations/es-ES/colaboratory/README.md) \ No newline at end of file diff --git a/translations/it-IT/colaboratory/exp_dinnerwithfriends_es.ipynb b/translations/it-IT/colaboratory/exp_dinnerwithfriends_es.ipynb new file mode 100644 index 0000000..a98cd67 --- /dev/null +++ b/translations/it-IT/colaboratory/exp_dinnerwithfriends_es.ipynb @@ -0,0 +1,254 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "smQWTwI7k4Bf" + }, + "source": [ + "# Paso 1\n", + "**Configuracion de Object Detection API**: en este paso, se descarga el modelo para la detección de objetos, también se realizan algunas copias y eliminaciones de referencia con el objetivo de dejar todo el ambiente configurado." + ] + }, + { + "cell_type": "code", + "execution_count": 0, + "metadata": { + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + }, + "colab_type": "code", + "id": "XnBVJiIzYune" + }, + "outputs": [], + "source": [ + "!git clone https://github.com/tensorflow/models.git\n", + "!apt-get -qq install libprotobuf-java protobuf-compiler\n", + "!protoc ./models/research/object_detection/protos/string_int_label_map.proto --python_out=.\n", + "!cp -R models/research/object_detection/ object_detection/\n", + "!rm -rf models" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "qwWt0kSihqCv" + }, + "source": [ + "# Paso 2\n", + "** Importaciones ** necesarias para ejecutar la demostración de Object Detection API" + ] + }, + { + "cell_type": "code", + "execution_count": 0, + "metadata": { + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + }, + "colab_type": "code", + "id": "YspILW_rZu0v" + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "import os\n", + "import six.moves.urllib as urllib\n", + "import sys\n", + "import tarfile\n", + "import tensorflow as tf\n", + "import zipfile\n", + "\n", + "from collections import defaultdict\n", + "from io import StringIO\n", + "from matplotlib import pyplot as plt\n", + "from PIL import Image\n", + "\n", + "from object_detection.utils import label_map_util\n", + "from object_detection.utils import visualization_utils as vis_util" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "kGx_08UcmtOF" + }, + "source": [ + "# Paso 3\n", + "** Configuración ** del modelo a utilizar, ruta al modelo pre-entrenado y elementos de configuración adicionales para la implementación de Object Detection API." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 0, + "metadata": { + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + }, + "colab_type": "code", + "id": "8n_alUkLZ1gl" + }, + "outputs": [], + "source": [ + "MODEL_NAME = 'faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28'\n", + "MODEL_FILE = MODEL_NAME + '.tar.gz'\n", + "DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'\n", + "PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'\n", + "PATH_TO_LABELS = os.path.join('object_detection/data', 'mscoco_label_map.pbtxt')\n", + "NUM_CLASSES = 90\n", + "\n", + "opener = urllib.request.URLopener()\n", + "opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)\n", + "tar_file = tarfile.open(MODEL_FILE)\n", + "for file in tar_file.getmembers():\n", + " file_name = os.path.basename(file.name)\n", + " if 'frozen_inference_graph.pb' in file_name:\n", + " tar_file.extract(file, os.getcwd())\n", + " \n", + "detection_graph = tf.Graph()\n", + "with detection_graph.as_default():\n", + " od_graph_def = tf.GraphDef()\n", + " with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:\n", + " serialized_graph = fid.read()\n", + " od_graph_def.ParseFromString(serialized_graph)\n", + " tf.import_graph_def(od_graph_def, name='')\n", + " \n", + "label_map = label_map_util.load_labelmap(PATH_TO_LABELS)\n", + "categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)\n", + "category_index = label_map_util.create_category_index(categories)\n", + "\n", + "def load_image_into_numpy_array(image):\n", + " (im_width, im_height) = image.size\n", + " return np.array(image.getdata()).reshape(\n", + " (im_height, im_width, 3)).astype(np.uint8)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "PbXKPFiWh1jG" + }, + "source": [ + "# Paso 4\n", + "Sección con las imágenes de demostración" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!mkdir images\n", + "# esta url-imagen debería ser reemplazada por ustedes. 
este es solo el ejemplo almacenado en una capeta personal\n", + "!wget https://storage.googleapis.com/demostration_images/image.jpg -O images/image_1.jpg\n", + "\n", + "PATH_TO_TEST_IMAGES_DIR = 'images'\n", + "TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image_{}.jpg'.format(i)) for i in range(1, 2) ]\n", + "IMAGE_SIZE = (15, 11)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "_Vvi4-2fm2qe" + }, + "source": [ + "# Paso 5\n", + "Pieza de implementación que representa la detección concreta, llamando a la sesión TF" + ] + }, + { + "cell_type": "code", + "execution_count": 0, + "metadata": { + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + }, + "colab_type": "code", + "id": "q9FZsaZkaPUz" + }, + "outputs": [], + "source": [ + "with detection_graph.as_default():\n", + " with tf.Session(graph=detection_graph) as sess:\n", + " image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')\n", + " detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')\n", + " detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')\n", + " detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')\n", + " num_detections = detection_graph.get_tensor_by_name('num_detections:0')\n", + " for image_path in TEST_IMAGE_PATHS:\n", + " image = Image.open(image_path)\n", + " image_np = load_image_into_numpy_array(image)\n", + " image_np_expanded = np.expand_dims(image_np, axis=0)\n", + " (boxes, scores, classes, num) = sess.run(\n", + " [detection_boxes, detection_scores, detection_classes, num_detections],\n", + " feed_dict={image_tensor: image_np_expanded})\n", + " vis_util.visualize_boxes_and_labels_on_image_array(\n", + " image_np,\n", + " np.squeeze(boxes),\n", + " np.squeeze(classes).astype(np.int32),\n", + " np.squeeze(scores),\n", + " category_index,\n", + " use_normalized_coordinates=True,\n", + " line_thickness=3)\n", + " plt.figure(figsize=IMAGE_SIZE)\n", + " plt.imshow(image_np)" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "celltoolbar": "Edit Metadata", + "colab": { + "collapsed_sections": [], + "default_view": {}, + "name": "exp_dinnerwithfriends_es.ipynb", + "private_outputs": true, + "provenance": [ + { + "file_id": "1Bj6OJGSurV75btUArmTyMJj_BDH7t4YY", + "timestamp": 1517210004227 + } + ], + "version": "0.3.2", + "views": {} + }, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.3" + } + }, + "nbformat": 4, + "nbformat_minor": 1 +} diff --git a/translations/it-IT/colaboratory/exp_irisdataset_using_tfdata_tfkeras.ipynb b/translations/it-IT/colaboratory/exp_irisdataset_using_tfdata_tfkeras.ipynb new file mode 100644 index 0000000..ca5306a --- /dev/null +++ b/translations/it-IT/colaboratory/exp_irisdataset_using_tfdata_tfkeras.ipynb @@ -0,0 +1,552 @@ +{ + "nbformat": 4, + "nbformat_minor": 0, + "metadata": { + "colab": { + "name": "[eager off final] experience with iris dataset using tf.data & tf.keras & tensorflow.ipynb", + "version": "0.3.2", + "views": {}, + "default_view": {}, + "provenance": [ + { + "file_id": "1yr7Fy-mF3_F-ooj7zo4mZQ80hvKlKnpq", + "timestamp": 1533299225807 + }, + { + "file_id": "1LmzXD9NZAD2y4G7AG0UZYcU_to2NCgKn", + "timestamp": 1531254092152 + 
}, + { + "file_id": "1JBUZh6LYjMpuwMY2pLbQIK8eSIvFNH75", + "timestamp": 1531171176568 + }, + { + "file_id": "18sc7Bg06f5HccZbP7fNm-2LC7yHgemFH", + "timestamp": 1531086293811 + }, + { + "file_id": "1p8rlad-1IGayARx-xbNfVWum_9SLR4YA", + "timestamp": 1530276569660 + } + ], + "private_outputs": true, + "collapsed_sections": [] + }, + "kernelspec": { + "name": "python2", + "display_name": "Python 2" + }, + "accelerator": "GPU" + }, + "cells": [ + { + "metadata": { + "id": "JatG0ZNHOBd8", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "# Experience with iris dataset using tf.keras & tensorflow" + ] + }, + { + "metadata": { + "id": "v0a-vcttI8f-", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "import tensorflow as tf\n", + "import pandas as pd\n", + "import numpy as np" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "ibZfNz2iRFIz", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "## Data download and dataset creation witout tf.data" + ] + }, + { + "metadata": { + "id": "sMgSBFkmu5rs", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "train_ds_url = \"http://download.tensorflow.org/data/iris_training.csv\"\n", + "test_ds_url = \"http://download.tensorflow.org/data/iris_test.csv\"\n", + "ds_columns = ['SepalLength', 'SepalWidth','PetalLength', 'PetalWidth', 'Plants']\n", + "species = np.array(['Setosa', 'Versicolor', 'Virginica'], dtype=np.object)" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "HZP9j6sJ1a6u", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "## Load data" + ] + }, + { + "metadata": { + "id": "T9UHLPUqubG9", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "categories='Plants'\n", + "\n", + "train_path = tf.keras.utils.get_file(train_ds_url.split('/')[-1], train_ds_url)\n", + "test_path = tf.keras.utils.get_file(test_ds_url.split('/')[-1], test_ds_url)\n", + " \n", + "train = pd.read_csv(train_path, names=ds_columns, header=0)\n", + "train_plantfeatures, train_categories = train, train.pop(categories)\n", + "\n", + "test = pd.read_csv(test_path, names=ds_columns, header=0)\n", + "test_plantfeatures, test_categories = test, test.pop(categories)" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "oy-yiFDiReOo", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "y_categorical = tf.contrib.keras.utils.to_categorical(train_categories, num_classes=3)\n", + "y_categorical_test = tf.contrib.keras.utils.to_categorical(test_categories, num_classes=3)" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "hXxeXsEIpRCT", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "## Build the Dataset\n", + "from_tensor_slices\n", + "\n", + "To build the dataset we will use tf.data.Dataset set of elements. 
" + ] + }, + { + "metadata": { + "id": "tvbH7NGjpT6v", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "dataset = tf.data.Dataset.from_tensor_slices((train_plantfeatures, y_categorical))\n", + "dataset = dataset.batch(32)\n", + "dataset = dataset.shuffle(1000)\n", + "dataset = dataset.repeat()" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "3gj9O69fdCFA", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "dataset_test = tf.data.Dataset.from_tensor_slices((test_plantfeatures, y_categorical_test))\n", + "dataset_test = dataset_test.batch(32)\n", + "dataset_test = dataset_test.shuffle(1000)\n", + "dataset_test = dataset_test.repeat()" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "botE10MRRR4Y", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "## Build the Model" + ] + }, + { + "metadata": { + "id": "GboZlVonPAPH", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "model = tf.keras.Sequential([\n", + " tf.keras.layers.Dense(16, input_dim=4),\n", + " tf.keras.layers.Dense(3, activation=tf.nn.softmax),\n", + "])\n", + "\n", + "model.summary()" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "MJDJFdpJ3WsF", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "model.compile(loss='categorical_crossentropy',\n", + " optimizer='sgd',\n", + " metrics=['accuracy'])" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "ydyf8tTawqSb", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "## Train the Model" + ] + }, + { + "metadata": { + "id": "jgM1M4HhwpWO", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "model.fit(dataset, steps_per_epoch=32, epochs=100, verbose=1)" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "S9S_jNi9SYyW", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "## Eval the model" + ] + }, + { + "metadata": { + "id": "0PYDj5XnwJUv", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "loss, accuracy = model.evaluate(dataset_test, steps=32)\n", + "\n", + "print(\"loss:%f\"% (loss))\n", + "print(\"accuracy: %f\"% (accuracy))" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "o3elJjo4epBQ", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "## Use the model" + ] + }, + { + "metadata": { + "id": "UZmCpwEFfJrE", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "If you need to test another specie, you can modify the **new_specie** array." 
+ ] + }, + { + "metadata": { + "id": "I3cZF27oeXDv", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "new_specie = np.array([7.9,3.8,6.4,2.0])\n", + "predition = np.around(model.predict(np.expand_dims(new_specie, axis=0))).astype(np.int)[0]\n", + "print(\"This species should be %s\" % species[predition.astype(np.bool)][0])" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "dHpXgdil2ipR", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "# Save the model" + ] + }, + { + "metadata": { + "id": "oZNFf5HbybVP", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "!mkdir model" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "y3g4V4tOzC8v", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "tf.keras.models.save_model(\n", + " model,\n", + " \"./model/iris_model.h5\",\n", + " overwrite=True,\n", + " include_optimizer=True\n", + ")" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "s5y6VMkkzz90", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "new_model = tf.keras.models.load_model(\"./model/iris_model.h5\")\n", + "\n", + "xarray2 = np.array([7.9,3.8,6.4,2.0])\n", + "\n", + "pred = np.around(new_model.predict(np.expand_dims(xarray2, axis=0))).astype(np.int)[0]\n", + "\n", + "print(pred)\n", + "\n", + "print(\"That means it's a %s\" % species[pred.astype(np.bool)][0])" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "CqugxBws1r1o", + "colab_type": "text" + }, + "cell_type": "markdown", + "source": [ + "# Visualize the Graph" + ] + }, + { + "metadata": { + "id": "w5U0UOk61AgH", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "graph = tf.get_default_graph()" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "uIpQP68F1OjP", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "# Let's visualize our graph!\n", + "# Tip: to make your graph more readable you can add a\n", + "# name=\"...\" parameter to the individual Ops.\n", + "\n", + "# src: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb\n", + "# requeried if is not importated before\n", + "# import tensorflow as tf\n", + "# import numpy as np\n", + "\n", + "from IPython.display import clear_output, Image, display, HTML\n", + "\n", + "def strip_consts(graph_def, max_const_size=32):\n", + " \"\"\"Strip large constant values from graph_def.\"\"\"\n", + " strip_def = tf.GraphDef()\n", + " for n0 in graph_def.node:\n", + " n = strip_def.node.add() \n", + " n.MergeFrom(n0)\n", + " if n.op == 'Const':\n", + " tensor = n.attr['value'].tensor\n", + " size = len(tensor.tensor_content)\n", + " if size > max_const_size:\n", + " tensor.tensor_content = \"\"%size\n", + " return strip_def\n", + "\n", + "def show_graph(graph_def, max_const_size=32):\n", + " \"\"\"Visualize TensorFlow graph.\"\"\"\n", + " if hasattr(graph_def, 'as_graph_def'):\n", + " graph_def = 
graph_def.as_graph_def()\n", + " strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n", + " code = \"\"\"\n", + " \n", + " \n", + "
\n", + " \n", + "
\n", + " \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n", + "\n", + " iframe = \"\"\"\n", + " \n", + " \"\"\".format(code.replace('\"', '"'))\n", + " display(HTML(iframe))" + ], + "execution_count": 0, + "outputs": [] + }, + { + "metadata": { + "id": "510AgV5K1VPD", + "colab_type": "code", + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } + }, + "cell_type": "code", + "source": [ + "show_graph(graph)" + ], + "execution_count": 0, + "outputs": [] + } + ] +} \ No newline at end of file diff --git a/translations/it-IT/experiences/bq_integration/README.md b/translations/it-IT/experiences/bq_integration/README.md new file mode 100644 index 0000000..f22c6e1 --- /dev/null +++ b/translations/it-IT/experiences/bq_integration/README.md @@ -0,0 +1,39 @@ +# BigQuery to TF example + +This example proposes an integration of information from BigQuery to train a model using TensorFlow and Keras. + +For this integration example was used this module of [pandas](https://pandas.pydata.org/): + +- pandas-gbq, [more information](https://pandas-gbq.readthedocs.io/en/latest/) + +## Connect to Google Cloud + +for this operation it is recommended to use a service access. + +The example uses: + +- ConfigParser: methodology to extract the necessary information from a configuration file. + +file format: config.env + +format: + + [google] + cloud_id=projectid + service_key=servicekey.json + + +> Note: remember to create the configuration file and update these values to run the example. + +reading the configuration: + + config.read('config.env') + + +Configuring Cloud Project: ```project_id = config.get('google','cloud_id')``` + +Configuring service-key for the Cloud Project: + + df_train = pd.io.gbq.read_gbq('''SELECT * FROM [socialagilelearning:iris.training]''', project_id=project_id, private_key=config.get('google','service_key'), verbose=False) + +> Note: private_key \ No newline at end of file diff --git a/translations/it-IT/experiences/object_detection_5steps/README.md b/translations/it-IT/experiences/object_detection_5steps/README.md new file mode 100644 index 0000000..314626e --- /dev/null +++ b/translations/it-IT/experiences/object_detection_5steps/README.md @@ -0,0 +1,5 @@ +config script + +chmod +x /path/to/yourscript.sh + +./yourscript.sh \ No newline at end of file diff --git a/translations/it-IT/experiences/serving_clientapi_inception/README.md b/translations/it-IT/experiences/serving_clientapi_inception/README.md new file mode 100644 index 0000000..68af4ed --- /dev/null +++ b/translations/it-IT/experiences/serving_clientapi_inception/README.md @@ -0,0 +1,15 @@ +# categories + +* daisy +* dandelion +* roses +* sunflowers +* tulips + +docker container poets_inception3 + +# Model Support + +## Analyzing signatures + +python ./tensorflow/python/tools/saved_model_cli.py show --dir ./saved_model --all \ No newline at end of file diff --git a/translations/it-IT/experiences/serving_kubernetes_inception/README.md b/translations/it-IT/experiences/serving_kubernetes_inception/README.md new file mode 100644 index 0000000..d25867e --- /dev/null +++ b/translations/it-IT/experiences/serving_kubernetes_inception/README.md @@ -0,0 +1,37 @@ +# Kubernetes Engine configuration + +## gcloud configuration + +Creating cluster + + gcloud container clusters create inception-retrained-serving-cluster --num-nodes 1 --zone us-central1-f + + +Cluster Configuration + + gcloud config set container/cluster inception-retrained-serving-cluster + + + gcloud container clusters 
get-credentials inception-retrained-serving-cluster --zone us-central1-f + + +## kubectl configuration + + kubectl create -f kubernetes_config.yaml + + +### kubectl checks + + kubectl get deployments + kubectl get pods + kubectl get services + kubectl describe service inception-retrained-service + + +# Docker image + +Docker image created to implement Iris queries across serving + +[serving_iris](https://hub.docker.com/r/nbortolotti/serving_iris/) + +*use version 2 tag. \ No newline at end of file diff --git a/translations/it-IT/experiences/serving_kubernetes_iris/README.md b/translations/it-IT/experiences/serving_kubernetes_iris/README.md new file mode 100644 index 0000000..7f4fea3 --- /dev/null +++ b/translations/it-IT/experiences/serving_kubernetes_iris/README.md @@ -0,0 +1,37 @@ +# Kubernetes Engine configuration + +## gcloud configuration + +Creating cluster + + gcloud container clusters create iris-serving-cluster --num-nodes 1 --zone us-central1-f + + +Cluster Configuration + + gcloud config set container/cluster iris-serving-cluster + + + gcloud container clusters get-credentials iris-serving-cluster --zone us-central1-f + + +## kubectl configuration + + kubectl create -f kubernetes_config.yaml + + +### kubectl checks + + kubectl get deployments + kubectl get pods + kubectl get services + kubectl describe service iris-service + + +# Docker image + +Docker image created to implement Iris queries across serving + +[serving_iris](https://hub.docker.com/r/nbortolotti/serving_iris/) + +*use version 2 tag. \ No newline at end of file diff --git a/translations/it-IT/jupyter/basic_math_functions.ipynb b/translations/it-IT/jupyter/basic_math_functions.ipynb new file mode 100644 index 0000000..8aa4d90 --- /dev/null +++ b/translations/it-IT/jupyter/basic_math_functions.ipynb @@ -0,0 +1,352 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "import of the tensorflow library: essential to start the interaction with tensorflow" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import tensorflow as tf" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "1.12.0\n" + ] + } + ], + "source": [ + "# check tf version\n", + "print(tf.__version__)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Config Contants" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "in the variable \"a\" we are going to assign a constant with the initial value of \"2\"" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "a = tf.constant(2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "in the variable \"b\" we are going to assign a constant with the initial value of \"5\"" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "b = tf.constant(5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the following variable \"operation\" we will define a sum by applying \"add\". As a parameter we will use the constants defined above. 
\"a\" and \"b\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Constants - Sum" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "operation = tf.add(a, b, name='cons_add')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[link documentacion oficial - add](https://www.tensorflow.org/api_docs/python/tf/add)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After the definitions of the constants and the operation for this example, we are going to start a session in tensorflow. Then calling the \"run\" method we executing the operation in the tensorflow graph" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "7\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(operation)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Constants - Subtraction" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "sub_operation = tf.subtract(a, b, name='cons_subtraction')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[link documentacion oficial - subtract](https://www.tensorflow.org/api_docs/python/tf/add)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(sub_operation)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Simple Math Function - tf.abs" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "x = tf.constant([[-1.37 + 2.57j], [-3.37 + 5.33j]])" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "abs_function = tf.abs(x)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[official documentation](https://www.tensorflow.org/api_docs/python/tf/abs)" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[2.912353]\n", + " [6.306013]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(abs_function)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# tf.negative" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "pos_tensor = tf.constant([[5],[7]])" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [], + "source": [ + "negative_function = tf.negative(pos_tensor)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[official documentation](https://www.tensorflow.org/api_docs/python/tf/negative)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[-5]\n", + " [-7]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(negative_function)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# tf.sign" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [], + "source": [ + "sign_tensor = 
tf.constant([[5]])" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [], + "source": [ + "sign_function = tf.sign(sign_tensor)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[official documentation](https://www.tensorflow.org/api_docs/python/tf/sign) " + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[1]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(sign_function)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "tfcodestylekernel", + "language": "python", + "name": "tfcodestylekernel" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 2 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython2", + "version": "2.7.14+" + } + }, + "nbformat": 4, + "nbformat_minor": 1 +} diff --git a/translations/it-IT/jupyter/constant_types.ipynb b/translations/it-IT/jupyter/constant_types.ipynb new file mode 100644 index 0000000..b4facfb --- /dev/null +++ b/translations/it-IT/jupyter/constant_types.ipynb @@ -0,0 +1,406 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "import tensorflow as tf" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "vec = tf.constant([7,7], name='vector')" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "mat = tf.constant([[7,7],[9,9]], name='matrix')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[offitial documentation link](https://www.tensorflow.org/api_guides/python/constant_op)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " create tensors whose elements are of a specific value " + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "shape_tensor = tf.zeros([2,3],tf.int32)" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[0 0 0]\n", + " [0 0 0]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(shape_tensor)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "tensor of shape and type (unless type is specified) as the input_tensor but all elements are zeros" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "input_tensor_model = [[1,2],[3,4],[5,6]]" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "zeroslike_tensor = tf.zeros_like(input_tensor_model)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[0 0]\n", + " [0 0]\n", + " [0 0]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(zeroslike_tensor)" + ] + }, + { + "cell_type": "markdown", + 
"metadata": {}, + "source": [ + "tensor of shape and all elements are ones" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "shape_one_tensor = tf.ones([3,3],tf.int32)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[1 1 1]\n", + " [1 1 1]\n", + " [1 1 1]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(shape_one_tensor)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "tensor of shape and type (unless type is specified) as the input_tensor but all elements are ones." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "input_tensor_model_ones = [[1,2],[3,4],[5,6]]" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [], + "source": [ + "onelikes = tf.ones_like(input_tensor_model_ones)" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[1 1]\n", + " [1 1]\n", + " [1 1]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(onelikes)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "create a tensor filled with a scalar value" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "tensor_scalar = tf.fill([3, 3], 8)" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[8 8 8]\n", + " [8 8 8]\n", + " [8 8 8]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(tensor_scalar)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "creating constants that are sequences " + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "tensor_lin = tf.linspace(50.0 ,55.0, 5 , name=\"linspace\")" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[ 50. 51.25 52.5 53.75 55. 
]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(tensor_lin)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "create a sequence of numbers that begins at start and extends by increments of delta up to but not including limit" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "tensor_range = tf.range(3 ,15 , 3)" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[ 3 6 9 12]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(tensor_range)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "TensorFlow sequences are not iterable" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0\n", + "1\n", + "2\n", + "3\n" + ] + } + ], + "source": [ + "for a in range(4):\n", + " print a" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": {}, + "outputs": [ + { + "ename": "TypeError", + "evalue": "'Tensor' object is not iterable.", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", + "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0;32mfor\u001b[0m \u001b[0mb\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrange\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0;32mprint\u001b[0m \u001b[0mb\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;32m/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc\u001b[0m in \u001b[0;36m__iter__\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 474\u001b[0m \u001b[0mTypeError\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mwhen\u001b[0m \u001b[0minvoked\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 475\u001b[0m \"\"\"\n\u001b[0;32m--> 476\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mTypeError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"'Tensor' object is not iterable.\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 477\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 478\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m__bool__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;31mTypeError\u001b[0m: 'Tensor' object is not iterable." 
+ ] + } + ], + "source": [ + "for b in tf.range(4):\n", + " print b" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "tfcodestylekernel", + "language": "python", + "name": "tfcodestylekernel" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 2 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython2", + "version": "2.7.14+" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/translations/it-IT/jupyter/inputs_readers.ipynb b/translations/it-IT/jupyter/inputs_readers.ipynb new file mode 100644 index 0000000..474aeb2 --- /dev/null +++ b/translations/it-IT/jupyter/inputs_readers.ipynb @@ -0,0 +1,324 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Placeholder" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "import tensorflow as tf" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "import numpy as np" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "x = tf.placeholder(tf.float32, shape=(30,30))" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "y = tf.placeholder(tf.float32, shape=(30,30))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[oficial documentation link](https://www.tensorflow.org/api_docs/python/tf/placeholder) " + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "mul_operation = tf.matmul(x,y)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "ename": "InvalidArgumentError", + "evalue": "You must feed a value for placeholder tensor 'Placeholder_2' with dtype float and shape [30,30]\n\t [[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=[30,30], _device=\"/job:localhost/replica:0/task:0/cpu:0\"]()]]\n\nCaused by op u'Placeholder_2', defined at:\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 162, in _run_module_as_main\n \"__main__\", fname, loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/ipykernel_launcher.py\", line 16, in \n app.launch_new_instance()\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/traitlets/config/application.py\", line 658, in launch_instance\n app.start()\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/ipykernel/kernelapp.py\", line 477, in start\n ioloop.IOLoop.instance().start()\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/zmq/eventloop/ioloop.py\", line 177, in start\n super(ZMQIOLoop, self).start()\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tornado/ioloop.py\", line 888, in start\n handler_func(fd_obj, events)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tornado/stack_context.py\", line 277, in null_wrapper\n return fn(*args, **kwargs)\n File 
\"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py\", line 440, in _handle_events\n self._handle_recv()\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py\", line 472, in _handle_recv\n self._run_callback(callback, msg)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py\", line 414, in _run_callback\n callback(*args, **kwargs)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tornado/stack_context.py\", line 277, in null_wrapper\n return fn(*args, **kwargs)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/ipykernel/kernelbase.py\", line 283, in dispatcher\n return self.dispatch_shell(stream, msg)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/ipykernel/kernelbase.py\", line 235, in dispatch_shell\n handler(stream, idents, msg)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/ipykernel/kernelbase.py\", line 399, in execute_request\n user_expressions, allow_stdin)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/ipykernel/ipkernel.py\", line 196, in do_execute\n res = shell.run_cell(code, store_history=store_history, silent=silent)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/ipykernel/zmqshell.py\", line 533, in run_cell\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/IPython/core/interactiveshell.py\", line 2718, in run_cell\n interactivity=interactivity, compiler=compiler, result=result)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/IPython/core/interactiveshell.py\", line 2822, in run_ast_nodes\n if self.run_code(code, result):\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/IPython/core/interactiveshell.py\", line 2882, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)\n File \"\", line 1, in \n y = tf.placeholder(tf.float32, shape=(30,30))\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py\", line 1548, in placeholder\n return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py\", line 2094, in _placeholder\n name=name)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py\", line 767, in apply_op\n op_def=op_def)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tensorflow/python/framework/ops.py\", line 2630, in create_op\n original_op=self._default_original_op, op_def=op_def)\n File \"/Users/nickbortolotti/tensordev/lib/python2.7/site-packages/tensorflow/python/framework/ops.py\", line 1204, in __init__\n self._traceback = self._graph._extract_stack() # pylint: disable=protected-access\n\nInvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype float and shape [30,30]\n\t [[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=[30,30], _device=\"/job:localhost/replica:0/task:0/cpu:0\"]()]]\n", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mInvalidArgumentError\u001b[0m Traceback (most recent call last)", + "\u001b[0;32m\u001b[0m in 
\u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSession\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0msess\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0;32mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msess\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmul_operation\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
+    "\u001b[0;31mInvalidArgumentError\u001b[0m: You must feed a value for placeholder tensor 'Placeholder_2' with dtype float and shape [30,30]\n\t [[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=[30,30], _device=\"/job:localhost/replica:0/task:0/cpu:0\"]()]]\n"
+   ]
+  }
+ ],
+ "source": [
+  "with tf.Session() as sess:\n",
+  "    print(sess.run(mul_operation))"
+ ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": 12,
+  "metadata": {},
+  "outputs": [
+   {
+    "name": "stdout",
+    "output_type": "stream",
+    "text": [
+     "[[ 6.55514956  5.98157263  5.67549038 ...  4.78701925  6.78749037]\n",
+     " ...\n",
+     " [ 8.93607521  8.18675613  8.65964508 ...  8.04949188  9.00447083]]\n"
+    ]
+   }
+  ],
+  "source": [
+   "with tf.Session() as sess:\n",
+   "    x_random = np.random.rand(30,30)\n",
+   "    y_random = np.random.rand(30,30)\n",
+   "    print(sess.run(mul_operation, feed_dict={x:x_random, y:y_random}))"
+  ]
+ },
+ {
+  "cell_type": "code",
+  "execution_count": null,
"metadata": { + "collapsed": true + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "tfcodestylekernel", + "language": "python", + "name": "tfcodestylekernel" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 2 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython2", + "version": "2.7.14+" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/translations/it-IT/jupyter/matrix_math_functions.ipynb b/translations/it-IT/jupyter/matrix_math_functions.ipynb new file mode 100644 index 0000000..72d787a --- /dev/null +++ b/translations/it-IT/jupyter/matrix_math_functions.ipynb @@ -0,0 +1,156 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "import tensorflow as tf" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Matrix functions" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## tf.diag" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [], + "source": [ + "matrix_diag = tf.constant([7,8,9,10])" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "matrix_diag_function = tf.diag(matrix_diag)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[official documentation](https://www.tensorflow.org/api_docs/python/tf/diag)" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[ 7 0 0 0]\n", + " [ 0 8 0 0]\n", + " [ 0 0 9 0]\n", + " [ 0 0 0 10]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(matrix_diag_function)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## tf.transpose" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "matrix_transp = tf.constant([[7,8,9],[10,11,12]])" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "matrix_transp_function = tf.transpose(matrix_transp)" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[[ 7 10]\n", + " [ 8 11]\n", + " [ 9 12]]\n" + ] + } + ], + "source": [ + "with tf.Session() as ses:\n", + " print ses.run(matrix_transp_function)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "tfcodestylekernel", + "language": "python", + "name": "tfcodestylekernel" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 2 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython2", + "version": "2.7.14+" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +}