Feature/improve visual notebook #53

Open · wants to merge 11 commits into master
136 changes: 103 additions & 33 deletions Visual_Analysis.ipynb
@@ -40,15 +40,17 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# Make sure you have the required pre-reqs\n",
"\n",
"# import sys\n",
"\n",
- "# !{sys.executable} -m pip install --upgrade -r requirements.txt"
+ "# !{sys.executable} -m pip install --upgrade -r requirements.txt\n",
+ "# !{sys.executable} -m pip install opencv-python\n",
+ "# !{sys.executable} -m pip install tensorflow"
]
},
{
@@ -63,32 +65,11 @@
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": null,
"metadata": {
"tags": []
},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "2024-01-06 13:16:09.470934: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
- "2024-01-06 13:16:09.515298: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
- "2024-01-06 13:16:09.516233: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
- "To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
- "2024-01-06 13:16:10.292374: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow/python/compat/v2_compat.py:107: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\n",
- "Instructions for updating:\n",
- "non-resource variables are not supported in the long term\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"import json\n",
"import os\n",
@@ -97,34 +78,123 @@
"import numpy as np\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"from pathlib import Path\n",
"\n",
"import cv2\n",
"\n",
"import tensorflow.compat.v1 as tf\n",
"tf.disable_v2_behavior()\n",
"from tensorflow.compat.v1.io.gfile import GFile\n",
"\n",
- "from deepracer.model import load_session, visualize_gradcam_discrete_ppo, rgb2gray"
+ "from deepracer.model import load_session, visualize_gradcam_discrete_ppo, rgb2gray\n",
+ "\n",
+ "import boto3\n",
+ "s3_resource = boto3.resource('s3')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Configure and load files\n",
- "\n",
- "Provide the paths where the image and models are stored. Also define which iterations you would like to review."
+ "# Use example files to understand how the notebook works\n",
+ "Only run this cell to use the example files"
]
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Example / Alternative for logs on file-system\n",
"img_selection = 'logs/sample-model/pictures/*.png'\n",
"model_path = 'logs/sample-model/model'\n",
- "iterations = [15, 30, 48]"
+ "iterations = [15, 30, 48]\n",
+ "model_type = 'example'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Advanced - Fetch your own models from S3 / Minio"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Login\n",
"\n",
"Log in to AWS. There are several ways to do this:\n",
"1. On an EC2 instance or SageMaker Notebook with the correct IAM execution role assigned.\n",
"2. With AWS credentials in `~/.aws/`, configured via the `aws configure` command (DeepRacer-for-Cloud's `dr-start-loganalysis` supports this).\n",
"3. By setting the relevant environment variables; uncomment the section below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"AWS_DEFAULT_REGION\"] = \"\" #<-Add your region\n",
"# os.environ[\"AWS_ACCESS_KEY_ID\"] = \"\" #<-Add your access key\n",
"# os.environ[\"AWS_SECRET_ACCESS_KEY\"] = \"\" #<-Add your secret access key\n",
"# os.environ[\"AWS_SESSION_TOKEN\"] = \"\" #<-Add your session key if you have one"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure S3 to get the models\n",
"\n",
"Depending on how you are training your model, you will need a slightly different configuration to load the data.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"PREFIX='model-name' # Name of the model, without trailing '/'\n",
"BUCKET='bucket' # Bucket name is default 'bucket' when training locally\n",
"PROFILE=None # The credentials profile in .aws - 'minio' for local training\n",
"S3_ENDPOINT_URL=None # Endpoint URL: None for AWS S3, 'http://minio:9000' for local training\n",
"iterations = [1, 2, 3] # Iterations to review (the corresponding model files must exist in the model folder in S3)\n",
"img_selection = 'logs/sample-model/pictures/*.png' # replace with your own images as appropriate\n",
"model_type = 's3'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# ### Configure and load files\n",
"#\n",
"if model_type=='s3':\n",
" model_path = 'logs/' + PREFIX\n",
" Path(model_path).mkdir(parents=True, exist_ok=True)\n",
" s3_resource.Object(BUCKET, PREFIX + '/model/model_metadata.json').download_file(\n",
" f'logs/{PREFIX}/model_metadata.json')\n",
" for i in iterations:\n",
" s3_resource.Object(BUCKET, PREFIX + '/model/model_' + str(i) + '.pb').download_file(\n",
" f'logs/{PREFIX}/' + 'model_' + str(i) + '.pb')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Load the models and pictures"
]
},
{
@@ -394,7 +464,7 @@
],
"source": [
"heatmaps = []\n",
- "view_models = models_file_path[1:3]\n",
+ "view_models = models_file_path[0:len(iterations)]\n",
"\n",
"for model_file in view_models:\n",
" model, obs, model_out = load_session(model_file, my_sensor, False)\n",
@@ -468,7 +538,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.8.10"
+ "version": "3.9.16"
}
},
"nbformat": 4,
59 changes: 55 additions & 4 deletions Visual_Analysis.py
@@ -47,6 +47,8 @@
# import sys

# # !{sys.executable} -m pip install --upgrade -r requirements.txt
# # !{sys.executable} -m pip install opencv-python
# # !{sys.executable} -m pip install tensorflow
# -

#
@@ -62,6 +64,7 @@
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pathlib import Path

import cv2

@@ -70,15 +73,63 @@
from tensorflow.compat.v1.io.gfile import GFile

from deepracer.model import load_session, visualize_gradcam_discrete_ppo, rgb2gray

import boto3
s3_resource = boto3.resource('s3')
# -

- # ## Configure and load files
- #
- # Provide the paths where the image and models are stored. Also define which iterations you would like to review.
+ # # Use example files to understand how the notebook works
+ # Only run this cell to use the example files

# # Example / Alternative for logs on file-system
img_selection = 'logs/sample-model/pictures/*.png'
model_path = 'logs/sample-model/model'
iterations = [15, 30, 48]
model_type = 'example'
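Before moving on, it can help to confirm that the example glob pattern actually resolves to image files. A minimal sketch (the `list_images` helper below is illustrative, not part of the notebook):

```python
from glob import glob

def list_images(pattern):
    """Return the image paths matching a glob pattern, sorted for a stable order."""
    paths = sorted(glob(pattern))
    if not paths:
        print(f"No images found for pattern: {pattern}")
    return paths

# With the example files checked out, this should list the sample pictures:
# images = list_images('logs/sample-model/pictures/*.png')
```

If the list comes back empty, check that the notebook is running from the repository root so the relative `logs/` path resolves.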

# # Advanced - Fetch your own models from S3 / Minio

# ### Login
#
# Log in to AWS. There are several ways to do this:
# 1. On an EC2 instance or SageMaker Notebook with the correct IAM execution role assigned.
# 2. With AWS credentials in `~/.aws/`, configured via the `aws configure` command (DeepRacer-for-Cloud's `dr-start-loganalysis` supports this).
# 3. By setting the relevant environment variables; uncomment the section below.

# +
# os.environ["AWS_DEFAULT_REGION"] = "" #<-Add your region
# os.environ["AWS_ACCESS_KEY_ID"] = "" #<-Add your access key
# os.environ["AWS_SECRET_ACCESS_KEY"] = "" #<-Add your secret access key
# os.environ["AWS_SESSION_TOKEN"] = "" #<-Add your session key if you have one
# -

# ### Configure S3 to get the models
#
# Depending on how you are training your model, you will need a slightly different configuration to load the data.
#

# + tags=["parameters"]
PREFIX='model-name' # Name of the model, without trailing '/'
BUCKET='bucket' # Bucket name is default 'bucket' when training locally
PROFILE=None # The credentials profile in .aws - 'minio' for local training
S3_ENDPOINT_URL=None # Endpoint URL: None for AWS S3, 'http://minio:9000' for local training
iterations = [1, 2, 3] # Iterations to review (the corresponding model files must exist in the model folder in S3)
img_selection = 'logs/sample-model/pictures/*.png' # replace with your own images as appropriate
model_type = 's3'
# -

# ### Configure and load files
#
if model_type=='s3':
model_path = 'logs/' + PREFIX
Path(model_path).mkdir(parents=True, exist_ok=True)
s3_resource.Object(BUCKET, PREFIX + '/model/model_metadata.json').download_file(
f'logs/{PREFIX}/model_metadata.json')
for i in iterations:
s3_resource.Object(BUCKET, PREFIX + '/model/model_' + str(i) + '.pb').download_file(
f'logs/{PREFIX}/' + 'model_' + str(i) + '.pb')
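The download loop above assumes a fixed object layout in the bucket: metadata at `<PREFIX>/model/model_metadata.json` and one frozen graph per iteration at `<PREFIX>/model/model_<N>.pb`. A small helper (hypothetical, pure string construction with no S3 calls) makes that layout explicit, which is handy for checking keys before downloading:

```python
def model_object_keys(prefix, iterations):
    """Build the S3 object keys the download loop expects to exist.

    Returns (metadata_key, model_keys); no S3 calls are made here.
    """
    metadata_key = f"{prefix}/model/model_metadata.json"
    model_keys = [f"{prefix}/model/model_{i}.pb" for i in iterations]
    return metadata_key, model_keys
```

Note that `PROFILE` and `S3_ENDPOINT_URL` are set in the parameters cell but the default `boto3.resource('s3')` ignores them; for Minio you would typically build the resource from a `boto3.session.Session(profile_name=PROFILE)` and pass `endpoint_url=S3_ENDPOINT_URL` instead.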

# # Load the models and pictures

# Load the model metadata in, and define which sensor is in use.
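The metadata step boils down to parsing `model_metadata.json` and reading its `sensor` list. A sketch of that parse (the fallback to a single front-facing camera is an assumption for older metadata files that omit the key):

```python
import json

def read_sensors(metadata_path):
    """Read the sensor list from a DeepRacer model_metadata.json file."""
    with open(metadata_path) as f:
        metadata = json.load(f)
    # Older metadata files may omit 'sensor'; assume a single camera then.
    return metadata.get('sensor', ['FRONT_FACING_CAMERA'])
```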

@@ -154,7 +205,7 @@

# +
heatmaps = []
- view_models = models_file_path[1:3]
+ view_models = models_file_path[0:len(iterations)]

for model_file in view_models:
model, obs, model_out = load_session(model_file, my_sensor, False)