diff --git a/docs/tutorial_api-examples.md b/docs/tutorial_api-examples.md index 1471433b..5aa72eb0 100644 --- a/docs/tutorial_api-examples.md +++ b/docs/tutorial_api-examples.md @@ -51,8 +51,8 @@ for text in streamer: To run this (it can be found [here](https://github.com/dusty-nv/jetson-containers/blob/master/packages/llm/transformers/test.py){:target="_blank"}), you can mount a directory containing the script or your jetson-containers directory: ```bash -./run.sh --volume $PWD/packages/llm:/mount --workdir /mount \ - $(./autotag l4t-text-generation) \ +jetson-containers run --volume $PWD/packages/llm:/mount --workdir /mount \ + $(autotag l4t-text-generation) \ python3 transformers/test.py ``` @@ -127,7 +127,7 @@ while True: This [example](https://github.com/dusty-nv/jetson-containers/blob/master/packages/llm/local_llm/chat/example.py){:target="_blank"} keeps an interactive chat running with text being entered from the terminal. You can start it like this: -```python +```bash -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm.chat.example ``` diff --git a/docs/tutorial_audiocraft.md b/docs/tutorial_audiocraft.md index 9cc6547d..c389ac29 100644 --- a/docs/tutorial_audiocraft.md +++ b/docs/tutorial_audiocraft.md @@ -23,9 +23,7 @@ Let's run Meta's [AudioCraft](https://github.com/facebookresearch/audiocraft), t ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## How to start @@ -33,8 +31,7 @@ Let's run Meta's [AudioCraft](https://github.com/facebookresearch/audiocraft), t -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image. +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image. ``` -cd jetson-containers -./run.sh $(./autotag audiocraft) +jetson-containers run $(autotag audiocraft) ``` The container has a default run command (`CMD`) that will automatically start the Jupyter Lab server. diff --git a/docs/tutorial_live-llava.md b/docs/tutorial_live-llava.md index 2a5aeaac..d7b4e531 100644 --- a/docs/tutorial_live-llava.md +++ b/docs/tutorial_live-llava.md @@ -46,7 +46,7 @@ The interactive web UI supports event filters, alerts, and multimodal [vector DB The [VideoQuery](https://github.com/dusty-nv/jetson-containers/blob/master/packages/llm/local_llm/agents/video_query.py){:target="_blank"} agent processes an incoming camera or video feed on prompts in a closed loop with the VLM. Navigate your browser to `https://:8050` after launching it, proceed past the [SSL warning](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm#enabling-httpsssl){:target="_blank"}, and see this [demo walkthrough](https://www.youtube.com/watch?v=dRmAGGuupuE){:target="_blank"} video on using the web UI. 
```bash -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm.agents.video_query --api=mlc \ --model Efficient-Large-Model/VILA-2.7b \ --max-context-len 768 \ @@ -64,9 +64,9 @@ This uses [`jetson_utils`](https://github.com/dusty-nv/jetson-utils) for video I The example above was running on a live camera, but you can also read and write a [video file or network stream](https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md) by substituting the path or URL to the `--video-input` and `--video-output` command-line arguments like this: ```bash -./run.sh \ +jetson-containers run \ - -v /path/to/your/videos:/mount + -v /path/to/your/videos:/mount \ - $(./autotag local_llm) \ + $(autotag local_llm) \ python3 -m local_llm.agents.video_query --api=mlc \ --model Efficient-Large-Model/VILA-2.7b \ --max-new-tokens 32 \ @@ -84,7 +84,7 @@ If you launch the [VideoQuery](https://github.com/dusty-nv/jetson-containers/blo To enable this mode, first follow the [**NanoDB tutorial**](tutorial_nanodb.md) to download, index, and test the database. Then launch VideoQuery like this: ```bash -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm.agents.video_query --api=mlc \ --model Efficient-Large-Model/VILA-2.7b \ --max-context-len 768 \ diff --git a/docs/tutorial_llava.md b/docs/tutorial_llava.md index d05349c6..c2af6390 100644 --- a/docs/tutorial_llava.md +++ b/docs/tutorial_llava.md @@ -44,15 +44,13 @@ In addition to Llava, the [`local_llm`](tutorial_nano-vlm.md) pipeline supports ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ### Download Model ``` -./run.sh --workdir=/opt/text-generation-webui $(./autotag text-generation-webui) \ +jetson-containers run --workdir=/opt/text-generation-webui $(autotag text-generation-webui) \ python3 download-model.py --output=/data/models/text-generation-webui \ TheBloke/llava-v1.5-13B-GPTQ ``` @@ -60,7 +58,7 @@ In addition to Llava, the [`local_llm`](tutorial_nano-vlm.md) pipeline supports ### Start Web UI with Multimodal Extension ``` -./run.sh --workdir=/opt/text-generation-webui $(./autotag text-generation-webui) \ +jetson-containers run --workdir=/opt/text-generation-webui $(autotag text-generation-webui) \ python3 server.py --listen \ --model-dir /data/models/text-generation-webui \ --model TheBloke_llava-v1.5-13B-GPTQ \ @@ -102,7 +100,7 @@ This example uses the upstream [Llava repo](https://github.com/haotian-liu/LLaVA ### llava-v1.5-7b ``` -./run.sh $(./autotag llava) \ +jetson-containers run $(autotag llava) \ python3 -m llava.serve.cli \ --model-path liuhaotian/llava-v1.5-7b \ --image-file /data/images/hoover.jpg @@ -111,7 +109,7 @@ This example uses the upstream [Llava repo](https://github.com/haotian-liu/LLaVA ### llava-v1.5-13b ``` bash -./run.sh $(./autotag llava) \ +jetson-containers run $(autotag llava) \ python3 -m llava.serve.cli \ --model-path liuhaotian/llava-v1.5-13b \ --image-file /data/images/hoover.jpg @@ -188,7 +186,7 @@ python3 -m llava.serve.model_worker \ * [mys/ggml_llava-v1.5-13b](https://huggingface.co/mys/ggml_llava-v1.5-13b) ```bash -./run.sh --workdir=/opt/llama.cpp/bin $(./autotag llama_cpp:gguf) \ +jetson-containers run --workdir=/opt/llama.cpp/bin $(autotag llama_cpp:gguf) \ /bin/bash -c './llava-cli \ --model $(huggingface-downloader mys/ggml_llava-v1.5-13b/ggml-model-q4_k.gguf) \ --mmproj 
$(huggingface-downloader mys/ggml_llava-v1.5-13b/mmproj-model-f16.gguf) \ @@ -205,7 +203,7 @@ python3 -m llava.serve.model_worker \ A lower temperature like 0.1 is recommended for better quality (`--temp 0.1`), and if you omit `--prompt` it will describe the image: ```bash -./run.sh --workdir=/opt/llama.cpp/bin $(./autotag llama_cpp:gguf) \ +jetson-containers run --workdir=/opt/llama.cpp/bin $(autotag llama_cpp:gguf) \ /bin/bash -c './llava-cli \ --model $(huggingface-downloader mys/ggml_llava-v1.5-13b/ggml-model-q4_k.gguf) \ --mmproj $(huggingface-downloader mys/ggml_llava-v1.5-13b/mmproj-model-f16.gguf) \ diff --git a/docs/tutorial_minigpt4.md b/docs/tutorial_minigpt4.md index c311231a..878f47c7 100644 --- a/docs/tutorial_minigpt4.md +++ b/docs/tutorial_minigpt4.md @@ -27,9 +27,7 @@ Give your locally running LLM an access to vision, by running [MiniGPT-4](https: ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## Start `minigpt4` container with models @@ -37,8 +35,7 @@ Give your locally running LLM an access to vision, by running [MiniGPT-4](https: To start the MiniGPT4 container and webserver with the recommended models, run this command: ``` -cd jetson-containers -./run.sh $(./autotag minigpt4) /bin/bash -c 'cd /opt/minigpt4.cpp/minigpt4 && python3 webui.py \ +jetson-containers run $(autotag minigpt4) /bin/bash -c 'cd /opt/minigpt4.cpp/minigpt4 && python3 webui.py \ $(huggingface-downloader --type=dataset maknee/minigpt4-13b-ggml/minigpt4-13B-f16.bin) \ $(huggingface-downloader --type=dataset maknee/ggml-vicuna-v0-quantized/ggml-vicuna-13B-v0-q5_k.bin)' ``` diff --git a/docs/tutorial_nano-vlm.md b/docs/tutorial_nano-vlm.md index 798fee84..52dd248d 100644 --- a/docs/tutorial_nano-vlm.md +++ b/docs/tutorial_nano-vlm.md @@ -46,7 +46,7 @@ The optimized [`local_llm`](https://github.com/dusty-nv/jetson-containers/tree/m ``` bash -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm --api=mlc \ --model liuhaotian/llava-v1.6-vicuna-7b \ --max-context-len 768 \ @@ -62,7 +62,7 @@ You'll end up at a `>> PROMPT:` in which you can enter the path or URL of an ima During testing, you can specify prompts on the command-line that will run sequentially: ``` -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm --api=mlc \ --model liuhaotian/llava-v1.6-vicuna-7b \ --max-context-len 768 \ @@ -90,7 +90,7 @@ You can also use [`--prompt /data/prompts/images.json`](https://github.com/dusty When prompted, these models can also output in constrained JSON formats (which the LLaVA authors cover in their [LLaVA-1.5 paper](https://arxiv.org/abs/2310.03744)), and can be used to programatically query information about the image: ``` -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm --api=mlc \ --model liuhaotian/llava-v1.5-13b \ --prompt '/data/images/hoover.jpg' \ @@ -114,7 +114,7 @@ To use local_llm with a web UI instead, see the [Voice Chat](https://github.com/ These models can also be used with the [Live Llava](tutorial_live-llava.md) agent for continuous streaming - just substitute the desired model name below: ``` bash -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm.agents.video_query --api=mlc \ --model Efficient-Large-Model/VILA-2.7b \ 
--max-context-len 768 \ diff --git a/docs/tutorial_nanodb.md b/docs/tutorial_nanodb.md index 67b00154..ef83a806 100644 --- a/docs/tutorial_nanodb.md +++ b/docs/tutorial_nanodb.md @@ -27,9 +27,7 @@ Let's run [NanoDB](https://github.com/dusty-nv/jetson-containers/blob/master/pac ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## How to start @@ -69,7 +67,7 @@ This allow you to skip the [indexing process](#indexing-data) in the next step, If you didn't download the [NanoDB index](#download-index) for COCO from above, we need to build the index by scanning your dataset directory: ``` -./run.sh $(./autotag nanodb) \ +jetson-containers run $(autotag nanodb) \ python3 -m nanodb \ --scan /data/datasets/coco/2017 \ --path /data/nanodb/coco/2017 \ @@ -98,7 +96,7 @@ You can press ++ctrl+c++ to exit. For more info about the various options availa Spin up the Gradio server: ``` -./run.sh $(./autotag nanodb) \ +jetson-containers run $(autotag nanodb) \ python3 -m nanodb \ --path /data/nanodb/coco/2017 \ --server --port=7860 diff --git a/docs/tutorial_slm.md b/docs/tutorial_slm.md index d3431d54..bc2882ec 100644 --- a/docs/tutorial_slm.md +++ b/docs/tutorial_slm.md @@ -42,19 +42,17 @@ Based on user interactions, the recommended models to try are [`stabilityai/stab ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` 5. If you had previously used [`local_llm`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm){:target="_blank"} container, update it first: - - `sudo docker pull $(./autotag local_llm)` + - `sudo docker pull $(autotag local_llm)` The [`local_llm.chat`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm#text-chat){:target="_blank"} program will automatically download and quantize models from HuggingFace like those listed in the table above: ```bash -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm.chat --api=mlc \ --model princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT ``` @@ -70,7 +68,7 @@ This will enter into interactive mode where you chat back and forth using the ke During testing, you can specify prompts on the command-line that will run sequentially: ```bash -./run.sh $(./autotag local_llm) \ +jetson-containers run $(autotag local_llm) \ python3 -m local_llm.chat --api=mlc \ --model stabilityai/stablelm-zephyr-3b \ --max-new-tokens 512 \ diff --git a/docs/tutorial_stable-diffusion.md b/docs/tutorial_stable-diffusion.md index 2f5011d9..4189cd48 100644 --- a/docs/tutorial_stable-diffusion.md +++ b/docs/tutorial_stable-diffusion.md @@ -25,30 +25,25 @@ Let's run AUTOMATIC1111's [`stable-diffusion-webui`](https://github.com/AUTOMATI ## Setup a container for stable-diffusion-webui -The [jetson-containers](https://github.com/dusty-nv/jetson-containers) project provides pre-built Docker images for [`stable-diffusion-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/diffusion/stable-diffusion-webui). 
You can clone the repo to use its utilities that will automatically pull/start the correct container for you, or you can do it [manually](https://github.com/dusty-nv/jetson-containers/tree/master/packages/diffusion/stable-diffusion-webui#user-content-run). +The [jetson-containers](https://github.com/dusty-nv/jetson-containers){:target="_blank"} project provides pre-built Docker images for [`stable-diffusion-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/diffusion/stable-diffusion-webui){:target="_blank"}. You can clone the repo to use its utilities that will automatically pull/start the correct container for you, or you can do it [manually](https://github.com/dusty-nv/jetson-containers/tree/master/packages/diffusion/stable-diffusion-webui#user-content-run){:target="_blank"}. ``` git clone https://github.com/dusty-nv/jetson-containers -cd jetson-containers -sudo apt update; sudo apt install -y python3-pip -pip3 install -r requirements.txt +bash jetson-containers/install.sh ``` !!! info - **JetsonHacks** provides an informative walkthrough video on [`jetson-containers`](https://github.com/dusty-nv/jetson-containers), showcasing the usage of both the [`stable-diffusion-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/diffusion/stable-diffusion-webui) and [`text-generation-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui) containers. You can find the complete article with detailed instructions [here](https://jetsonhacks.com/2023/09/04/use-these-jetson-docker-containers-tutorial/). + **JetsonHacks** provides an informative walkthrough video on jetson-containers, showcasing the usage of both the `stable-diffusion-webui` and `text-generation-webui` containers. You can find the complete article with detailed instructions [here](https://jetsonhacks.com/2023/09/04/use-these-jetson-docker-containers-tutorial/). ## How to start -If you are running this for the first time, go through the [pre-setup](https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md) and see the [`jetson-containers/stable-diffusion-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/diffusion/stable-diffusion-webui) readme. - -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image: +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image: ``` -cd jetson-containers -./run.sh $(./autotag stable-diffusion-webui) +jetson-containers run $(autotag stable-diffusion-webui) ``` The container has a default run command (`CMD`) that will automatically start the webserver like this: diff --git a/docs/tutorial_text-generation.md b/docs/tutorial_text-generation.md index 00bae771..63cee0b0 100644 --- a/docs/tutorial_text-generation.md +++ b/docs/tutorial_text-generation.md @@ -27,30 +27,25 @@ Interact with a local AI assistant by running a LLM with oobabooga's [`text-gene ## Set up a container for text-generation-webui -The [jetson-containers](https://github.com/dusty-nv/jetson-containers) project provides pre-built Docker images for [`text-generation-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui) along with all of the loader API's built with CUDA enabled (llama.cpp, ExLlama, AutoGPTQ, Transformers, ect). 
You can clone the repo to use its utilities that will automatically pull/start the correct container for you, or you can do it [manually](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui#user-content-run). +The [jetson-containers](https://github.com/dusty-nv/jetson-containers){:target="_blank"} project provides pre-built Docker images for [`text-generation-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui){:target="_blank"} along with all of the loader APIs built with CUDA enabled (llama.cpp, ExLlama, AutoGPTQ, Transformers, etc.). You can clone the repo to use its utilities that will automatically pull/start the correct container for you, or you can do it [manually](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui#user-content-run){:target="_blank"}. ``` -git clone --depth=1 https://github.com/dusty-nv/jetson-containers -cd jetson-containers -sudo apt update; sudo apt install -y python3-pip -pip3 install -r requirements.txt +git clone https://github.com/dusty-nv/jetson-containers +bash jetson-containers/install.sh ``` !!! info - **JetsonHacks** provides an informative walkthrough video on [`jetson-containers`](https://github.com/dusty-nv/jetson-containers), showcasing the usage of both the [`stable-diffusion-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/diffusion/stable-diffusion-webui) and [`text-generation-webui`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui) containers. You can find the complete article with detailed instructions [here](https://jetsonhacks.com/2023/09/04/use-these-jetson-docker-containers-tutorial/). + **JetsonHacks** provides an informative walkthrough video on jetson-containers, showcasing the usage of both the `stable-diffusion-webui` and `text-generation-webui` containers. You can find the complete article with detailed instructions [here](https://jetsonhacks.com/2023/09/04/use-these-jetson-docker-containers-tutorial/). ## How to start -> If you are running this for the first time, go through the [pre-setup](https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md) and see the [`jetson-containers/text-generation-webui`](https://github.com/dusty-nv/jetson-containers/blob/master/packages/llm/text-generation-webui/README.md) container readme. - -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image: +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image: ``` -cd jetson-containers -./run.sh $(./autotag text-generation-webui) +jetson-containers run $(autotag text-generation-webui) ``` The container has a default run command (`CMD`) that will automatically start the webserver like this: @@ -69,7 +64,7 @@ Open your browser and access `http://:7860`. 
See the [oobabooga documentation](https://github.com/oobabooga/text-generation-webui/tree/main#downloading-models) for instructions for downloading models - either from within the web UI, or using [`download-model.py`](https://github.com/oobabooga/text-generation-webui/blob/main/download-model.py) ```bash -./run.sh --workdir=/opt/text-generation-webui $(./autotag text-generation-webui) /bin/bash -c \ +jetson-containers run --workdir=/opt/text-generation-webui $(autotag text-generation-webui) /bin/bash -c \ 'python3 download-model.py --output=/data/models/text-generation-webui TheBloke/Llama-2-7b-Chat-GPTQ' ``` diff --git a/docs/tutorial_whisper.md b/docs/tutorial_whisper.md index c2050e85..5f60e051 100644 --- a/docs/tutorial_whisper.md +++ b/docs/tutorial_whisper.md @@ -25,9 +25,7 @@ Let's run OpenAI's [Whisper](https://github.com/openai/whisper), pre-trained mod ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## How to start @@ -35,8 +33,7 @@ Let's run OpenAI's [Whisper](https://github.com/openai/whisper), pre-trained mod -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image. +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image. ``` -cd jetson-containers -./run.sh $(./autotag whisper) +jetson-containers run $(autotag whisper) ``` The container has a default run command (`CMD`) that will automatically start the Jupyter Lab server, with SSL enabled. diff --git a/docs/vit/tutorial_efficientvit.md b/docs/vit/tutorial_efficientvit.md index 57f8d039..3450c574 100644 --- a/docs/vit/tutorial_efficientvit.md +++ b/docs/vit/tutorial_efficientvit.md @@ -14,7 +14,8 @@ Let's run MIT Han Lab's [EfficientViT](https://github.com/mit-han-lab/efficientv 2. Running one of the following versions of [JetPack](https://developer.nvidia.com/embedded/jetpack): JetPack 5 (L4T r35.x) - + JetPack 6 (L4T r36.x) + 3. Sufficient storage space (preferably with NVMe SSD). - `10.9 GB` for `efficientvit` container image @@ -24,25 +25,20 @@ Let's run MIT Han Lab's [EfficientViT](https://github.com/mit-han-lab/efficientv ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## How to start -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image. +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image. ``` -cd jetson-containers -./run.sh $(./autotag efficientvit) +jetson-containers run $(autotag efficientvit) ``` ## Usage of EfficientViT -The official EfficientViT repo shows the complete usage information. - -[https://github.com/mit-han-lab/efficientvit#usage](https://github.com/mit-han-lab/efficientvit#usage) +The official EfficientViT repo shows the complete usage information: [`https://github.com/mit-han-lab/efficientvit#usage`](https://github.com/mit-han-lab/efficientvit#usage) ## Run example/benchmark @@ -81,7 +77,7 @@ Memory consumption : 3419.68 MB The output image file (of the last inference result) is stored as `/data/benchmarks/efficientvit_sam_demo.png`. It is stored under `/data/` directory that is mounted from the Docker host.
-So you can go back to your host machine, and check `./jetson-containers/data/benchmark/` directory. +So you can go back to your host machine, and check the `jetson-containers/data/benchmarks/` directory. You should find the output like this. diff --git a/docs/vit/tutorial_nanoowl.md b/docs/vit/tutorial_nanoowl.md index 81d0b8d9..f58510e5 100644 --- a/docs/vit/tutorial_nanoowl.md +++ b/docs/vit/tutorial_nanoowl.md @@ -27,18 +27,15 @@ Let's run [NanoOWL](https://github.com/NVIDIA-AI-IOT/nanoowl), [OWL-ViT](https:/ ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## How to start -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image. +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image. ``` -cd jetson-containers -./run.sh $(./autotag nanoowl) +jetson-containers run $(autotag nanoowl) ``` ## How to run the tree prediction (live camera) example diff --git a/docs/vit/tutorial_nanosam.md b/docs/vit/tutorial_nanosam.md index 4f88b086..48c71120 100644 --- a/docs/vit/tutorial_nanosam.md +++ b/docs/vit/tutorial_nanosam.md @@ -27,18 +27,15 @@ Let's run NVIDIA's [NanoSAM](https://github.com/NVIDIA-AI-IOT/nanosam) to check ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## How to start -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image. +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image. ``` -cd jetson-containers -./run.sh $(./autotag nanosam) +jetson-containers run $(autotag nanosam) ``` ## Run examples @@ -65,7 +62,7 @@ To check on your host machine, you can copy that into `/data` directory of the c cp data/basic_usage_out.jpg /data/ ``` -Then you can go to your host system, and find the file under the `jetson_containers`' `data` directory, like `jetson_containers/data/basic_usage_out.jpg`. +Then you can go to your host system, and find the file under `jetson-containers/data/basic_usage_out.jpg`. ## Results diff --git a/docs/vit/tutorial_sam.md b/docs/vit/tutorial_sam.md index 4e8b43f8..45dd6bb6 100644 --- a/docs/vit/tutorial_sam.md +++ b/docs/vit/tutorial_sam.md @@ -27,9 +27,7 @@ Let's run Meta's [`SAM`](https://github.com/facebookresearch/segment-anything) o ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` [^1]: The biggest `vit_h` (2.4GB) model may not ran due to OOM, but `vit_l` (1.1GB) runs on Jetson Orin Nano. ## How to start -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image. +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image. ``` -cd jetson-containers -./run.sh $(./autotag sam) +jetson-containers run $(autotag sam) ``` The container has a default run command (`CMD`) that will automatically start the Jupyter Lab server. 
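Several of the pages touched above (AudioCraft, Whisper, SAM) note that the container has a default run command (`CMD`) that starts a Jupyter Lab server. The same `jetson-containers run` invocation also accepts an explicit command that overrides that default, following the `/bin/bash -c '...'` form the MiniGPT-4 and llama.cpp examples in this patch already use. A minimal sketch, assuming the `sam` container from the tutorial above (any of the container names in these pages would work the same way):

```bash
# Override the default Jupyter Lab CMD with an interactive shell
jetson-containers run $(autotag sam) /bin/bash

# Or run a single command and exit; /data is the volume mounted from the
# host's jetson-containers/data directory, as described in these tutorials
jetson-containers run $(autotag sam) /bin/bash -c 'ls /data'
```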
diff --git a/docs/vit/tutorial_tam.md b/docs/vit/tutorial_tam.md index 52ade2d5..5d8c7f0f 100644 --- a/docs/vit/tutorial_tam.md +++ b/docs/vit/tutorial_tam.md @@ -24,18 +24,15 @@ Let's run [`TAM`](https://github.com/gaomingqi/Track-Anything) to perform Segmen ```bash git clone https://github.com/dusty-nv/jetson-containers - cd jetson-containers - sudo apt update; sudo apt install -y python3-pip - pip3 install -r requirements.txt + bash jetson-containers/install.sh ``` ## How to start -Use `run.sh` and `autotag` script to automatically pull or build a compatible container image. +Use the `jetson-containers run` and `autotag` commands to automatically pull or build a compatible container image. ``` -cd jetson-containers -./run.sh $(./autotag tam) +jetson-containers run $(autotag tam) ``` The container has a default run command (`CMD`) that will automatically start TAM's web server. @@ -88,5 +85,5 @@ mv E2FGVI-HQ-CVPR22.pth ./data/models/tam/ And you can try running the TAM container. ``` -./run.sh $(./autotag tam) +jetson-containers run $(autotag tam) ``` \ No newline at end of file
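Taken together, the changes in this patch follow one pattern: a one-time `bash jetson-containers/install.sh` replaces the manual `pip3 install -r requirements.txt` setup, and the installed `jetson-containers run` and `autotag` commands replace the in-repo `./run.sh` and `./autotag` scripts (so no `cd jetson-containers` is needed before each command). A condensed sketch of the new workflow, reusing only commands that appear in the pages above (the volume/workdir example comes from the transformers tutorial):

```bash
# One-time setup: clone the repo and install the jetson-containers CLI tools
git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh

# autotag resolves a container image compatible with the local JetPack/L4T
# release (pulling or building one as needed), and jetson-containers run
# starts it with the NVIDIA runtime and the /data mount already configured
jetson-containers run $(autotag l4t-text-generation)

# The same pattern with a mounted host directory and an explicit command
jetson-containers run --volume $PWD/packages/llm:/mount --workdir /mount \
  $(autotag l4t-text-generation) \
  python3 transformers/test.py
```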