From 89fc86c1a0ea8f9e6eaabf39d433015dc6681c0f Mon Sep 17 00:00:00 2001
From: GitHub Actions
Date: Tue, 15 Oct 2024 09:28:48 +0000
Subject: [PATCH] Deployed 94204e1dc74b17fe08eea34b7442727849593171 with
 mkdocs-material v9.5.40 and HTML post-processing

---
 lerobot.html             | 12 ++++++++----
 search/search_index.json |  2 +-
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/lerobot.html b/lerobot.html
index d0cc90e..1f15f88 100644
--- a/lerobot.html
+++ b/lerobot.html
@@ -1064,7 +1064,10 @@

-JetPack 6 (L4T r36.x)
+JetPack 6 GA (L4T r36.3)
+
+
+JetPack 6.1 (L4T r36.4)

@@ -1815,13 +1818,14 @@

 --robot-path lerobot/configs/robot/koch.yaml \
 --fps 30 \
 --root data \
---repo-id ${HF_USER}/koch_test_01 \
---tags tutorial \
+--repo-id ${HF_USER}/eval_koch_test_01 \
+--tags tutorial eval \
 --warmup-time-s 5 \
 --episode-time-s 30 \
 --reset-time-s 30 \
 --num-episodes 10 \
--p outputs/train/act_koch_test/checkpoints/last/pretrained_model
+-p outputs/train/act_koch_test/checkpoints/last/pretrained_model \
+--run-compute-stats 0

diff --git a/search/search_index.json b/search/search_index.json
index acece88..ab7d529 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"agent_studio.html","title":"Agent Studio","text":"

Rapidly design and experiment with creating your own automation agents, personal assistants, and edge AI systems in an interactive sandbox for connecting multimodal LLMs, speech and vision transformers, vector databases, prompt templating, and function calling to live sensors and I/O. Optimized for deployment onboard Jetson with on-device compute, low-latency streaming, and unified memory.

• Edge LLM inference with quantization and KV caching (NanoLLM)
• Realtime vision/language models (à la Live Llava and Video VILA)
• Speech recognition and synthesis (Whisper ASR, Piper TTS, Riva)
• Multimodal vector database from NanoDB
• Audio and video streaming (WebRTC, RTP, RTSP, V4L2)
• Performance monitoring and profiling
• Native bot-callable functions and agent tools
• Extensible plugins with auto-generated UI controls
• Save, load, and export pipeline presets"},{"location":"agent_studio.html#running-agent-studio","title":"Running Agent Studio","text":"

What you need

  1. One of the following Jetson devices:

    Jetson AGX Orin (64GB)
    Jetson AGX Orin (32GB)
    Jetson Orin NX (16GB)
    Jetson Orin Nano (8GB)

  2. Running one of the following versions of JetPack:

    JetPack 6 (L4T r36.x)

  3. Sufficient storage space (preferably with NVMe SSD).

    • 22GB for the nano_llm container image
    • Space for models (>5GB)
  4. Clone and set up jetson-containers:

    git clone https://github.com/dusty-nv/jetson-containers
    bash jetson-containers/install.sh
jetson-containers run --env HUGGINGFACE_TOKEN=hf_xyz123abc456 \
  $(autotag nano_llm) \
    python3 -m nano_llm.studio
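Before pulling the container, the storage requirement from step 3 can be checked programmatically. A minimal sketch in Python, where the 27 GB threshold is simply the 22 GB image plus the >5 GB model estimate from the list above (the mount path is illustrative):

```python
# Sanity-check free space for the ~22 GB nano_llm image plus >5 GB of models,
# per the storage notes above. Path and threshold are illustrative.
import shutil

REQUIRED_GB = 22 + 5  # container image + model downloads (rough lower bound)

def has_room(path: str = "/", required_gb: float = REQUIRED_GB) -> bool:
    """Return True if the filesystem holding `path` has enough free space."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= required_gb
```

On a Jetson with the NVMe SSD mounted at, say, `/ssd`, you would call `has_room("/ssd")` before running the container.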

This starts the Agent Studio server on your device. You can then navigate your browser to https://IP_ADDRESS:8050, substituting the IP address of your Jetson.
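If you want to script a reachability check for the UI rather than polling the browser, a sketch with Python's standard library (the `studio_url`/`is_up` helpers are hypothetical, and the check assumes the server presents a self-signed certificate, so verification is skipped):

```python
# Hypothetical helpers for checking the Agent Studio endpoint; assumes the
# server started above is reachable and may use a self-signed certificate.
import ssl
import urllib.request

def studio_url(ip: str, port: int = 8050) -> str:
    """Build the browser URL for a device's Agent Studio instance."""
    return f"https://{ip}:{port}"

def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the URL answers within the timeout."""
    # Skip certificate verification, since the cert is assumed self-signed.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        urllib.request.urlopen(url, timeout=timeout, context=ctx)
        return True
    except Exception:
        return False
```

For example, `is_up(studio_url("192.168.1.10"))` from another machine on the LAN tells you whether the server is accepting connections yet.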