From 0866c52e618e86107eb380bd1e7ac8dd398feae8 Mon Sep 17 00:00:00 2001
From: pytorchbot
Date: Fri, 20 Sep 2024 14:01:56 -0700
Subject: [PATCH] Update GH link in docs (#5496)

* Update GH link in docs (#5493)

Summary:
Should use the raw link instead of GH web link

Pull Request resolved: https://github.com/pytorch/executorch/pull/5493

Reviewed By: shoumikhin

Differential Revision: D63040432

Pulled By: kirklandsign

fbshipit-source-id: f6b8f1ec4fe2d7ac1c5f25cc1c727279a9d20065
(cherry picked from commit 16673f964912169261bfbaa46f7147a24766cc9b)

* Fix link

---------

Co-authored-by: Hansong Zhang
Co-authored-by: Hansong Zhang
---
 examples/demo-apps/android/LlamaDemo/README.md | 12 ++++++------
 examples/demo-apps/apple_ios/LLaMA/README.md   |  8 ++++----
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/examples/demo-apps/android/LlamaDemo/README.md b/examples/demo-apps/android/LlamaDemo/README.md
index 397d677a5b..41b030cef0 100644
--- a/examples/demo-apps/android/LlamaDemo/README.md
+++ b/examples/demo-apps/android/LlamaDemo/README.md
@@ -46,7 +46,7 @@ Below are the UI features for the app.

Select the settings widget to get started with picking a model, its parameters and any prompts.

[image diff: settings widget screenshot; GitHub web URL replaced with raw URL]

@@ -55,7 +55,7 @@ Select the settings widget to get started with picking a model, its parameters and any prompts.

Once you've selected the model, tokenizer, and model type, you are ready to click on "Load Model" to have the app load the model and go back to the main Chat activity.

[image diff: model load screenshot; GitHub web URL replaced with raw URL]

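For context on what "Load Model" triggers, the hunk below quotes `int loadResult = mModule.load();` from the README. Here is a minimal sketch of that load step, assuming the `LlamaModule` wrapper from the ExecuTorch Android extension; the package name, constructor signature, and zero-on-success return convention are assumptions, not something this patch confirms:

```java
import org.pytorch.executorch.LlamaModule; // package/class name assumed

public class ModelLoader {
    // Mirrors what the "Load Model" button does: construct the module from the
    // files picked in Settings, then load it. Constructor args are assumptions.
    public static LlamaModule load(String modelPath, String tokenizerPath) {
        LlamaModule mModule = new LlamaModule(modelPath, tokenizerPath, 0.8f /* temperature, assumed */);
        int loadResult = mModule.load(); // call quoted verbatim in the hunk below
        if (loadResult != 0) {
            throw new RuntimeException("Model load failed: " + loadResult);
        }
        return mModule;
    }
}
```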
@@ -87,12 +87,12 @@ int loadResult = mModule.load();

### User Prompt
Once the model is successfully loaded, enter any prompt and click the send (i.e. generate) button to send it to the model.

[image diff: user prompt screenshot; GitHub web URL replaced with raw URL]

You can ask it follow-up questions as well.

[image diff: follow-up question screenshot; GitHub web URL replaced with raw URL]

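The send (generate) flow above boils down to the `mModule.generate(prompt, sequence_length, MainActivity.this)` call quoted in the next hunk. A sketch of how the activity might wire that up; the `LlamaCallback` interface and its `onResult`/`onStats` methods are assumptions, only the generate call itself appears in this patch:

```java
import android.app.Activity;
import org.pytorch.executorch.LlamaCallback; // interface name assumed
import org.pytorch.executorch.LlamaModule;

public class MainActivity extends Activity implements LlamaCallback {
    private LlamaModule mModule; // loaded earlier; see the load sketch above
    private static final int SEQUENCE_LENGTH = 256; // max tokens to generate; illustrative

    private void onSendClicked(String prompt) {
        // The activity passes itself as the callback, as in the hunk context below.
        mModule.generate(prompt, SEQUENCE_LENGTH, MainActivity.this);
    }

    @Override
    public void onResult(String token) {
        // Assumed streaming callback: append each decoded token to the chat UI.
        runOnUiThread(() -> { /* update chat view with token */ });
    }

    @Override
    public void onStats(float tokensPerSecond) {
        // Assumed stats callback: could surface tokens/sec in the UI.
    }
}
```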
> [!TIP]

@@ -109,14 +109,14 @@ mModule.generate(prompt,sequence_length, MainActivity.this);

For the LLaVA-1.5 implementation, select the exported LLaVA .pte and tokenizer file in the Settings menu and load the model. After this, you can send an image from your gallery or take a live picture, along with a text prompt, to the model.

[image diff: LLaVA image-and-prompt screenshot; GitHub web URL replaced with raw URL]

### Output Generated
To show completion of the follow-up question, here is the complete detailed response from the model.

[image diff: complete model response screenshot; GitHub web URL replaced with raw URL]

> [!TIP]

diff --git a/examples/demo-apps/apple_ios/LLaMA/README.md b/examples/demo-apps/apple_ios/LLaMA/README.md
index efa22dcfa7..b26dd11203 100644
--- a/examples/demo-apps/apple_ios/LLaMA/README.md
+++ b/examples/demo-apps/apple_ios/LLaMA/README.md
@@ -51,11 +51,11 @@ rm -rf \

* Ensure that the ExecuTorch package dependencies are installed correctly, then select which ExecuTorch framework should link against which target.

[image diff: "iOS LLaMA App Swift PM" screenshot; GitHub web URL replaced with raw URL]

[image diff: "iOS LLaMA App Choosing package" screenshot; GitHub web URL replaced with raw URL]

* Run the app. This builds and launches the app on the phone.

@@ -76,13 +76,13 @@ rm -rf \

If the app runs successfully on your device, you should see something like the following:

[image diff: "iOS LLaMA App" screenshot; GitHub web URL replaced with raw URL]

For LLaVA 1.5 models, you can select an image (via the image/camera selector button) before typing the prompt and tapping the send button.

[image diff: "iOS LLaMA App" LLaVA screenshot; GitHub web URL replaced with raw URL]

## Reporting Issues