Update GH link in docs (#5496)
* Update GH link in docs (#5493)

Summary:
Should use the raw link instead of GH web link

Pull Request resolved: #5493

Reviewed By: shoumikhin

Differential Revision: D63040432

Pulled By: kirklandsign

fbshipit-source-id: f6b8f1ec4fe2d7ac1c5f25cc1c727279a9d20065
(cherry picked from commit 16673f9)

* Fix link

---------

Co-authored-by: Hansong Zhang <[email protected]>
Co-authored-by: Hansong Zhang <[email protected]>
3 people authored Sep 20, 2024
1 parent ffbe90b commit 0866c52
Showing 2 changed files with 10 additions and 10 deletions.
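
The change is mechanical: every GitHub `blob` web URL embedded in an `<img>` tag is rewritten to the equivalent `raw.githubusercontent.com` URL, so the tag fetches the image bytes directly instead of a GitHub HTML page. As a rough illustration only (not part of the commit), a small Python sketch of that rewrite might look like the following; the `to_raw_url` helper is hypothetical:

```python
import re

def to_raw_url(blob_url: str) -> str:
    """Rewrite a GitHub 'blob' web URL into the equivalent raw.githubusercontent.com URL."""
    return re.sub(
        r"https://github\.com/([^/]+)/([^/]+)/blob/([^/]+)/(.+)",
        r"https://raw.githubusercontent.com/\1/\2/refs/heads/\3/\4",
        blob_url,
    )

# Example: one of the URLs touched by this commit.
print(to_raw_url(
    "https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/chat.png"
))
# https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/chat.png
```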
12 changes: 6 additions & 6 deletions examples/demo-apps/android/LlamaDemo/README.md
@@ -46,7 +46,7 @@ Below are the UI features for the app.

Select the settings widget to get started with picking a model, its parameters and any prompts.
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/opening_the_app_details.png" width=800>
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/opening_the_app_details.png" width=800>
</p>


@@ -55,7 +55,7 @@ Select the settings widget to get started with picking a model, its parameters a

Once you've selected the model, tokenizer, and model type you are ready to click on "Load Model" to have the app load the model and go back to the main Chat activity.
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/settings_menu.png" width=300>
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/settings_menu.png" width=300>
</p>


@@ -87,12 +87,12 @@ int loadResult = mModule.load();
### User Prompt
Once the model is successfully loaded, enter any prompt and click the send (i.e. generate) button to send it to the model.
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/load_complete_and_start_prompt.png" width=300>
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/load_complete_and_start_prompt.png" width=300>
</p>

You can provide it more follow-up questions as well.
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/chat.png" width=300>
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/chat.png" width=300>
</p>

> [!TIP]
@@ -109,14 +109,14 @@ mModule.generate(prompt,sequence_length, MainActivity.this);
For LLaVA-1.5 implementation, select the exported LLaVA .pte and tokenizer file in the Settings menu and load the model. After this you can send an image from your gallery or take a live picture along with a text prompt to the model.

<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/llava_example.png" width=300>
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/llava_example.png" width=300>
</p>


### Output Generated
To show completion of the follow-up question, here is the complete detailed response from the model.
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/chat_response.png" width=300>
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/chat_response.png" width=300>
</p>

> [!TIP]
8 changes: 4 additions & 4 deletions examples/demo-apps/apple_ios/LLaMA/README.md
@@ -51,11 +51,11 @@ rm -rf \
* Ensure that the ExecuTorch package dependencies are installed correctly, then select which ExecuTorch framework should link against which target.
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" width="600">
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" width="600">
</p>
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app_choosing_package.png" alt="iOS LLaMA App Choosing package" width="600">
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_choosing_package.png" alt="iOS LLaMA App Choosing package" width="600">
</p>
* Run the app. This builds and launches the app on the phone.
@@ -76,13 +76,13 @@ rm -rf \
If the app runs successfully on your device, you should see something like the following:
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app.jpg" alt="iOS LLaMA App" width="300">
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app.jpg" alt="iOS LLaMA App" width="300">
</p>
For LLaVA 1.5 models, you can select an image (via the image/camera selector button) before typing a prompt and tapping the send button.
<p align="center">
<img src="https://github.com/pytorch/executorch/blob/main/docs/source/_static/img/ios_demo_app_llava.jpg" alt="iOS LLaMA App" width="300">
<img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_llava.jpg" alt="iOS LLaMA App" width="300">
</p>
## Reporting Issues
