Commit
Merge pull request #544 from komyg/update-quickstart-docs
Update docs for easier quickstart
PawelPeczek-Roboflow authored Aug 6, 2024
2 parents 04da966 + 60983a6 commit 8236922
Showing 3 changed files with 35 additions and 23 deletions.
6 changes: 5 additions & 1 deletion docs/foundation/cogvlm.md
@@ -26,9 +26,13 @@ We recommend using CogVLM paired with inference HTTP API adjusted to run in GPU
  with our `inference-cli` tool. Run the following command to set up environment and run the API under
  `http://localhost:9001`

+ !!! warning
+     Make sure that you are running this on a machine with an NVIDIA GPU! Otherwise, CogVLM will not be available.
+
+
  ```bash
  pip install inference inference-cli inference-sdk
- inference server start # make sure that you are running this at machine with GPU! Otherwise CogVLM will not be available
+ inference server start
  ```

  Let's ask a question about the following image:
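The new warning above asks the reader to confirm an NVIDIA GPU is present before starting the server. That check can also be done programmatically; the sketch below is an assumption of mine, not part of `inference-cli`, and simply probes for the `nvidia-smi` binary that ships with the NVIDIA driver.

```python
import shutil
import subprocess


def has_nvidia_gpu() -> bool:
    """Best-effort check for a usable NVIDIA GPU on this machine."""
    # `nvidia-smi` is installed with the NVIDIA driver; if it is missing,
    # there is almost certainly no usable NVIDIA GPU here.
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # a zero exit code means the driver could talk to at least one GPU
        result = subprocess.run(["nvidia-smi"], capture_output=True, timeout=10)
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0


if not has_nvidia_gpu():
    print("No NVIDIA GPU detected - CogVLM will not be available.")
```

On a CPU-only machine this prints the warning instead of letting the server start and fail later.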
31 changes: 19 additions & 12 deletions docs/quickstart/run_a_model.md
@@ -41,13 +41,24 @@ In the code above, we loaded a model and then we used that model's `infer(...)`
  Running inference is fun but it's not much to look at. Let's add some code to visualize our results.

  ```python
- from inference import get_model
+ from io import BytesIO
+
+ import requests
  import supervision as sv
- import cv2
+ from inference import get_model
+ from PIL import Image
+ from PIL.ImageFile import ImageFile

- # define the image url to use for inference
- image_file = "people-walking.jpg"
- image = cv2.imread(image_file)
+
+ def load_image_from_url(url: str) -> ImageFile:
+     response = requests.get(url)
+     response.raise_for_status()  # check if the request was successful
+     image = Image.open(BytesIO(response.content))
+     return image
+
+
+ # load the image from a url
+ image = load_image_from_url("https://media.roboflow.com/inference/people-walking.jpg")

  # load a pre-trained yolov8n model
  model = get_model(model_id="yolov8n-640")
@@ -59,21 +70,17 @@ results = model.infer(image)[0]
  detections = sv.Detections.from_inference(results)

  # create supervision annotators
- bounding_box_annotator = sv.BoundingBoxAnnotator()
+ bounding_box_annotator = sv.BoxAnnotator()
  label_annotator = sv.LabelAnnotator()

  # annotate the image with our inference results
- annotated_image = bounding_box_annotator.annotate(
-     scene=image, detections=detections)
- annotated_image = label_annotator.annotate(
-     scene=annotated_image, detections=detections)
+ annotated_image = bounding_box_annotator.annotate(scene=image, detections=detections)
+ annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)

  # display the image
  sv.plot_image(annotated_image)
  ```

- The `people-walking.jpg` file is hosted <a href="https://media.roboflow.com/inference/people-walking.jpg" target="_blank">here</a>.
-
  ![People Walking Annotated](https://storage.googleapis.com/com-roboflow-marketing/inference/people-walking-annotated.jpg)

  ## Summary
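The `load_image_from_url` helper added above pairs a download step with a decode step, failing fast on a bad HTTP status before any decoding happens. The same fetch-then-validate pattern can be sketched with only the standard library; the names `fetch_bytes` and `looks_like_png` are hypothetical, and the byte source is injectable so the sketch runs without any network access.

```python
from io import BytesIO
from urllib.request import urlopen

# first eight bytes of every valid PNG file
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"


def fetch_bytes(source) -> bytes:
    """Read raw bytes from a URL string or any file-like object."""
    if isinstance(source, str):
        with urlopen(source) as response:
            # analogue of requests' response.raise_for_status()
            if response.status != 200:
                raise RuntimeError(f"request failed with HTTP {response.status}")
            return response.read()
    return source.read()


def looks_like_png(data: bytes) -> bool:
    # cheap sanity check before handing the bytes to an image decoder
    return data.startswith(PNG_MAGIC)


# no network needed: a file-like object stands in for a URL response body
payload = fetch_bytes(BytesIO(PNG_MAGIC + b"rest-of-file"))
print(looks_like_png(payload))  # True
```

Separating the fetch from the validation also makes the helper easy to unit-test, which is harder when the download and `Image.open` call are fused into one expression.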
21 changes: 11 additions & 10 deletions mkdocs.yml
@@ -33,10 +33,10 @@ nav:
    - Enterprise Features: enterprise/enterprise.md
    - Inference Basics:
        - Roboflow Ecosystem: quickstart/roboflow_ecosystem.md
-       - "Models: Popular": quickstart/aliases.md
-       - "Models: Fine-tuned": quickstart/explore_models.md
-       - "Models: Universe": quickstart/load_from_universe.md
-       - "Models: Local Weights": models/from_local_weights.md
+       - 'Models: Popular': quickstart/aliases.md
+       - 'Models: Fine-tuned': quickstart/explore_models.md
+       - 'Models: Universe': quickstart/load_from_universe.md
+       - 'Models: Local Weights': models/from_local_weights.md
    - Supported Fine-Tuned Models:
        - YOLOv10: fine-tuned/yolov10.md
        - YOLOv9: fine-tuned/yolov9.md
@@ -56,6 +56,7 @@ nav:
        - Segment Anything 2 (Segmentation): foundation/sam2.md
        - YOLO-World (Object Detection): foundation/yolo_world.md
    - Run a Model:
+       - Getting started: quickstart/run_a_model.md
        - Predict on an Image Over HTTP: quickstart/run_model_on_image.md
        - Predict on a Video, Webcam or RTSP Stream: quickstart/run_model_on_rtsp_webcam.md
        - Predict Over UDP: quickstart/run_model_over_udp.md
@@ -104,7 +105,7 @@ nav:
    - Cookbooks: cookbooks.md

theme:
-   name: "material"
+   name: 'material'
    logo: inference-icon.png
    favicon: inference-icon.png
    custom_dir: docs/theme
@@ -124,13 +125,13 @@ theme:

    palette:
        - scheme: default
-         primary: "custom"
+         primary: 'custom'
          toggle:
              icon: material/brightness-7
              name: Switch to dark mode

        - scheme: slate
-         primary: "custom"
+         primary: 'custom'
          toggle:
              icon: material/brightness-4
              name: Switch to light mode
@@ -166,6 +167,6 @@ markdown_extensions:
        permalink: true

extra_javascript:
-   - "https://widget.kapa.ai/kapa-widget.bundle.js"
-   - "javascript/init_kapa_widget.js"
-   - "javascript/cookbooks.js"
+   - 'https://widget.kapa.ai/kapa-widget.bundle.js'
+   - 'javascript/init_kapa_widget.js'
+   - 'javascript/cookbooks.js'
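The mkdocs.yml changes above register `quickstart/run_a_model.md` as a new "Getting started" entry in the nav. As a nav like this grows, it is easy to point two titles at the same page by accident; a small consistency check can be sketched as follows. The entries are a hand-copied subset of the nav above, and the helper name is hypothetical, not part of MkDocs.

```python
# (title, target) pairs hand-copied from the nav section above
nav_entries = [
    ("Models: Popular", "quickstart/aliases.md"),
    ("Models: Fine-tuned", "quickstart/explore_models.md"),
    ("Models: Universe", "quickstart/load_from_universe.md"),
    ("Models: Local Weights", "models/from_local_weights.md"),
    ("Getting started", "quickstart/run_a_model.md"),
]


def duplicate_targets(entries):
    """Return doc paths that appear under more than one nav title."""
    seen, dupes = set(), []
    for _, path in entries:
        if path in seen and path not in dupes:
            dupes.append(path)
        seen.add(path)
    return dupes


print(duplicate_targets(nav_entries))  # []
```

Running the check after every nav edit catches the case where a page is re-titled in one place but the old entry is left behind.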
