Merge branch 'main' into add-inference-gpu-windows-docs

sberan authored Aug 7, 2024
2 parents 0ed1c9a + 136c33c commit a070b73
Showing 11 changed files with 435 additions and 168 deletions.
1 change: 1 addition & 0 deletions CODEOWNERS
@@ -1 +1,2 @@
 * @PawelPeczek-Roboflow @grzegorz-roboflow @yeldarby @probicheaux @hansent
+/docs/ @capjamesg
6 changes: 5 additions & 1 deletion docs/foundation/cogvlm.md
@@ -26,9 +26,13 @@ We recommend using CogVLM paired with inference HTTP API adjusted to run in GPU
 with our `inference-cli` tool. Run the following command to set up environment and run the API under
 `http://localhost:9001`
 
+!!! warning
+    Make sure that you are running this on a machine with an NVIDIA GPU! Otherwise CogVLM will not be available.
+
+
 ```bash
 pip install inference inference-cli inference-sdk
-inference server start # make sure that you are running this at machine with GPU! Otherwise CogVLM will not be available
+inference server start
 ```
 
 Let's ask a question about the following image:
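Once the server started by the snippet above is running, questions are sent to it over HTTP. The sketch below is hypothetical: the `/llm/cogvlm` route and the payload field names are illustrative assumptions, not the documented API, so check the inference HTTP API reference before relying on them.

```python
import base64


def build_cogvlm_payload(image_bytes: bytes, prompt: str, api_key: str = "") -> dict:
    # base64-encode the image so it can travel inside a JSON body
    image_b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "api_key": api_key,
        "image": {"type": "base64", "value": image_b64},
        "prompt": prompt,
    }


def ask_cogvlm(image_path: str, prompt: str) -> dict:
    # local import so the payload helper above works without requests installed
    import requests

    with open(image_path, "rb") as f:
        payload = build_cogvlm_payload(f.read(), prompt)
    # the route below is an assumption, not the documented endpoint
    response = requests.post("http://localhost:9001/llm/cogvlm", json=payload)
    response.raise_for_status()
    return response.json()
```

For example, `ask_cogvlm("people-walking.jpg", "How many people are in this image?")` would return the decoded JSON response if the server and route behave as assumed.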
31 changes: 19 additions & 12 deletions docs/quickstart/run_a_model.md
@@ -41,13 +41,24 @@ In the code above, we loaded a model and then we used that model's `infer(...)`
 Running inference is fun but it's not much to look at. Let's add some code to visualize our results.
 
 ```python
-from inference import get_model
+from io import BytesIO
+
+import requests
 import supervision as sv
-import cv2
+from inference import get_model
+from PIL import Image
+from PIL.ImageFile import ImageFile
 
-# define the image url to use for inference
-image_file = "people-walking.jpg"
-image = cv2.imread(image_file)
+
+def load_image_from_url(url: str) -> ImageFile:
+    response = requests.get(url)
+    response.raise_for_status()  # check if the request was successful
+    image = Image.open(BytesIO(response.content))
+    return image
+
+
+# load the image from a URL
+image = load_image_from_url("https://media.roboflow.com/inference/people-walking.jpg")
 
 # load a pre-trained yolov8n model
 model = get_model(model_id="yolov8n-640")
@@ -59,21 +70,17 @@
 results = model.infer(image)[0]
 
 detections = sv.Detections.from_inference(results)
 
 # create supervision annotators
-bounding_box_annotator = sv.BoundingBoxAnnotator()
+bounding_box_annotator = sv.BoxAnnotator()
 label_annotator = sv.LabelAnnotator()
 
 # annotate the image with our inference results
-annotated_image = bounding_box_annotator.annotate(
-    scene=image, detections=detections)
-annotated_image = label_annotator.annotate(
-    scene=annotated_image, detections=detections)
+annotated_image = bounding_box_annotator.annotate(scene=image, detections=detections)
+annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)
 
 # display the image
 sv.plot_image(annotated_image)
 ```
 
-The `people-walking.jpg` file is hosted <a href="https://media.roboflow.com/inference/people-walking.jpg" target="_blank">here</a>.
-
 ![People Walking Annotated](https://storage.googleapis.com/com-roboflow-marketing/inference/people-walking-annotated.jpg)
 
 ## Summary
2 changes: 1 addition & 1 deletion inference/core/version.py
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
__version__ = "0.15.2"
__version__ = "0.15.4"


if __name__ == "__main__":
Expand Down
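The only change here is the version bump from 0.15.2 to 0.15.4. Code that needs to gate behavior on an installed version can turn such a string into a comparable tuple; this is a minimal sketch (real projects typically reach for `packaging.version` instead, which also handles pre-release tags).

```python
def parse_version(version: str) -> tuple:
    # "0.15.4" -> (0, 15, 4); tuples compare element-wise, so
    # (0, 15, 4) > (0, 15, 2) and (0, 15, 10) > (0, 15, 9)
    return tuple(int(part) for part in version.split("."))


assert parse_version("0.15.4") > parse_version("0.15.2")
```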
2 changes: 1 addition & 1 deletion inference/models/__init__.py
@@ -4,8 +4,8 @@
     CORE_MODEL_DOCTR_ENABLED,
     CORE_MODEL_GAZE_ENABLED,
     CORE_MODEL_GROUNDINGDINO_ENABLED,
-    CORE_MODEL_SAM_ENABLED,
     CORE_MODEL_SAM2_ENABLED,
+    CORE_MODEL_SAM_ENABLED,
     CORE_MODEL_YOLO_WORLD_ENABLED,
     CORE_MODELS_ENABLED,
 )
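The reordered names above are feature flags that gate which core models the server loads; in projects like this they are usually derived from environment variables. A generic sketch of that pattern follows; the helper below is illustrative and is not the library's actual implementation.

```python
import os


def str2bool(value: str) -> bool:
    # interpret common truthy strings coming from environment variables
    return value.strip().lower() in {"1", "true", "yes", "on"}


# default to enabled unless the environment explicitly turns the model off
CORE_MODEL_SAM2_ENABLED = str2bool(os.environ.get("CORE_MODEL_SAM2_ENABLED", "True"))
```

With this pattern, `CORE_MODEL_SAM2_ENABLED=0 inference server start` would skip loading that model, while leaving the variable unset keeps it on.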
