fix: Realease #3

Merged (3 commits) on Nov 13, 2023
14 changes: 14 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,14 @@
## Description of the goal of the PR


## Changes this PR introduces (fill in before implementation)

- [ ]

## Checklist before requesting a review
- [ ] The CI pipeline passes
- [ ] I have typed my code
- [ ] I have created / updated the docstrings
- [ ] I have updated the README, if relevant
- [ ] I have updated the requirements, if relevant
- [ ] I have tested my code
35 changes: 35 additions & 0 deletions .github/workflows/ci.yaml
@@ -0,0 +1,35 @@
name: CI

on:
  push:
    branches:
      - 'develop'
  pull_request:
    branches:
      - '*'
  workflow_call:

jobs:
  CI:
    name: Launching CI
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10']

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install poetry
        run: make download-poetry

      - name: Install requirements
        run: |
          poetry install

      - name: Run Pre commit hooks
        run: make format-code
37 changes: 37 additions & 0 deletions .github/workflows/release.yaml
@@ -0,0 +1,37 @@
# This workflow triggers the CI, updates the version, and uploads the release to GitHub when a push is made to the 'main' branch.
#
# Workflow Steps:
#
# 1. CI is triggered using the CI workflow defined in .github/workflows/ci.yaml
# 2. If it succeeds, the version is updated using Python Semantic Release
# 3. The release is uploaded to GitHub (same step and GitHub action)

name: CI and Release on main

on:
  push:
    branches:
      - main

jobs:
  CI:
    uses: ./.github/workflows/ci.yaml

  Release:
    runs-on: ubuntu-latest
    concurrency: Release
    needs: CI
    permissions:
      id-token: write
      contents: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GH_TOKEN }}

      - name: Python Semantic Release
        uses: python-semantic-release/python-semantic-release@master
        with:
          github_token: ${{ secrets.GH_TOKEN }}
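
Python Semantic Release derives the next version from the commit messages since the last release; by default it parses Angular-style conventional commits such as `fix:` and `feat:`. The sketch below is only an illustration of that mapping, not the tool's actual parser, and the function name is hypothetical:

```python
import re


def bump_type(commit_messages):
    """Illustrative only: map conventional-commit messages to a bump level.

    Python Semantic Release's real parser has more rules and is configurable;
    this sketch just shows the core idea.
    """
    bump = None
    for msg in commit_messages:
        if "BREAKING CHANGE" in msg or re.match(r"^\w+(\(.+\))?!:", msg):
            return "major"  # a breaking change always wins
        if msg.startswith("feat"):
            bump = "minor"  # features raise the bump to minor
        elif msg.startswith("fix") and bump is None:
            bump = "patch"  # fixes yield a patch if nothing larger applied
    return bump  # None -> no release


print(bump_type(["fix: Realease"]))  # -> "patch"
```

Under this scheme, the `fix:` commit style used by this PR would produce a patch release.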
132 changes: 132 additions & 0 deletions .gitignore
@@ -0,0 +1,132 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Poetry
poetry.lock
42 changes: 42 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,42 @@
repos:
  - repo: "https://github.com/pre-commit/pre-commit-hooks"
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-toml
      - id: check-yaml
      - id: check-json
      - id: check-added-large-files
  - repo: local
    hooks:
      - id: black
        name: Formatting (black)
        entry: black
        types: [python]
        language: system
      - id: isort
        name: Sorting imports (isort)
        entry: isort
        types: [python]
        language: system
      - id: ruff
        name: Linting (ruff)
        entry: ruff
        types: [python]
        language: system
      - id: nbstripout
        name: Strip Jupyter notebook output (nbstripout)
        entry: nbstripout
        types: [file]
        files: (.ipynb)$
        language: system
      - id: pytest-check
        name: Tests (pytest)
        stages: [push]
        entry: pytest tests/
        types: [python]
        language: system
        pass_filenames: false
        always_run: true

exclude: ^(.svn|CVS|.bzr|.hg|.git|__pycache__|.tox|.ipynb_checkpoints|assets|tests/assets/|venv/|.venv/)
4 changes: 2 additions & 2 deletions README.md
@@ -1,6 +1,6 @@
 <div align="center">
 <h2>
-ByteTrack-Pip: Packaged version of the ByteTrack repository
+ByteTrack-Pip: Packaged version of the ByteTrack repository
 </h2>
 <h4>
 <img width="700" alt="teaser" src="assets/demo.gif">
@@ -19,7 +19,7 @@ This repo is a packaged version of the [ByteTrack](https://github.com/ifzhang/By
 pip install bytetracker
 ```
 
-### Detection Model + ByteTrack
+### Detection Model + ByteTrack
 ```python
 from bytetracker import BYTETracker
 
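
The README's usage snippet is truncated in this diff. Below is a minimal sketch, not taken from the README: the Nx6 detection layout ([x1, y1, x2, y2, confidence, class]) is inferred from `update()` in byte_tracker.py further down, while the zero-argument constructor and the example detections are assumptions.

```python
import numpy as np

from bytetracker import BYTETracker

tracker = BYTETracker()  # assumed: default construction; real kwargs may differ

for frame_id in range(3):
    # One row per detection: [x1, y1, x2, y2, confidence, class].
    # After this PR these are expected as plain numpy arrays, not torch tensors.
    dets = np.array(
        [
            [50.0, 60.0, 120.0, 200.0, 0.90, 0.0],
            [300.0, 80.0, 360.0, 190.0, 0.40, 0.0],
        ]
    )
    tracks = tracker.update(dets, frame_id)  # 2nd argument is ignored (named `_` in the diff)
```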
36 changes: 21 additions & 15 deletions bytetracker/byte_tracker.py
@@ -1,22 +1,23 @@
 import numpy as np
 import torch
 
 from bytetracker import matching
 from bytetracker.basetrack import BaseTrack, TrackState
 from bytetracker.kalman_filter import KalmanFilter
 
 
 def xywh2xyxy(x):
     # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
-    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
+    y = np.copy(x)
     y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
     y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
     y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
     y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
     return y
 
 
 def xyxy2xywh(x):
     # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
-    y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
+    y = np.copy(x)
     y[:, 0] = (x[:, 0] + x[:, 2]) / 2  # x center
     y[:, 1] = (x[:, 1] + x[:, 3]) / 2  # y center
     y[:, 2] = x[:, 2] - x[:, 0]  # width
@@ -28,7 +29,6 @@ class STrack(BaseTrack):
     shared_kalman = KalmanFilter()
 
     def __init__(self, tlwh, score, cls):
-
         # wait activate
         self._tlwh = np.asarray(tlwh, dtype=float)
         self.kalman_filter = None
@@ -53,7 +53,9 @@ def multi_predict(stracks):
         for i, st in enumerate(stracks):
             if st.state != TrackState.Tracked:
                 multi_mean[i][7] = 0
-        multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance)
+        multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(
+            multi_mean, multi_covariance
+        )
         for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):
             stracks[i].mean = mean
             stracks[i].covariance = cov
@@ -98,7 +100,9 @@ def update(self, new_track, frame_id):
         self.cls = new_track.cls
 
         new_tlwh = new_track.tlwh
-        self.mean, self.covariance = self.kalman_filter.update(self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh))
+        self.mean, self.covariance = self.kalman_filter.update(
+            self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh)
+        )
         self.state = TrackState.Tracked
         self.is_activated = True
 
@@ -188,10 +192,6 @@ def update(self, dets, _):
         confs = dets[:, 4]
         clss = dets[:, 5]
 
-        classes = clss.numpy()
-        xyxys = xyxys.numpy()
-        confs = confs.numpy()
-
         remain_inds = confs > self.track_thresh
         inds_low = confs > 0.1
         inds_high = confs < self.track_thresh
@@ -204,8 +204,8 @@
         scores_keep = confs[remain_inds]
         scores_second = confs[inds_second]
 
-        clss_keep = classes[remain_inds]
-        clss_second = classes[remain_inds]
+        clss_keep = clss[remain_inds]
+        clss_second = clss[remain_inds]
 
         if len(dets) > 0:
             """Detections"""
@@ -245,10 +245,14 @@ def update(self, dets, _):
         # association the untrack to the low score detections
         if len(dets_second) > 0:
             """Detections"""
-            detections_second = [STrack(xywh, s, c) for (xywh, s, c) in zip(dets_second, scores_second, clss_second)]
+            detections_second = [
+                STrack(xywh, s, c) for (xywh, s, c) in zip(dets_second, scores_second, clss_second)
+            ]
         else:
             detections_second = []
-        r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked]
+        r_tracked_stracks = [
+            strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked
+        ]
         dists = matching.iou_distance(r_tracked_stracks, detections_second)
         matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.5)
         for itracked, idet in matches:
@@ -303,7 +307,9 @@ def update(self, dets, _):
         self.lost_stracks.extend(lost_stracks)
         self.lost_stracks = sub_stracks(self.lost_stracks, self.removed_stracks)
         self.removed_stracks.extend(removed_stracks)
-        self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks)
+        self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(
+            self.tracked_stracks, self.lost_stracks
+        )
         # get scores of lost tracks
         output_stracks = [track for track in self.tracked_stracks if track.is_activated]
         outputs = []
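
One behavioral consequence of this PR worth noting: `xywh2xyxy` and `xyxy2xywh` lost their torch branch, so callers are now expected to pass numpy arrays. Below is a self-contained round-trip check of the two helpers as they read after the change; the final height line of `xyxy2xywh` is cut off in the diff and is filled in here by symmetry, so treat it as an assumption.

```python
import numpy as np


def xywh2xyxy(x):
    # Post-PR version: numpy-only, no torch branch.
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y


def xyxy2xywh(x):
    y = np.copy(x)
    y[:, 0] = (x[:, 0] + x[:, 2]) / 2  # x center
    y[:, 1] = (x[:, 1] + x[:, 3]) / 2  # y center
    y[:, 2] = x[:, 2] - x[:, 0]  # width
    y[:, 3] = x[:, 3] - x[:, 1]  # height (inferred; truncated in the diff)
    return y


boxes = np.array([[100.0, 100.0, 40.0, 80.0]])  # [cx, cy, w, h]
assert np.allclose(xyxy2xywh(xywh2xyxy(boxes)), boxes)  # round-trip is exact
```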