Releases: open-mmlab/mmaction2
MMAction2 V1.2.0 Release
Highlights
- Support the Training of ActionCLIP
- Support VindLU multi-modality algorithm
- Support MobileOne TSN/TSM
New Features
- Support the Training of ActionCLIP (#2620)
- Support video retrieval dataset MSVD (#2622)
- Support VindLU multi-modality algorithm (#2667)
- Support Dense Regression Network for Video Grounding (#2668)
Improvements
- Support Video Demos (#2602)
- Support Audio Demos (#2603)
- Add README_zh-CN.md for Swin and VideoMAE (#2621)
- Support MobileOne TSN/TSM (#2656)
- Support SlowOnly K700 feature to train localization models (#2673)
Bug Fixes
MMAction2 V1.1.0 Release
New Direction: Multi-Modal Video Understanding
We support two novel models for video recognition and retrieval based on open-domain text: ActionCLIP and CLIP4Clip. These models mark the first step of MMAction2's journey towards multi-modal video understanding. Furthermore, we introduce a new video retrieval dataset, MSR-VTT.
For more details, please refer to ActionCLIP, CLIP4Clip and MSR-VTT.
Supported by @Dai-Wenxun in #2470 and #2489.
New Config Type
MMEngine introduced the pure Python style configuration file:
- Support navigating to base configuration file in IDE
- Support navigating to base variable in IDE
- Support navigating to source code of class in IDE
- Support inheriting two configuration files containing the same field
- Support loading the configuration file without other third-party requirements
Refer to the tutorial for more detailed usage.
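Below is a minimal sketch of the new style (the base config paths and the `num_classes` override are hypothetical, for illustration only):

```python
# my_config.py -- a hypothetical config in the pure Python style
from mmengine.config import read_base

with read_base():
    # base configs are imported like ordinary Python modules,
    # so an IDE can jump straight to their definitions
    from .._base_.models.tsn_r50 import *  # noqa: F401,F403
    from .._base_.default_runtime import *  # noqa: F401,F403

# base variables are plain Python objects and can be edited in place
model['cls_head']['num_classes'] = 101
```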
New Datasets
We are glad to support 3 new datasets:
- (ICCV2019) HACS
- (ICCV2021) MultiSports
- (Arxiv2022) Kinetics-710
(ICCV2019) HACS
HACS is a new large-scale dataset for recognition and temporal localization of human actions collected from Web videos.
For more details, please refer to HACS.
(ICCV2021) MultiSports
MultiSports is a multi-person video dataset of spatio-temporally localized sports actions.
For more details, please refer to MultiSports.
(Arxiv2022) Kinetics-710
Kinetics-710, introduced with UniFormer V2, merges the training videos of Kinetics-400/600/700 and removes duplicated videos. For more details, please refer to Kinetics710.
Other New Features
- Support rich projects: Gesture Recognition, Spatio-Temporal Action Detection Tutorial, and Knowledge Distillation
- Support TCANet (CVPR'2021)
- Support VideoMAE V2 (CVPR'2023) and VideoMAE (NeurIPS'2022) on action detection
What's Changed
- [Doc] Fix document links in readme by @cir7 in #2358
- [Doc] Fix installation doc by @cir7 in #2362
- [Enhance] Support automatically assigning issues by @cir7 in #2368
- [Doc] Fix model links in README by @cir7 in #2372
- [Fix] Restore the wrongly modified config by @cir7 in #2375
- [Doc] Fix readme links by @cir7 in #2376
- [Fix] update skeleton demo by @WILLOSCAR in #2381
- [Fix] Fix a bug in `demo_skeleton.py` by @Dai-Wenxun in #2380
- [Update] Update version requirements by @Dai-Wenxun in #2383
- [Doc] update readme by @cir7 in #2382
- [Doc] Update Installation Related Doc by @Dai-Wenxun in #2379
- [Fix] Fix colab tutorial by @cir7 in #2384
- [Fix] update colab link in tutorial by @cir7 in #2391
- [Doc] Refine Docs by @Dai-Wenxun in #2404
- [CI] fix github ci (main) by @cir7 in #2421
- [Fix] fix a bug in multi-label classification by @Dai-Wenxun in #2425
- [Fix] Fix issue template by @cir7 in #2399
- [Doc] Update repo list by @cir7 in #2429
- [Fix] Fix a warning caused by `torch.div` by @Dai-Wenxun in #2449
- [Fix] Fix readthedoc error raised by incompatible OpenSSL version by @cir7 in #2455
- [Fix] Fix incompatibility of ImgAug and latest Numpy by @cir7 in #2451
- [Fix] Update branch in dockerfile by @cir7 in #2397
- [Doc] Update outdated config in readme by @cir7 in #2419
- [Fix] Fix tutorial by @cir7 in #2475
- [Fix] Fix batch blending bug when using multi-label classification by @cir7 in #2466
- [Fix] Fix UniFormer README and metafile by @cir7 in #2450
- [Doc] update faq by @cir7 in #2476
- [Fix] Fix a bug of MViT when set with_cls_token to False by @KeepLost in #2480
- [Fix] Update outdated dependencies of mmcv for downloading fine-gym dataset by @yhZhai in #2495
- [Doc] add finetune doc by @cir7 in #2453
- [Doc] Update faq doc by @cir7 in #2482
- [Doc] Fix document link by @cir7 in #2457
- Merge dev-1.x to main by @cir7 in #2551
New Contributors
- @WILLOSCAR made their first contribution in #2381
- @KeepLost made their first contribution in #2480
- @yhZhai made their first contribution in #2495
Full Changelog: v1.0.0...v1.1.0
MMAction2 V1.0.0 Release
Highlights
We are excited to announce the release of MMAction2 1.0.0 as a part of the OpenMMLab 2.0 project! MMAction2 1.0.0 introduces an updated framework structure for the core package and a new section called Projects. This section showcases various engaging and versatile applications built upon the MMAction2 foundation.
In this latest release, we have significantly refactored the core package's code to make it clearer, more comprehensible, and disentangled. This has resulted in improved performance for several existing algorithms, ensuring that they now outperform their previous versions. Additionally, we have incorporated some cutting-edge algorithms, such as VideoSwin and VideoMAE, to further enhance the capabilities of MMAction2 and provide users with a more comprehensive and powerful toolkit. The new Projects section serves as an essential addition to MMAction2, created to foster innovation and collaboration among users. This section offers the following attractive features:
- Flexible code contribution: Unlike the core package, the Projects section allows for a more flexible environment for code contributions, enabling faster integration of state-of-the-art models and features.
- Showcase of diverse applications: Explore various projects built upon the MMAction2 foundation, such as deployment examples and combinations of video recognition with other tasks.
- Fostering creativity and collaboration: Encourages users to experiment, build upon the MMAction2 platform, and share their innovative applications and techniques, creating an active community of developers and researchers.
Discover the possibilities within the Projects section and join the vibrant MMAction2 community in pushing the boundaries of video understanding applications!
Exciting Features
RGBPoseConv3D
RGBPoseConv3D is a framework that jointly uses 2D human skeletons and RGB appearance for human action recognition. It is a 3D CNN with two streams, with the architecture borrowed from SlowFast. In RGBPoseConv3D:
- The RGB stream corresponds to the `slow` stream in SlowFast; the skeleton stream corresponds to the `fast` stream in SlowFast.
- The input resolution of the RGB frames is 4x larger than that of the pseudo heatmaps.
- Bilateral connections are used for early feature fusion between the two modalities.
- Supported by @Dai-Wenxun in #2182
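The toy sketch below illustrates this two-stream design (channel counts, kernel sizes, and tensor shapes are illustrative assumptions, not the actual MMAction2 implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamFusionSketch(nn.Module):
    """A toy two-stream stem with bilateral lateral connections."""

    def __init__(self):
        super().__init__()
        self.rgb_stem = nn.Conv3d(3, 64, (1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3))
        self.pose_stem = nn.Conv3d(17, 32, (1, 7, 7), stride=(1, 1, 1), padding=(0, 3, 3))
        self.rgb_to_pose = nn.Conv3d(64, 32, 1)  # lateral: RGB -> pose
        self.pose_to_rgb = nn.Conv3d(32, 64, 1)  # lateral: pose -> RGB

    def forward(self, rgb, heatmaps):
        r = self.rgb_stem(rgb)        # features from high-resolution frames
        p = self.pose_stem(heatmaps)  # features from low-resolution heatmaps
        # bilateral connections: resize each lateral signal to the other
        # stream's feature shape, then fuse early by addition
        r2p = F.interpolate(self.rgb_to_pose(r), size=p.shape[2:])
        p2r = F.interpolate(self.pose_to_rgb(p), size=r.shape[2:])
        return r + p2r, p + r2p

net = TwoStreamFusionSketch()
rgb = torch.randn(1, 3, 8, 224, 224)   # RGB frames
pose = torch.randn(1, 17, 8, 56, 56)   # pseudo heatmaps at 1/4 resolution
r_feat, p_feat = net(rgb, pose)
```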
Inferencer
In this release, we introduce the MMAction2Inferencer, a versatile inference API that supports multiple input types. The API enables users to easily specify and customize action recognition models, streamlining the process of performing video prediction with MMAction2.
Usage:
```shell
python demo/demo_inferencer.py ${INPUTS} [OPTIONS]
```
- The `INPUTS` can be a video path or a rawframes folder. For more detailed information on `OPTIONS`, please refer to Inferencer.
Example:
```shell
python demo/demo_inferencer.py zelda.mp4 --rec tsn --vid-out-dir zelda_out --label-file tools/data/kinetics/label_map_k400.txt
```
You can find the `zelda.mp4` here.
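Roughly the same inference is available from Python. The class name comes from this release, while the constructor and call arguments below simply mirror the CLI flags above and are assumptions, not a verbatim signature:

```python
from mmaction.apis.inferencers import MMAction2Inferencer

# 'rec' selects the recognition model, analogous to the --rec CLI flag
inferencer = MMAction2Inferencer(
    rec='tsn',
    label_file='tools/data/kinetics/label_map_k400.txt')

# inputs may be a video path or a rawframes folder, as described above
results = inferencer('zelda.mp4', vid_out_dir='zelda_out')
```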
List of Novel Features
MMAction2 V1.0 introduces support for new models and datasets in the field of video understanding, including MSG3D [Project] (CVPR'2020), CTRGCN [Project] (CVPR'2021), STGCN++ (Arxiv'2022), Video Swin Transformer (CVPR'2022), VideoMAE (NeurIPS'2022), C2D (CVPR'2018), MViT V2 (CVPR'2022), UniFormer V1 (ICLR'2022), and UniFormer V2 (Arxiv'2022), as well as the spatiotemporal action detection dataset AVA-Kinetics (Arxiv'2022).
- Enhanced Omni-Source: We enhanced the original omni-source technique by dynamically adjusting the 3D convolutional network architecture to simultaneously utilize videos and images for training. Taking `SlowOnly R50 8x8` as an example, the Top-1 accuracy comparison of the three training methods illustrates that our omni-source training effectively employs the additional `ImageNet` dataset, significantly boosting performance on `Kinetics400`.
- Multi-Stream Skeleton Pipeline: In light of MMAction2's prior support for only the `joint` and `bone` modalities, we have extended support to the `joint motion` and `bone motion` modalities in MMAction2 V1.0 (see the sketch after this list). Furthermore, we have conducted training and evaluation for these four modalities using NTU60 2D and 3D keypoint data on STGCN, 2s-AGCN, and STGCN++.
- Repeat Augment was initially proposed as a data augmentation method for `ImageNet` training and has been employed in recent Video Transformer works. Whenever a video is read during training, we use multiple (typically 2-4) random samples from that video for training. This approach not only enhances the model's generalization capability but also reduces the IO pressure of video reading. We support Repeat Augment in MMAction2 V1.0 and utilize this technique in MViT V2 training, comparing the Top-1 accuracy on `Kinetics400` before and after employing Repeat Augment; a sketch of the idea follows this list.
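As referenced above, here is a minimal sketch of deriving the four skeleton modalities from raw keypoints (array shapes and the bone list are illustrative assumptions, not MMAction2's actual pipeline):

```python
import numpy as np

# hypothetical (child, parent) keypoint pairs; real datasets such as
# NTU60 define their own bone lists over the full skeleton graph
BONE_PAIRS = [(1, 0), (2, 1), (3, 2)]

def bones(joints):
    """joint -> bone: vector from each parent keypoint to its child."""
    out = np.zeros_like(joints)
    for child, parent in BONE_PAIRS:
        out[:, child] = joints[:, child] - joints[:, parent]
    return out

def motion(x):
    """Frame-wise temporal difference of a joint or bone sequence."""
    out = np.zeros_like(x)
    out[:-1] = x[1:] - x[:-1]
    return out

joints = np.random.rand(32, 17, 2).astype(np.float32)  # (T, V, C) keypoints
streams = {'joint': joints, 'bone': bones(joints),
           'joint_motion': motion(joints), 'bone_motion': motion(bones(joints))}
```

And a hedged sketch of the Repeat Augment idea, where a single video decode yields several independently augmented clips (function names are illustrative):

```python
import random

def sample_clip(frames, clip_len=16):
    """Random temporal crop of clip_len consecutive frames."""
    start = random.randint(0, max(0, len(frames) - clip_len))
    return frames[start:start + clip_len]

def repeat_augment(frames, augment, num_repeats=2, clip_len=16):
    # one expensive decode produces num_repeats training samples, each
    # with an independent temporal crop and spatial augmentation
    return [augment(sample_clip(frames, clip_len)) for _ in range(num_repeats)]
```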
Bug Fixes
- [Fix] Fix flip config of TSM for sth2sth v1/v2 dataset by @cir7 in #2247
- [Fix] Fix circle ci by @cir7 in #2336 and #2334
- [Fix] Fix accepting an unexpected argument local-rank in PyTorch 2.0 by @cir7 in #2320
- [Fix] Fix TSM config link by @zyx-cv in #2315
- [Fix] Fix numpy version requirement in CI by @hukkai in #2284
- [Fix] Fix NTU pose extraction script by @cir7 in #2246
- [Fix] Fix TSM-MobileNet V2 by @cir7 in #2332
- [Fix] Fix command bugs in localization tasks' README by @hukkai in #2244
- [Fix] Fix duplicate name in DecordInit and SampleAVAFrame by @cir7 in #2251
- [Fix] Fix channel order when showing video by @cir7 in #2308
- [Fix] Specify map_location to cpu when using _load_checkpoint by @Zheng-LinXiao in #2252
New Contributors
- @Andy1621 made their first contribution in #2153
- @zoe08 made their first contribution in #2188
- @vansin made their first contribution in #2228
- @Zheng-LinXiao made their first contribution in #2252
Full Changelog: v0.24.0...v1.0.0
MMAction2 V1.0.0rc3 Release
Highlights
- Support action recognition models UniFormer V1 (ICLR'2022) and UniFormer V2 (Arxiv'2022).
- Support training MViT V2 (CVPR'2022) and MaskFeat (CVPR'2022) fine-tuning.
New Features
- Support UniFormer V1/V2 (#2153)
- Support training MViT and MaskFeat fine-tuning (#2186)
- Support a unified inference interface: Inferencer (#2164)
Improvements
- Support load data list from multi-backends (#2176)
Bug Fixes
Documentation
MMAction2 V1.0.0rc2 Release
Highlights
- Support action recognition models VideoMAE (NeurIPS'2022), MViT V2 (CVPR'2022), and C2D, and the skeleton-based action recognition model STGCN++
- Support Omni-Source training on ImageNet and Kinetics datasets
- Support exporting spatial-temporal detection models to ONNX
New Features
- Support VideoMAE (#1942)
- Support MViT V2 (#2007)
- Support C2D (#2022)
- Support AVA-Kinetics dataset (#2080)
- Support STGCN++ (#2156)
- Support exporting spatial-temporal detection models to ONNX (#2148)
- Support Omni-Source training on ImageNet and Kinetics datasets (#2143)
Improvements
- Support repeat batch data augmentation (#2170)
- Support calculating FLOPs tool powered by fvcore (#1997)
- Support Spatial-temporal detection demo (#2019)
- Add SyncBufferHook and add randomness config in train.py (#2044)
- Refactor gradcam (#2049)
- Support init_cfg in Swin and ViTMAE (#2055)
- Refactor STGCN and related pipelines (#2087)
- Refactor visualization tools (#2092)
- Update `SampleFrames` transform and improve most models' performance (#1942)
- Support real-time webcam demo (#2152)
- Refactor and enhance 2s-AGCN (#2130)
- Support adjusting fps in `SampleFrames` (#2157)
Bug Fixes
- Fix CI upstream library dependency (#2000)
- Fix SlowOnly readme typos and results (#2006)
- Fix VideoSwin readme (#2010)
- Fix tools and mim error (#2028)
- Fix Imgaug wrapper (#2024)
- Remove useless scripts (#2032)
- Fix multi-view inference (#2045)
- Update mmcv maximum version to 1.8.0 (#2047)
- Fix torchserver dependency (#2053)
- Fix `gen_ntu_rgbd_raw` script (#2076)
- Update AVA-Kinetics experiment configs and results (#2099)
- Add `joint.pkl` and `bone.pkl` used in multi-stream fusion tool (#2106)
- Fix lint CI config (#2110)
- Update testing accuracy for modified `SampleFrames` (#2117), (#2121), (#2122), (#2124), (#2125), (#2126), (#2129), (#2128)
- Fix timm related bug (#1976)
- Fix `check_videos.py` script (#2134)
- Update CI maximum torch version to 1.13.0 (#2118)
Documentation
- Add MMYOLO description in README (#2011)
- Add v1.x introduction in README (#2023)
- Fix link in README (#2035)
- Refine some docs (#2038), (#2040), (#2058)
- Update TSN/TSM Readme (#2082)
- Add Chinese document (#2083)
- Adjust document structure (#2088)
- Fix Sth-Sth and Jester dataset links (#2103)
- Fix doc link (#2131)
MMAction2 V1.0.0rc1 Release
Highlights
- Support Video Swin Transformer
New Features
- Support Video Swin Transformer (#1939)
Improvements
Bug Fixes
- Fix link in doc (#1986, #1967, #1951, #1926, #1944, #1927, #1925)
- Fix CI (#1987, #1930, #1923)
- Fix pre-commit hook config (#1971)
- Fix TIN config (#1912)
- Fix UT for BMN and BSN (#1966)
- Fix UT for Recognizer2D (#1937)
- Fix BSN and BMN configs for localization (#1913)
- Modify ST-GCN configs (#1913)
- Fix typo in migration doc (#1931)
- Remove Onnx related tools (#1928)
- Update TANet readme (#1916, #1890)
- Update 2S-AGCN readme (#1915)
- Fix TSN configs (#1905)
- Fix configs for detection (#1903)
- Fix typo in TIN config (#1904)
- Fix PoseC3D readme (#1899)
- Fix ST-GCN configs (#1891)
- Fix audio recognition readme (#1898)
- Fix TSM readme (#1887)
- Fix SlowOnly readme (#1889)
- Fix TRN readme (#1888)
- Fix typo in get_started doc (#1895)
MMAction2 V1.0.0rc0 Release
We are excited to announce the release of MMAction2 v1.0.0rc0, the first version of MMAction2 1.x, a part of the OpenMMLab 2.0 projects, built upon the new training engine.
Highlights
- New engines. MMAction2 1.x is based on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entry points of high-level interfaces.
- Unified interfaces. As a part of the OpenMMLab 2.0 projects, MMAction2 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logic to allow the emergence of multi-task/modality algorithms.
- More documentation and tutorials. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it here.
Breaking Changes
In this release, we made lots of major refactoring and modifications. Please refer to the migration guide for details and migration instructions.
MMAction2 V0.24.1 Release
This release fixes compatibility with the latest mmcv v1.6.1.
MMAction2 V0.24.0 Release
MMAction2 V0.23.0 Release
Highlights
- Support different seeds
- Provide multi-node training & testing script
- Update error log
New Features
- Support different seeds (#1502)
- Provide multi-node training & testing script (#1521)
- Update error log (#1546)
Documentation
- Update gpus in SlowFast readme (#1497)
- Fix work_dir in multigrid config (#1498)
- Add sub bn docs (#1503)
- Add shortcycle sampler docs (#1513)
- Update Windows Declaration (#1520)
- Update the link for ST-GCN (#1544)
- Update install commands (#1549)
Bug and Typo Fixes