The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.
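As a quick orientation, the sketch below (not part of the original README) wires those three pieces together: a dataset, an image transformation pipeline, and a model architecture. `datasets.FakeData` is used here only to avoid any download, and the model is randomly initialized; both choices are illustrative.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Common image transformations composed into a single preprocessing pipeline.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# FakeData avoids any download; swap in e.g. datasets.CIFAR10 for a real dataset.
dataset = datasets.FakeData(size=8, image_size=(3, 256, 256), transform=preprocess)
loader = DataLoader(dataset, batch_size=4)

# A model architecture from torchvision.models (randomly initialized here).
model = models.resnet18(num_classes=10)
model.eval()

with torch.no_grad():
    images, _ = next(iter(loader))
    print(model(images).shape)  # torch.Size([4, 10])
```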
We recommend Anaconda as a Python package management system. Please refer to pytorch.org for details on installing PyTorch (torch). The following table lists the corresponding torchvision versions and supported Python versions.
| torch | torchvision | python |
| --- | --- | --- |
| main / nightly | main / nightly | >=3.7, <=3.10 |
| 1.12.0 | 0.13.0 | >=3.7, <=3.10 |
| 1.11.0 | 0.12.0 | >=3.7, <=3.10 |
| 1.10.2 | 0.11.3 | >=3.6, <=3.9 |
| 1.10.1 | 0.11.2 | >=3.6, <=3.9 |
| 1.10.0 | 0.11.1 | >=3.6, <=3.9 |
| 1.9.1 | 0.10.1 | >=3.6, <=3.9 |
| 1.9.0 | 0.10.0 | >=3.6, <=3.9 |
| 1.8.2 | 0.9.2 | >=3.6, <=3.9 |
| 1.8.1 | 0.9.1 | >=3.6, <=3.9 |
| 1.8.0 | 0.9.0 | >=3.6, <=3.9 |
| 1.7.1 | 0.8.2 | >=3.6, <=3.9 |
| 1.7.0 | 0.8.1 | >=3.6, <=3.8 |
| 1.7.0 | 0.8.0 | >=3.6, <=3.8 |
| 1.6.0 | 0.7.0 | >=3.6, <=3.8 |
| 1.5.1 | 0.6.1 | >=3.5, <=3.8 |
| 1.5.0 | 0.6.0 | >=3.5, <=3.8 |
| 1.4.0 | 0.5.0 | ==2.7, >=3.5, <=3.8 |
| 1.3.1 | 0.4.2 | ==2.7, >=3.5, <=3.7 |
| 1.3.0 | 0.4.1 | ==2.7, >=3.5, <=3.7 |
| 1.2.0 | 0.4.0 | ==2.7, >=3.5, <=3.7 |
| 1.1.0 | 0.3.0 | ==2.7, >=3.5, <=3.7 |
| <=1.0.1 | 0.2.2 | ==2.7, >=3.5, <=3.7 |
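A quick way to check which pair you actually have installed and compare it against the table above (a small sketch, not from the original document):

```python
import torch
import torchvision

# Compare these against the compatibility table above.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
```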
Anaconda:
conda install torchvision -c pytorch
pip:
pip install torchvision
From source:
python setup.py install
# or, for OSX
# MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
In case building TorchVision from source fails, install the nightly version of PyTorch following the linked guide on the contributing page and retry the install.
By default, GPU support is built if CUDA is found and torch.cuda.is_available() is true. It is possible to force building GPU support by setting the FORCE_CUDA=1 environment variable, which is useful when building a Docker image.
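As an optional sanity check (an illustrative sketch, not part of the README), you can inspect the installed torch build before deciding whether to export FORCE_CUDA=1 for a source build:

```python
import torch

# True only if a CUDA build of torch is installed and a GPU is visible.
print("CUDA available:", torch.cuda.is_available())
# The CUDA version torch was built against, or None for CPU-only builds.
print("torch built with CUDA:", torch.version.cuda)
```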
Torchvision currently supports the following image backends:
- Pillow (default)
- Pillow-SIMD - a much faster drop-in replacement for Pillow with SIMD. If installed, it will be used as the default.
- accimage - if installed, can be activated by calling torchvision.set_image_backend('accimage')
- libpng - can be installed via conda (conda install libpng) or any of the package managers for Debian-based and RHEL-based Linux distributions.
- libjpeg - can be installed via conda (conda install jpeg) or any of the package managers for Debian-based and RHEL-based Linux distributions. libjpeg-turbo can be used as well.

Notes: libpng and libjpeg must be available at compilation time in order for the corresponding backends to be built. Make sure that they are available in the standard library locations; otherwise, add the include and library paths to the environment variables TORCHVISION_INCLUDE and TORCHVISION_LIBRARY, respectively.
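For illustration (a sketch, not part of the README), the image backend can be selected at runtime as shown below. Note that set_image_backend only records the preference; if accimage is not actually installed, an ImportError surfaces later, when images are loaded.

```python
import torchvision

print("current backend:", torchvision.get_image_backend())  # 'PIL' by default

# Record a preference for accimage; image loading will fail later with an
# ImportError if the accimage package is not installed.
torchvision.set_image_backend('accimage')
print("now using:", torchvision.get_image_backend())
```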
Torchvision currently supports the following video backends:
- pyav (default) - Pythonic bindings for the FFmpeg libraries.
- video_reader - needs ffmpeg to be installed and torchvision to be built from source. Currently this is only supported on Linux.

To use the video_reader backend, first install ffmpeg and then build torchvision from source:
conda install -c conda-forge ffmpeg
python setup.py install
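A hedged sketch of switching the video backend (the exact exception type may vary between releases; this assumes recent torchvision behavior, where an unavailable video_reader backend raises a RuntimeError):

```python
import torchvision

try:
    # Only usable when torchvision was built from source with ffmpeg available.
    torchvision.set_video_backend('video_reader')
except RuntimeError:
    # The native video_reader backend is missing; the default backend stays active.
    print("video_reader not available")

print("active video backend:", torchvision.get_video_backend())
```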
TorchVision provides an example project showing how to use the models in C++ using JIT Script.
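The Python half of that workflow might look like the following sketch (the model choice and output path are illustrative, not prescribed by the project): script a torchvision model with TorchScript and serialize it so the C++ program can load it.

```python
import torch
import torchvision

# Any scriptable torchvision model works; resnet18 is just an example.
model = torchvision.models.resnet18()
model.eval()

scripted = torch.jit.script(model)
scripted.save("resnet18.pt")  # load from C++ with torch::jit::load("resnet18.pt")
```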
Installation from source:
mkdir build
cd build
# Add -DWITH_CUDA=on to enable CUDA support if needed
cmake ..
make
make install
Once installed, the library can be accessed in cmake (after properly configuring CMAKE_PREFIX_PATH) via the TorchVision::TorchVision target:
find_package(TorchVision REQUIRED)
target_link_libraries(my-target PUBLIC TorchVision::TorchVision)
The TorchVision package will also automatically look for the Torch package and add it as a dependency to my-target, so make sure that it is also available to cmake via the CMAKE_PREFIX_PATH.

For an example setup, take a look at examples/cpp/hello_world.
Python linking is disabled by default when compiling TorchVision with CMake; this allows you to run models without any Python dependency. In some special cases where TorchVision's operators are used from Python code, you may need to link to Python. This can be done by passing -DUSE_PYTHON=on to CMake.
In order to get the torchvision operators registered with torch (e.g. for the JIT), all you need to do is ensure that you #include <torchvision/vision.h> in your project.
You can find the API documentation on the pytorch website: https://pytorch.org/vision/stable/index.html
See the CONTRIBUTING file for how to help out.
This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.
If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.
More specifically, SWAG models are released under the CC-BY-NC 4.0 license. See SWAG LICENSE for additional details.