TensorRT backend for ONNX

Parses ONNX models for execution with TensorRT.

See also the TensorRT documentation.

Supported TensorRT Versions

Development on the master branch is for the latest version of TensorRT (5.1).

For versions < 5.1, clone and build from the 5.0 branch.

Supported Operators

Current supported ONNX operators are found in the operator support matrix.

Installation

Dependencies

Building the parser requires Protobuf and a local TensorRT installation.

Download the code

Clone the code from GitHub.

git clone --recursive https://github.com/onnx/onnx-tensorrt.git

Building

The TensorRT-ONNX executables and libraries are built with CMake. Note that by default CMake tells the CUDA compiler to generate code for the latest SM version. If you are using a GPU with a lower SM version, you can specify which SMs to build for with the optional -DGPU_ARCHS flag. For example, if you have a GTX 1080, you can specify -DGPU_ARCHS="61" to generate CUDA code specifically for that card.

See here to find the maximum compute capability your specific GPU supports.
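
If pycuda is installed (the Python backend below also uses it), the compute capability can also be queried directly; a minimal sketch, assuming device index 0 on a single-GPU machine:

import pycuda.driver as cuda

# Query the compute capability of the first visible GPU.
cuda.init()
major, minor = cuda.Device(0).compute_capability()
print("GPU 0 compute capability: %d%d" % (major, minor))  # e.g. 61 for a GTX 1080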

mkdir build
cd build
cmake .. -DTENSORRT_ROOT=<tensorrt_install_dir>
# or, to build for a specific GPU architecture:
cmake .. -DTENSORRT_ROOT=<tensorrt_install_dir> -DGPU_ARCHS="61"
make -j8
sudo make install

Executable usage

ONNX models can be converted to serialized TensorRT engines using the onnx2trt executable:

onnx2trt my_model.onnx -o my_engine.trt
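
A serialized engine can later be loaded without the original model. Below is a minimal sketch, assuming the TensorRT 5.x Python API, that deserializes the engine produced above:

import tensorrt as trt

# Load the engine serialized by onnx2trt.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("my_engine.trt", "rb") as f:
    engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(f.read())
print(engine.max_batch_size)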

ONNX models can also be converted to human-readable text:

onnx2trt my_model.onnx -t my_model.onnx.txt

See more usage information by running:

onnx2trt -h

ONNX Python backend usage

The TensorRT backend for ONNX can be used in Python as follows:

import onnx
import onnx_tensorrt.backend as backend
import numpy as np

model = onnx.load("/path/to/model.onnx")
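# device='CUDA:1' targets the GPU with index 1; use 'CUDA:0' on a single-GPU machine.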
engine = backend.prepare(model, device='CUDA:1')
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
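# run() returns one array per model output; take the first.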
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)
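
Note that backend.prepare parses the model and builds the TensorRT engine up front, so it typically takes much longer than the subsequent run calls.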

C++ library usage

The model parser library, libnvonnxparser.so, has its C++ API declared in this header:

NvOnnxParser.h

TensorRT engines built using this parser must use the plugin factory provided in libnvonnxparser_runtime.so, which has its C++ API declared in this header:

NvOnnxParserRuntime.h

Python modules

Python bindings for the ONNX-TensorRT parser in TensorRT versions >= 5.0 are packaged in the shipped .whl files. Install them with:

pip install <tensorrt_install_dir>/python/tensorrt-5.1.6.0-cp27-none-linux_x86_64.whl
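
Once installed, the parser is exposed through the tensorrt module. Below is a minimal sketch, assuming the TensorRT 5.x Python API, of parsing an ONNX model into a TensorRT network definition:

import tensorrt as trt

# Parse an ONNX model into a TensorRT network.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("my_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        print(parser.get_error(0))  # report the first parse error, if any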

For earlier versions of TensorRT, the Python wrappers are built using SWIG. Build the Python wrappers and modules by running:

python setup.py build
sudo python setup.py install

Docker image

Build the onnx_tensorrt Docker image by running:

cp /path/to/TensorRT-5.1.*.tar.gz .
docker build -t onnx_tensorrt .

Tests

After installation (or inside the Docker container), ONNX backend tests can be run as follows:

Real model tests only:

python onnx_backend_test.py OnnxBackendRealModelTest

All tests:

python onnx_backend_test.py

You can use the -v flag to make the output more verbose.

Pre-trained models

Pre-trained models in ONNX format can be found at the ONNX Model Zoo.

MIT License

Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.

Copyright (c) 2018 Open Neural Network Exchange

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
