
Notes on Using TensorRT with Docker

Nvidia driver: 515.86

CUDA: 11.6

Docker: 20.10.21

 

1. Install the NVIDIA Container Toolkit (Installation Guide — NVIDIA Cloud Native Technologies documentation)

1-1. Add the package repository

 

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
      && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
            sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
            sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

 

1-2. Install

 

sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker

 

1-3. Verify the installation

 

sudo docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

 

The usual nvidia-smi table should be printed as output; if it appears, Docker can see the GPU.
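If you want to check the reported versions programmatically rather than by eye, the banner line of the nvidia-smi table can be parsed. The banner string below is a hypothetical capture for illustration, not live output:

```python
import re

# Hypothetical first banner line of `nvidia-smi` output (driver 515-era layout)
banner = "| NVIDIA-SMI 515.86.01    Driver Version: 515.86.01    CUDA Version: 11.7     |"

# Pull out the driver and CUDA versions from the banner
m = re.search(r"Driver Version: ([\d.]+)\s+CUDA Version: ([\d.]+)", banner)
driver, cuda = m.groups()
print(driver, cuda)
```

In practice you would capture the banner via `subprocess.run(["nvidia-smi"], ...)` instead of a hard-coded string.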

 

2. Finding and running the TensorRT container

 

2-1. Version reference link

Container Release Notes :: NVIDIA Deep Learning TensorRT Documentation

 


 

2-2. Run command reference link

Container Release Notes :: NVIDIA Deep Learning TensorRT Documentation

 


 

Skeleton command

 

docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorrt:<xx.xx>-py<x>
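The placeholders in the skeleton are just the release tag and the Python suffix. As a small sketch of how the full command is assembled (the local and container paths here are hypothetical):

```python
# Fill in the NGC TensorRT image tag template.
# The release tag ("22.02") and Python suffix ("py3") are examples;
# check the release notes for the tag matching your driver/CUDA setup.
release = "22.02"
python_suffix = "py3"
local_dir = "/home/user/work"      # hypothetical host directory
container_dir = "/workspace/work"  # hypothetical mount point inside the container

image = f"nvcr.io/nvidia/tensorrt:{release}-{python_suffix}"
cmd = f"docker run --gpus all -it --rm -v {local_dir}:{container_dir} {image}"
print(cmd)
```

The `-v local_dir:container_dir` mount is what lets files (weights, exported engines) move between the host and the container.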

 

2-3. Actual example

 

sudo docker run --gpus all -it --network=host --ipc=host --shm-size 8G --rm -v /:/tensorrt_docker nvcr.io/nvidia/tensorrt:22.02-py3

 

2-4. Testing TensorRT (inside the container)

To try the TensorRT conversion example from yolov5 (TFLite, ONNX, CoreML, TensorRT Export · Issue #251 · ultralytics/yolov5 (github.com)), install PyTorch and the related packages, then run export.py:

 

# upgrade pip
pip install --upgrade pip

# install pytorch (CUDA 11.6 wheels)
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

# install system packages required by cv2 (OpenCV)
apt update
apt-get install -y ffmpeg libsm6 libxext6

# clone yolov5
git clone https://github.com/ultralytics/yolov5.git

 

Then, from inside the yolov5 directory:

 

# install yolov5 dependencies
pip install -r requirements.txt

# convert the .pt weights to a TensorRT engine
python export.py --weights yolov5s.pt --include engine
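If the export succeeds, yolov5's convention (as I understand it) is to write the engine next to the weights with the suffix swapped. A quick sketch for checking the result:

```python
from pathlib import Path

# export.py writes the engine next to the weights, swapping the suffix
weights = Path("yolov5s.pt")
engine = weights.with_suffix(".engine")
print(engine)

# the engine file only exists after export.py has actually run
if engine.exists():
    print(f"engine size: {engine.stat().st_size / 1e6:.1f} MB")
```

The resulting yolov5s.engine should then be usable for inference, e.g. via detect.py with `--weights yolov5s.engine`.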