The Paddle framework IPU version supports Paddle's native inference library (Paddle Inference), which is suitable for cloud-side inference.
C++ prediction example
Step 1: Compile the C++ prediction library from source
The current Paddle IPU version only provides the C++ prediction library through source-code compilation. For build environment preparation, please refer to the Paddle Framework IPU version installation instructions.
# Download the source code
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
# Create the build directory
mkdir build && cd build
# Run CMake; note that the inference optimization option ON_INFER must be enabled here
cmake .. -DWITH_IPU=ON -DWITH_PYTHON=ON -DPY_VERSION=3.7 -DWITH_MKL=ON -DON_INFER=ON \
-DPOPLAR_DIR=/opt/poplar -DPOPART_DIR=/opt/popart -DCMAKE_BUILD_TYPE=Release
# Start the build
make -j$(nproc)
After a successful build, the C++ prediction library is placed under build/paddle_inference_install_dir.
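As a quick sanity check, you can confirm that the install directory was produced. This is a sketch that assumes you are still in the build directory; the subdirectories named in the comment reflect the usual Paddle Inference install layout and may vary between versions.

```shell
# Sanity-check sketch: run from the build/ directory after make finishes.
DIR=paddle_inference_install_dir
if [ -d "$DIR" ]; then
  # The install tree normally contains paddle/include, paddle/lib,
  # third_party, and a version.txt recording the build configuration.
  ls "$DIR"
else
  echo "build output not found: $DIR"
fi
```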
Step 2: Get the prediction sample code, then compile and run it
# Get the example code
git clone https://github.com/PaddlePaddle/Paddle-Inference-Demo
Copy the C++ prediction library built in Step 1 into Paddle-Inference-Demo/c++/lib/ and rename it to paddle_inference.
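Concretely, the copy-and-rename step might look like the following. The SRC path is an assumption based on the build layout from Step 1, and it presumes the Paddle and Paddle-Inference-Demo checkouts sit side by side; adjust both paths to your own layout.

```shell
# Sketch: copy the built inference library into the demo's lib directory.
# SRC and DST are assumed paths; adapt them to your checkout locations.
SRC=Paddle/build/paddle_inference_install_dir
DST=Paddle-Inference-Demo/c++/lib/paddle_inference
mkdir -p "$(dirname "$DST")"
if [ -d "$SRC" ]; then
  cp -r "$SRC" "$DST"
else
  echo "expected build output at $SRC" >&2
fi
```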
cd Paddle-Inference-Demo/c++/paddle-ipu
# Compile
bash ./compile.sh
# Run
bash ./run.sh
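If the demo build fails because the library cannot be found, a preflight check like this can confirm the library is in place first. This is a sketch: the ../lib/paddle_inference path mirrors the copy step above and assumes you run it from the paddle-ipu demo directory.

```shell
# Preflight sketch: run from Paddle-Inference-Demo/c++/paddle-ipu.
LIB=../lib/paddle_inference
if [ -d "$LIB" ]; then
  echo "found inference library at $LIB"
else
  echo "missing $LIB: copy the built prediction library there first"
fi
```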