mmdeploy/csrc/core/mat.h

// Copyright (c) OpenMMLab. All rights reserved.
#ifndef CORE_MAT_H
#define CORE_MAT_H
#include <memory>
#include <vector>
#include "core/device.h"
#include "core/types.h"
namespace mmdeploy {
class MMDEPLOY_API Mat final {
public:
Mat() = default;
/**
* @brief construct a Mat for an image
* @param h height of an image
* @param w width of an image
* @param format pixel format of an image, rgb, bgr, gray etc. Note that in
* case of nv12 or nv21, height is the real height of an image,
* not height * 3 / 2
   * @param type data type of a pixel in each channel
   * @param device device where the Mat's buffer is stored
*/
Mat(int h, int w, PixelFormat format, DataType type, Device device = Device{0},
Allocator allocator = {});
  /** @brief construct a Mat for an image using custom data
   * @example
   * ``` c++
   * cv::Mat image = cv::imread("test.jpg");
   * // no-op deleter; the capture keeps `image` (and thus its buffer) alive
   * std::shared_ptr<void> data(image.data, [image = image](void*) {});
   * mmdeploy::Mat mat(image.rows, image.cols, PixelFormat::kBGR, DataType::kINT8, data);
   * ```
* @param h height of an image
* @param w width of an image
* @param format pixel format of an image, rgb, bgr, gray etc. Note that in
* case of nv12 or nv21, height is the real height of an image,
* not height * 3 / 2
   * @param type data type of a pixel in each channel
   * @param data custom data
   * @param device device where `data` resides
*/
Mat(int h, int w, PixelFormat format, DataType type, std::shared_ptr<void> data,
Device device = Device{0});
/**
* @brief construct a Mat for an image using custom data
* @param h height of an image
* @param w width of an image
* @param format pixel format of an image, rgb, bgr, gray etc. Note that in
* case of nv12 or nv21, height is the real height of an image,
* not height * 3 / 2
   * @param type data type of a pixel in each channel
   * @param data custom data
   * @param device device where `data` resides
*/
Mat(int h, int w, PixelFormat format, DataType type, void* data, Device device = Device{0});
Device device() const;
Buffer& buffer();
const Buffer& buffer() const;
PixelFormat pixel_format() const { return format_; }
DataType type() const { return type_; }
int height() const { return height_; }
int width() const { return width_; }
int channel() const { return channel_; }
int size() const { return size_; }
int byte_size() const { return bytes_; }
template <typename T>
T* data() const {
return reinterpret_cast<T*>(buf_.GetNative());
}
private:
Buffer buf_;
PixelFormat format_{PixelFormat::kGRAYSCALE};
DataType type_{DataType::kINT8};
int width_{0};
int height_{0};
int channel_{0};
  int size_{0};   // number of elements in the mat
  int bytes_{0};  // size of the mat's data in bytes
};
} // namespace mmdeploy
#endif // !CORE_MAT_H