lzhangzz 640aa03538
Support Windows (#106)
* minor changes

* support windows

* fix GCC build

* fix lint

* reformat

* fix Windows build

* fix GCC build

* search backend ops for onnxruntime

* fix lint

* fix lint

* code clean-up

* code clean-up

* fix clang build

* fix trt support

* fix cmake for ncnn

* fix cmake for openvino

* fix SDK Python API

* handle ops for other backends (ncnn, trt)

* handle SDK Python API library location

* robustify linkage

* fix cuda

* minor fix for openvino & ncnn

* use CMAKE_CUDA_ARCHITECTURES if set

* fix cuda preprocessor

* fix misc

* fix pplnn & pplcv, drop support for pplcv<0.6.0

* robustify cmake

* update build.md (#2)

* build dynamic modules as module library & fix demo (partially)

* fix candidate path for mmdeploy_python

* move "enable CUDA" to cmake config for demo

* refine demo cmake

* add comment

* fix ubuntu build

* revert docs/en/build.md

* fix C API

* fix lint

* Windows build doc (#3)

* check in docs related to mmdeploy build on windows

* update build guide on windows platform

* update build guide on windows platform

* make path of thirdparty libraries consistent

* make path consistency

* correct build command for custom ops

* correct build command for sdk

* update sdk build instructions

* update doc

* correct build command

* fix lint

* correct build command and fix lint

Co-authored-by: lvhan <lvhan@pjlab.org>

* trailing whitespace (#4)

* minor fix

* fix sr sdk model

* fix type deduction

* fix cudaFree after driver shutting down

* update ppl.cv installation warning (#5)

* fix device allocator threshold & fix lint

* update doc (#6)

* update ppl.cv installation warning

* missing 'git clone'

Co-authored-by: chenxin <chenxin2@sensetime.com>
Co-authored-by: zhangli <zhangli@sensetime.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: lvhan <lvhan@pjlab.org>
2022-02-24 20:08:44 +08:00

// Copyright (c) OpenMMLab. All rights reserved.

#include "load.h"

#include <cassert>

#include "archive/json_archive.h"

namespace mmdeploy {

PrepareImageImpl::PrepareImageImpl(const Value& args) : TransformImpl(args) {
  arg_.to_float32 = args.value("to_float32", false);
  arg_.color_type = args.value("color_type", std::string("color"));
}

/**
 * Input:
 *   {
 *     "ori_img": cv::Mat,
 *     "attribute": {
 *     }
 *   }
 * Output:
 *   {
 *     "ori_img": cv::Mat,
 *     "img": Tensor,
 *     "img_shape": [],
 *     "ori_shape": [],
 *     "img_fields": ["img"],
 *     "attribute": {
 *     }
 *   }
 */
Result<Value> PrepareImageImpl::Process(const Value& input) {
  MMDEPLOY_DEBUG("input: {}", to_json(input).dump(2));
  assert(input.contains("ori_img"));

  // copy input data, and update its properties later
  Value output = input;

  Mat src_mat = input["ori_img"].get<Mat>();
  // convert to BGR or grayscale depending on the requested color type
  auto res = (arg_.color_type == "color" || arg_.color_type == "color_ignore_orientation"
                  ? ConvertToBGR(src_mat)
                  : ConvertToGray(src_mat));
  OUTCOME_TRY(auto tensor, std::move(res));

  output["img"] = tensor;
  // record the shape of the converted tensor and of the original image
  for (auto v : tensor.desc().shape) {
    output["img_shape"].push_back(v);
  }
  output["ori_shape"] = {1, src_mat.height(), src_mat.width(), src_mat.channel()};
  output["img_fields"].push_back("img");

  MMDEPLOY_DEBUG("output: {}", to_json(output).dump(2));
  return output;
}

PrepareImage::PrepareImage(const Value& args, int version) : Transform(args) {
  auto impl_creator = Registry<PrepareImageImpl>::Get().GetCreator(specified_platform_, version);
  if (nullptr == impl_creator) {
    MMDEPLOY_ERROR("'PrepareImage' is not supported on '{}' platform", specified_platform_);
    throw std::domain_error("'PrepareImage' is not supported on specified platform");
  }
  impl_ = impl_creator->Create(args);
}

class PrepareImageCreator : public Creator<Transform> {
 public:
  PrepareImageCreator() = default;
  ~PrepareImageCreator() = default;
  const char* GetName() const override { return "LoadImageFromFile"; }
  int GetVersion() const override { return version_; }
  std::unique_ptr<Transform> Create(const Value& value) override {
    return std::make_unique<PrepareImage>(value, version_);
  }

 private:
  int version_{1};
};

REGISTER_MODULE(Transform, PrepareImageCreator);

MMDEPLOY_DEFINE_REGISTRY(PrepareImageImpl);

}  // namespace mmdeploy
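
For context, the sketch below illustrates how this transform might be constructed through the SDK registry. It is a minimal example only: the config keys ("to_float32", "color_type") and the registered name "LoadImageFromFile" come from the file above, while the helper MakePrepareImage, the include paths, and the exact Registry<Transform>::GetCreator signature are assumptions about the surrounding MMDeploy SDK.

// Usage sketch, not part of load.cpp. Include paths, the Value
// initializer-list construction, and the GetCreator(name) overload are
// assumptions; only the config keys and the registered name are taken
// from the file above.
#include <memory>

#include "core/registry.h"
#include "core/value.h"
#include "preprocess/transform/transform.h"

namespace mmdeploy {

std::unique_ptr<Transform> MakePrepareImage() {
  // Keys consumed by PrepareImageImpl's constructor; both have defaults.
  Value cfg{{"to_float32", false}, {"color_type", "color"}};
  // Look up the creator registered by REGISTER_MODULE(Transform, PrepareImageCreator).
  auto creator = Registry<Transform>::Get().GetCreator("LoadImageFromFile");
  return creator ? creator->Create(cfg) : nullptr;
}

}  // namespace mmdeploy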