lzhangzz 640aa03538
Support Windows (#106)
* minor changes
* support windows
* fix GCC build
* fix lint
* reformat
* fix Windows build
* fix GCC build
* search backend ops for onnxruntime
* fix lint
* fix lint
* code clean-up
* code clean-up
* fix clang build
* fix trt support
* fix cmake for ncnn
* fix cmake for openvino
* fix SDK Python API
* handle ops for other backends (ncnn, trt)
* handle SDK Python API library location
* robustify linkage
* fix cuda
* minor fix for openvino & ncnn
* use CMAKE_CUDA_ARCHITECTURES if set
* fix cuda preprocessor
* fix misc
* fix pplnn & pplcv, drop support for pplcv<0.6.0
* robustify cmake
* update build.md (#2)
* build dynamic modules as module library & fix demo (partially)
* fix candidate path for mmdeploy_python
* move "enable CUDA" to cmake config for demo
* refine demo cmake
* add comment
* fix ubuntu build
* revert docs/en/build.md
* fix C API
* fix lint
* Windows build doc (#3)
* check in docs related to mmdeploy build on windows
* update build guide on windows platform
* update build guide on windows platform
* make path of thirdparty libraries consistent
* make path consistency
* correct build command for custom ops
* correct build command for sdk
* update sdk build instructions
* update doc
* correct build command
* fix lint
* correct build command and fix lint

Co-authored-by: lvhan <lvhan@pjlab.org>

* trailing whitespace (#4)
* minor fix
* fix sr sdk model
* fix type deduction
* fix cudaFree after driver shutting down
* update ppl.cv installation warning (#5)
* fix device allocator threshold & fix lint
* update doc (#6)
* update ppl.cv installation warning
* missing 'git clone'

Co-authored-by: chenxin <chenxin2@sensetime.com>
Co-authored-by: zhangli <zhangli@sensetime.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: lvhan <lvhan@pjlab.org>
2022-02-24 20:08:44 +08:00


// Copyright (c) OpenMMLab. All rights reserved.
#include "transform.h"
#include "core/registry.h"
#include "core/utils/formatter.h"
namespace mmdeploy {
TransformImpl::TransformImpl(const Value &args) {
if (args.contains("context")) {
args["context"]["device"].get_to(device_);
args["context"]["stream"].get_to(stream_);
} else {
throw_exception(eNotSupported);
}
}
std::vector<std::string> TransformImpl::GetImageFields(const Value &input) {
if (input.contains("img_fields")) {
if (input["img_fields"].is_string()) {
return {input["img_fields"].get<std::string>()};
} else if (input["img_fields"].is_array()) {
std::vector<std::string> img_fields;
for (auto &v : input["img_fields"]) {
img_fields.push_back(v.get<std::string>());
}
return img_fields;
}
} else {
return {"img"};
}
throw_exception(eInvalidArgument);
}
Transform::Transform(const Value &args) {
Device device{"cpu"};
if (args.contains("context")) {
device = args["context"].value("device", device);
}
Platform platform(device.platform_id());
specified_platform_ = platform.GetPlatformName();
if (!(specified_platform_ == "cpu")) {
// add cpu platform, so that a transform op can fall back to its cpu
// version if it hasn't implementation on the specific platform
candidate_platforms_.push_back("cpu");
}
}
MMDEPLOY_DEFINE_REGISTRY(Transform);
} // namespace mmdeploy
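
For reference, GetImageFields accepts the "img_fields" entry either as a single string or as an array of strings, and falls back to the default field "img" when the key is absent. Below is a minimal, hypothetical sketch of those three input shapes; it assumes mmdeploy::Value can be brace-initialized in the nlohmann::json style, which this file alone does not confirm, and the field names "lq" and "ref" are merely example values.

// Hypothetical inputs illustrating the three branches of GetImageFields
// (assumption: mmdeploy::Value supports json-style brace initialization).
void ImgFieldsExamples() {
  using mmdeploy::Value;

  // 1. "img_fields" given as a single string -> GetImageFields returns {"lq"}
  Value a{{"img_fields", "lq"}};

  // 2. "img_fields" given as an array        -> GetImageFields returns {"img", "ref"}
  Value b{{"img_fields", {"img", "ref"}}};

  // 3. "img_fields" absent                   -> GetImageFields falls back to {"img"}
  Value c{{"ori_img", nullptr}};
}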