commit ca27f37dd8 ("add")
parent a72fe4d10e

README.md: 120 lines changed

@@ -1 +1,119 @@
# Re-ranking Person Re-identification with k-reciprocal Encoding

### This repository provides the IDE baseline for Market-1501 and the new CUHK03 training/testing protocol.

### The re-ranking code is available upon request.

If you find this code useful in your research, please consider citing:

    @inproceedings{zhong2017re,
      title={Re-ranking Person Re-identification with k-reciprocal Encoding},
      author={Zhong, Zhun and Zheng, Liang and Cao, Donglin and Li, Shaozi},
      booktitle={CVPR},
      year={2017}
    }
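
While the released re-ranking code is distributed separately, the core idea is compact. Below is a minimal NumPy sketch of it, written for clarity rather than speed. It is simplified: it omits the paper's k-reciprocal neighbor-set expansion and local query expansion steps, and the `k` and `lambda_value` defaults here are illustrative, not the tuned values from the paper.

```python
import numpy as np

def k_reciprocal_rerank(dist, k=20, lambda_value=0.3):
    """Sketch of k-reciprocal re-ranking over an (n, n) matrix of
    original pairwise distances (queries and gallery stacked together)."""
    n = dist.shape[0]
    ranks = np.argsort(dist, axis=1)                  # row i: i's neighbors by distance
    knn = [set(ranks[i, :k + 1]) for i in range(n)]   # +1 because i ranks itself first
    # k-reciprocal neighbors: keep j only if i is also among j's k-nearest
    recip = [set(j for j in knn[i] if i in knn[j]) for i in range(n)]
    # encode each sample's neighbor set as a soft, distance-weighted vector
    V = np.zeros((n, n))
    for i in range(n):
        idx = np.fromiter(recip[i], dtype=int)
        weights = np.exp(-dist[i, idx])
        V[i, idx] = weights / weights.sum()
    # Jaccard distance between the encoded neighbor vectors
    jaccard = np.zeros((n, n))
    for i in range(n):
        overlap = np.minimum(V[i], V).sum(axis=1)     # sum of elementwise minima
        jaccard[i] = 1.0 - overlap / (2.0 - overlap)  # rows of V sum to 1
    # final distance: blend the Jaccard and original distances
    return (1 - lambda_value) * jaccard + lambda_value * dist
```
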

### Requirements: Caffe

Requirements for `Caffe` and `matcaffe` (see: [Caffe installation instructions](http://caffe.berkeleyvision.org/installation.html)).

### Installation
1. Build Caffe and matcaffe
    ```Shell
    cd $Re-ranking_ROOT/caffe
    # Now follow the Caffe installation instructions here:
    #   http://caffe.berkeleyvision.org/installation.html
    make -j8 && make matcaffe
    ```
2. Download the pre-computed ImageNet models and the Market-1501 and CUHK03 datasets:
    - Download the pre-trained ImageNet models and put them in the "data/imagenet_models" folder.
    - Download the Market-1501 dataset and unzip it in the "evaluation/data/Market-1501" folder.
    - Download the CUHK03 dataset and unzip it in the "evaluation/data/CUHK03" folder.

- [Pre-trained imagenet models](https://pan.baidu.com/s/1o7YZT8Y)

- [Market-1501](https://pan.baidu.com/s/1ntIi2Op)

- CUHK03 [[Baiduyun]](https://pan.baidu.com/s/1o8txURK) [[Google drive]](https://drive.google.com/open?id=0B7TOZKXmIjU3OUhfd3BPaVRHZVE)

### The new training/testing protocol for CUHK03

The new training/testing protocol split for CUHK03 used in our paper is in the "evaluation/data/CUHK03/" folder.
- cuhk03_new_protocol_config_detected.mat
- cuhk03_new_protocol_config_labeled.mat
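
To sanity-check a downloaded split from Python, here is a small sketch. It assumes only that the .mat files are plain (pre-v7.3) MATLAB files, which SciPy can read; it prints whatever variables the files actually contain rather than assuming any field names.

```python
from scipy.io import loadmat

cfg = loadmat('evaluation/data/CUHK03/cuhk03_new_protocol_config_detected.mat')
for key, value in cfg.items():
    if not key.startswith('__'):                 # skip MATLAB file-header entries
        print(key, getattr(value, 'shape', value))
```
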

### Training and testing the IDE model

1. Training
    ```Shell
    cd $Re-ranking_ROOT
    # train IDE ResNet_50 for Market-1501
    ./experiments/Market-1501/train_IDE_ResNet_50.sh

    # train IDE ResNet_50 for CUHK03
    ./experiments/CUHK03/train_IDE_ResNet_50_labeled.sh
    ./experiments/CUHK03/train_IDE_ResNet_50_detected.sh
    ```
2. Feature extraction
    ```Shell
    cd $Re-ranking_ROOT/evaluation
    # extract features for Market-1501
    matlab -nodisplay -r "Market_1501_extract_feature; exit"

    # extract features for CUHK03
    matlab -nodisplay -r "CUHK03_extract_feature; exit"
    ```
3. Evaluation with our re-ranking method (a Python sketch of the metric computation follows this list)
    ```Shell
    cd $Re-ranking_ROOT/evaluation
    # evaluation for Market-1501
    matlab -nodisplay -r "Market_1501_evaluation; exit"

    # evaluation for CUHK03
    matlab -nodisplay -r "CUHK03_evaluation; exit"
    ```
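
For readers who want to check the metrics themselves, here is a rough Python sketch of rank-1 and mAP computation from a query-by-gallery distance matrix. The MATLAB scripts above remain the reference implementation; in particular, this sketch ignores the junk/same-camera filtering that the standard Market-1501 protocol applies.

```python
import numpy as np

def rank1_and_map(dist, query_ids, gallery_ids):
    """dist: (num_query, num_gallery) distances; ids: integer identity labels."""
    rank1_hits, average_precisions = [], []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                   # gallery sorted by distance
        matches = (gallery_ids[order] == query_ids[i]).astype(np.float64)
        rank1_hits.append(matches[0])
        if matches.sum() > 0:                         # average precision for this query
            precision = np.cumsum(matches) / (np.arange(matches.size) + 1)
            average_precisions.append((precision * matches).sum() / matches.sum())
    return np.mean(rank1_hits), np.mean(average_precisions)
```
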

### Results

You can download our pre-trained IDE models and IDE features and put them in the "out_put" and "evaluation/feat" folders, respectively.

- IDE models [[Baiduyun]](https://pan.baidu.com/s/1jHVj2C2) [[Google drive]](https://drive.google.com/open?id=0B7TOZKXmIjU3ZTNsWGt3azcxUUU)

- IDE features [[Baiduyun]](https://pan.baidu.com/s/1c1TtKcw) [[Google drive]](https://drive.google.com/open?id=0B7TOZKXmIjU3ODhaRm8yN2QzRHc)

Using the above IDE models and IDE features, you can reproduce the results with our re-ranking method as follows:

- Market-1501

| Methods | Rank@1 | mAP |
| ------- | ------ | --- |
| IDE_ResNet_50 + Euclidean | 78.92% | 55.03% |
| IDE_ResNet_50 + XQDA | 77.58% | 56.06% |

For Market-1501, these results are better than those reported in our paper because we add a dropout layer (ratio = 0.5) after pool5.
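
For illustration only, the added layer can be written with pycaffe's NetSpec. Here `pool5` is a stand-in Input blob for the pool5 output of the IDE ResNet-50 definition, which is not reproduced in this sketch:

```python
import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.pool5 = L.Input(shape=dict(dim=[1, 2048, 1, 1]))   # placeholder for the real pool5
n.pool5_drop = L.Dropout(n.pool5, dropout_ratio=0.5, in_place=True)
print(n.to_proto())                                  # emits the prototxt fragment
```
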

- CUHK03 under the new training/testing protocol

| Methods | Labeled Rank@1 | Labeled mAP | Detected Rank@1 | Detected mAP |
| ------- | ------ | ---- | ------ | ---- |
| IDE_CaffeNet + Euclidean | 15.6% | 14.9% | 15.1% | 14.2% |
| IDE_CaffeNet + XQDA | 21.9% | 20.0% | 21.1% | 19.0% |
| IDE_ResNet_50 + Euclidean | 22.2% | 21.0% | 21.3% | 19.7% |
| IDE_ResNet_50 + XQDA | 32.0% | 29.6% | 31.1% | 28.2% |

### Contact us

If you have any questions about this code, please do not hesitate to contact us.

[Zhun Zhong](http://zhunzhong.site)

[Liang Zheng](http://liangzheng.com.cn)

[File diff suppressed because it is too large]

@@ -0,0 +1,19 @@
Please use the [caffe-users list](https://groups.google.com/forum/#!forum/caffe-users) for usage, installation, or modeling questions, or other requests for help.
_Do not post such requests to Issues._ Doing so interferes with the development of Caffe.

Please read the [guidelines for contributing](https://github.com/BVLC/caffe/blob/master/CONTRIBUTING.md) before submitting this issue.

### Issue summary


### Steps to reproduce

If you are having difficulty building Caffe or training a model, please ask the caffe-users mailing list. If you are reporting a build error that seems to be due to a bug in Caffe, please attach your build configuration (either Makefile.config or CMakeCache.txt) and the output of the make (or cmake) command.

### Your system configuration
Operating system:
Compiler:
CUDA version (if applicable):
CUDNN version (if applicable):
BLAS:
Python or MATLAB version (for pycaffe and matcaffe respectively):

@@ -0,0 +1,67 @@

dist: trusty
sudo: required

language: cpp
compiler: gcc

env:
  global:
    - NUM_THREADS=4
  matrix:
    # Use a build matrix to test many builds in parallel
    # envvar defaults:
    #   WITH_CMAKE: false
    #   WITH_PYTHON3: false
    #   WITH_IO: true
    #   WITH_CUDA: false
    #   WITH_CUDNN: false
    - BUILD_NAME="default-make"
    # - BUILD_NAME="python3-make" WITH_PYTHON3=true
    - BUILD_NAME="no-io-make" WITH_IO=false
    - BUILD_NAME="cuda-make" WITH_CUDA=true
    - BUILD_NAME="cudnn-make" WITH_CUDA=true WITH_CUDNN=true

    - BUILD_NAME="default-cmake" WITH_CMAKE=true
    - BUILD_NAME="python3-cmake" WITH_CMAKE=true WITH_PYTHON3=true
    - BUILD_NAME="no-io-cmake" WITH_CMAKE=true WITH_IO=false
    - BUILD_NAME="cuda-cmake" WITH_CMAKE=true WITH_CUDA=true
    - BUILD_NAME="cudnn-cmake" WITH_CMAKE=true WITH_CUDA=true WITH_CUDNN=true

cache:
  apt: true
  directories:
    - ~/protobuf3

before_install:
  - source ./scripts/travis/defaults.sh

install:
  - sudo -E ./scripts/travis/install-deps.sh
  - ./scripts/travis/setup-venv.sh ~/venv
  - source ~/venv/bin/activate
  - ./scripts/travis/install-python-deps.sh

before_script:
  - ./scripts/travis/configure.sh

script:
  - ./scripts/travis/build.sh
  - ./scripts/travis/test.sh

notifications:
  # Emails are sent to the committer's git-configured email address by default,
  # but only if they have access to the repository. To enable Travis on your
  # public fork of Caffe, just go to travis-ci.org and flip the switch on for
  # your Caffe fork. To configure your git email address, use:
  #     git config --global user.email me@example.com
  email:
    on_success: always
    on_failure: always

  # IRC notifications disabled by default.
  # Uncomment next 5 lines to send notifications to chat.freenode.net#caffe
  # irc:
  #   channels:
  #     - "chat.freenode.net#caffe"
  #   template:
  #     - "%{repository}/%{branch} (%{commit} - %{author}): %{message}"

@@ -0,0 +1,110 @@

cmake_minimum_required(VERSION 2.8.7)
if(POLICY CMP0046)
  cmake_policy(SET CMP0046 NEW)
endif()
if(POLICY CMP0054)
  cmake_policy(SET CMP0054 NEW)
endif()

# ---[ Caffe project
project(Caffe C CXX)

# ---[ Caffe version
set(CAFFE_TARGET_VERSION "1.0.0-rc5" CACHE STRING "Caffe logical version")
set(CAFFE_TARGET_SOVERSION "1.0.0-rc5" CACHE STRING "Caffe soname version")
add_definitions(-DCAFFE_VERSION=${CAFFE_TARGET_VERSION})

# ---[ Using cmake scripts and modules
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Modules)

include(ExternalProject)

include(cmake/Utils.cmake)
include(cmake/Targets.cmake)
include(cmake/Misc.cmake)
include(cmake/Summary.cmake)
include(cmake/ConfigGen.cmake)

# ---[ Options
caffe_option(CPU_ONLY "Build Caffe without CUDA support" OFF) # TODO: rename to USE_CUDA
caffe_option(USE_CUDNN "Build Caffe with cuDNN library support" ON IF NOT CPU_ONLY)
caffe_option(USE_NCCL "Build Caffe with NCCL library support" OFF)
caffe_option(BUILD_SHARED_LIBS "Build shared libraries" ON)
caffe_option(BUILD_python "Build Python wrapper" ON)
set(python_version "2" CACHE STRING "Specify which Python version to use")
caffe_option(BUILD_matlab "Build Matlab wrapper" OFF IF UNIX OR APPLE)
caffe_option(BUILD_docs "Build documentation" ON IF UNIX OR APPLE)
caffe_option(BUILD_python_layer "Build the Caffe Python layer" ON)
caffe_option(USE_OPENCV "Build with OpenCV support" ON)
caffe_option(USE_LEVELDB "Build with levelDB" ON)
caffe_option(USE_LMDB "Build with lmdb" ON)
caffe_option(ALLOW_LMDB_NOLOCK "Allow MDB_NOLOCK when reading LMDB files (only if necessary)" OFF)
caffe_option(USE_OPENMP "Link with OpenMP (when your BLAS wants OpenMP and you get linker errors)" OFF)

# ---[ Dependencies
include(cmake/Dependencies.cmake)

# ---[ Flags
if(UNIX OR APPLE)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -Wall")
endif()

caffe_set_caffe_link()

if(USE_libstdcpp)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libstdc++")
  message("-- Warning: forcing libstdc++ (controlled by USE_libstdcpp option in cmake)")
endif()

# ---[ Warnings
caffe_warnings_disable(CMAKE_CXX_FLAGS -Wno-sign-compare -Wno-uninitialized)

# ---[ Config generation
configure_file(cmake/Templates/caffe_config.h.in "${PROJECT_BINARY_DIR}/caffe_config.h")

# ---[ Includes
set(Caffe_INCLUDE_DIR ${PROJECT_SOURCE_DIR}/include)
set(Caffe_SRC_DIR ${PROJECT_SOURCE_DIR}/src)
include_directories(${PROJECT_BINARY_DIR})

# ---[ Includes & defines for CUDA

# cuda_compile() does not have per-call dependencies or include paths
# (cuda_compile() has per-call flags, but we set them here too for clarity)
#
# list(REMOVE_ITEM ...) invocations remove PRIVATE and PUBLIC keywords from collected definitions and include paths
if(HAVE_CUDA)
  # pass include paths to cuda_include_directories()
  set(Caffe_ALL_INCLUDE_DIRS ${Caffe_INCLUDE_DIRS})
  list(REMOVE_ITEM Caffe_ALL_INCLUDE_DIRS PRIVATE PUBLIC)
  cuda_include_directories(${Caffe_INCLUDE_DIR} ${Caffe_SRC_DIR} ${Caffe_ALL_INCLUDE_DIRS})

  # add definitions to nvcc flags directly
  set(Caffe_ALL_DEFINITIONS ${Caffe_DEFINITIONS})
  list(REMOVE_ITEM Caffe_ALL_DEFINITIONS PRIVATE PUBLIC)
  list(APPEND CUDA_NVCC_FLAGS ${Caffe_ALL_DEFINITIONS})
endif()

# ---[ Subdirectories
add_subdirectory(src/gtest)
add_subdirectory(src/caffe)
add_subdirectory(tools)
add_subdirectory(examples)
add_subdirectory(python)
add_subdirectory(matlab)
add_subdirectory(docs)

# ---[ Linter target
add_custom_target(lint COMMAND ${CMAKE_COMMAND} -P ${PROJECT_SOURCE_DIR}/cmake/lint.cmake)

# ---[ pytest target
if(BUILD_python)
  add_custom_target(pytest COMMAND python${python_version} -m unittest discover -s caffe/test WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/python )
  add_dependencies(pytest pycaffe)
endif()

# ---[ Configuration summary
caffe_print_configuration_summary()

# ---[ Export configs generation
caffe_generate_export_configs()

@@ -0,0 +1,30 @@

# Contributing

## Issues

Specific Caffe design and development issues, bugs, and feature requests are maintained by GitHub Issues.

_Please do not post usage, installation, or modeling questions, or other requests for help to Issues._
Use the [caffe-users list](https://groups.google.com/forum/#!forum/caffe-users) instead. This helps developers maintain a clear, uncluttered, and efficient view of the state of Caffe.

When reporting a bug, it's most helpful to provide the following information, where applicable:

* What steps reproduce the bug?
* Can you reproduce the bug using the latest [master](https://github.com/BVLC/caffe/tree/master), compiled with the `DEBUG` make option?
* What hardware and operating system/distribution are you running?
* If the bug is a crash, provide the backtrace (usually printed by Caffe; always obtainable with `gdb`).

Try to give your issue a title that is succinct and specific. The devs will rename issues as needed to keep track of them.

## Pull Requests

Caffe welcomes all contributions.

See the [contributing guide](http://caffe.berkeleyvision.org/development.html) for details.

Briefly: read commit by commit, a PR should tell a clean, compelling story of _one_ improvement to Caffe. In particular:

* A PR should do one clear thing that obviously improves Caffe, and nothing more. Making many smaller PRs is better than making one large PR; review effort is superlinear in the amount of code involved.
* Similarly, each commit should be a small, atomic change representing one step in development. PRs should be made of many commits where appropriate.
* Please do rewrite PR history to be clean rather than chronological. Within-PR bugfixes, style cleanups, reversions, etc. should be squashed and should not appear in merged PR history.
* Anything nonobvious from the code should be explained in comments, commit messages, or the PR description, as appropriate.

@@ -0,0 +1,19 @@

# Contributors

Caffe is developed by a core set of BVLC members and the open-source community.

We thank all of our [contributors](https://github.com/BVLC/caffe/graphs/contributors)!

**For the detailed history of contributions** of a given file, try

    git blame file

to see line-by-line credits and

    git log --follow file

to see the change log even across renames and rewrites.

Please refer to the [acknowledgements](http://caffe.berkeleyvision.org/#acknowledgements) on the Caffe site for further details.

**Copyright** is held by the original contributor according to the versioning history; see LICENSE.

@@ -0,0 +1,7 @@

# Installation

See http://caffe.berkeleyvision.org/installation.html for the latest
installation instructions.

Check the users group in case you need help:
https://groups.google.com/forum/#!forum/caffe-users

@@ -0,0 +1,44 @@

COPYRIGHT

All contributions by the University of California:
Copyright (c) 2014-2017 The Regents of the University of California (Regents)
All rights reserved.

All other contributions:
Copyright (c) 2014-2017, the respective contributors
All rights reserved.

Caffe uses a shared copyright model: each contributor holds copyright over
their contributions to Caffe. The project versioning records all such
contribution and copyright details. If a contributor wants to further mark
their specific copyright on a particular contribution, they should indicate
their copyright solely in the commit message of the change when it is
committed.

LICENSE

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

CONTRIBUTION AGREEMENT

By contributing to the BVLC/caffe repository through pull-request, comment,
or otherwise, the contributor releases their content to the
license and copyright terms herein.

@@ -0,0 +1,699 @@

PROJECT := caffe

CONFIG_FILE := Makefile.config
# Explicitly check for the config file, otherwise make -k will proceed anyway.
ifeq ($(wildcard $(CONFIG_FILE)),)
$(error $(CONFIG_FILE) not found. See $(CONFIG_FILE).example.)
endif
include $(CONFIG_FILE)

BUILD_DIR_LINK := $(BUILD_DIR)
ifeq ($(RELEASE_BUILD_DIR),)
	RELEASE_BUILD_DIR := .$(BUILD_DIR)_release
endif
ifeq ($(DEBUG_BUILD_DIR),)
	DEBUG_BUILD_DIR := .$(BUILD_DIR)_debug
endif

DEBUG ?= 0
ifeq ($(DEBUG), 1)
	BUILD_DIR := $(DEBUG_BUILD_DIR)
	OTHER_BUILD_DIR := $(RELEASE_BUILD_DIR)
else
	BUILD_DIR := $(RELEASE_BUILD_DIR)
	OTHER_BUILD_DIR := $(DEBUG_BUILD_DIR)
endif

# All of the directories containing code.
SRC_DIRS := $(shell find * -type d -exec bash -c "find {} -maxdepth 1 \
	\( -name '*.cpp' -o -name '*.proto' \) | grep -q ." \; -print)

# The target shared library name
LIBRARY_NAME := $(PROJECT)
LIB_BUILD_DIR := $(BUILD_DIR)/lib
STATIC_NAME := $(LIB_BUILD_DIR)/lib$(LIBRARY_NAME).a
DYNAMIC_VERSION_MAJOR := 1
DYNAMIC_VERSION_MINOR := 0
DYNAMIC_VERSION_REVISION := 0-rc5
DYNAMIC_NAME_SHORT := lib$(LIBRARY_NAME).so
#DYNAMIC_SONAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR)
DYNAMIC_VERSIONED_NAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)
DYNAMIC_NAME := $(LIB_BUILD_DIR)/$(DYNAMIC_VERSIONED_NAME_SHORT)
COMMON_FLAGS += -DCAFFE_VERSION=$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)

##############################
# Get all source files
##############################
# CXX_SRCS are the source files excluding the test ones.
CXX_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cpp" -name "*.cpp")
# CU_SRCS are the cuda source files
CU_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cu" -name "*.cu")
# TEST_SRCS are the test source files
TEST_MAIN_SRC := src/$(PROJECT)/test/test_caffe_main.cpp
TEST_SRCS := $(shell find src/$(PROJECT) -name "test_*.cpp")
TEST_SRCS := $(filter-out $(TEST_MAIN_SRC), $(TEST_SRCS))
TEST_CU_SRCS := $(shell find src/$(PROJECT) -name "test_*.cu")
GTEST_SRC := src/gtest/gtest-all.cpp
# TOOL_SRCS are the source files for the tool binaries
TOOL_SRCS := $(shell find tools -name "*.cpp")
# EXAMPLE_SRCS are the source files for the example binaries
EXAMPLE_SRCS := $(shell find examples -name "*.cpp")
# BUILD_INCLUDE_DIR contains any generated header files we want to include.
BUILD_INCLUDE_DIR := $(BUILD_DIR)/src
# PROTO_SRCS are the protocol buffer definitions
PROTO_SRC_DIR := src/$(PROJECT)/proto
PROTO_SRCS := $(wildcard $(PROTO_SRC_DIR)/*.proto)
# PROTO_BUILD_DIR will contain the .cc and obj files generated from
# PROTO_SRCS; PROTO_BUILD_INCLUDE_DIR will contain the .h header files
PROTO_BUILD_DIR := $(BUILD_DIR)/$(PROTO_SRC_DIR)
PROTO_BUILD_INCLUDE_DIR := $(BUILD_INCLUDE_DIR)/$(PROJECT)/proto
# NONGEN_CXX_SRCS includes all source/header files except those generated
# automatically (e.g., by proto).
NONGEN_CXX_SRCS := $(shell find \
	src/$(PROJECT) \
	include/$(PROJECT) \
	python/$(PROJECT) \
	matlab/+$(PROJECT)/private \
	examples \
	tools \
	-name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh")
LINT_SCRIPT := scripts/cpp_lint.py
LINT_OUTPUT_DIR := $(BUILD_DIR)/.lint
LINT_EXT := lint.txt
LINT_OUTPUTS := $(addsuffix .$(LINT_EXT), $(addprefix $(LINT_OUTPUT_DIR)/, $(NONGEN_CXX_SRCS)))
EMPTY_LINT_REPORT := $(BUILD_DIR)/.$(LINT_EXT)
NONEMPTY_LINT_REPORT := $(BUILD_DIR)/$(LINT_EXT)
# PY$(PROJECT)_SRC is the python wrapper for $(PROJECT)
PY$(PROJECT)_SRC := python/$(PROJECT)/_$(PROJECT).cpp
PY$(PROJECT)_SO := python/$(PROJECT)/_$(PROJECT).so
PY$(PROJECT)_HXX := include/$(PROJECT)/layers/python_layer.hpp
# MAT$(PROJECT)_SRC is the mex entrance point of matlab package for $(PROJECT)
MAT$(PROJECT)_SRC := matlab/+$(PROJECT)/private/$(PROJECT)_.cpp
ifneq ($(MATLAB_DIR),)
	MAT_SO_EXT := $(shell $(MATLAB_DIR)/bin/mexext)
endif
MAT$(PROJECT)_SO := matlab/+$(PROJECT)/private/$(PROJECT)_.$(MAT_SO_EXT)

##############################
# Derive generated files
##############################
# The generated files for protocol buffers
PROTO_GEN_HEADER_SRCS := $(addprefix $(PROTO_BUILD_DIR)/, \
	$(notdir ${PROTO_SRCS:.proto=.pb.h}))
PROTO_GEN_HEADER := $(addprefix $(PROTO_BUILD_INCLUDE_DIR)/, \
	$(notdir ${PROTO_SRCS:.proto=.pb.h}))
PROTO_GEN_CC := $(addprefix $(BUILD_DIR)/, ${PROTO_SRCS:.proto=.pb.cc})
PY_PROTO_BUILD_DIR := python/$(PROJECT)/proto
PY_PROTO_INIT := python/$(PROJECT)/proto/__init__.py
PROTO_GEN_PY := $(foreach file,${PROTO_SRCS:.proto=_pb2.py}, \
	$(PY_PROTO_BUILD_DIR)/$(notdir $(file)))
# The objects corresponding to the source files
# These objects will be linked into the final shared library, so we
# exclude the tool, example, and test objects.
CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o})
CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o})
PROTO_OBJS := ${PROTO_GEN_CC:.cc=.o}
OBJS := $(PROTO_OBJS) $(CXX_OBJS) $(CU_OBJS)
# tool, example, and test objects
TOOL_OBJS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o})
TOOL_BUILD_DIR := $(BUILD_DIR)/tools
TEST_CXX_BUILD_DIR := $(BUILD_DIR)/src/$(PROJECT)/test
TEST_CU_BUILD_DIR := $(BUILD_DIR)/cuda/src/$(PROJECT)/test
TEST_CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o})
TEST_CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o})
TEST_OBJS := $(TEST_CXX_OBJS) $(TEST_CU_OBJS)
GTEST_OBJ := $(addprefix $(BUILD_DIR)/, ${GTEST_SRC:.cpp=.o})
EXAMPLE_OBJS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o})
# Output files for automatic dependency generation
DEPS := ${CXX_OBJS:.o=.d} ${CU_OBJS:.o=.d} ${TEST_CXX_OBJS:.o=.d} \
	${TEST_CU_OBJS:.o=.d} $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}
# tool, example, and test bins
TOOL_BINS := ${TOOL_OBJS:.o=.bin}
EXAMPLE_BINS := ${EXAMPLE_OBJS:.o=.bin}
# symlinks to tool bins without the ".bin" extension
TOOL_BIN_LINKS := ${TOOL_BINS:.bin=}
# Put the test binaries in build/test for convenience.
TEST_BIN_DIR := $(BUILD_DIR)/test
TEST_CU_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
	$(foreach obj,$(TEST_CU_OBJS),$(basename $(notdir $(obj))))))
TEST_CXX_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
	$(foreach obj,$(TEST_CXX_OBJS),$(basename $(notdir $(obj))))))
TEST_BINS := $(TEST_CXX_BINS) $(TEST_CU_BINS)
# TEST_ALL_BIN is the test binary that links caffe dynamically.
TEST_ALL_BIN := $(TEST_BIN_DIR)/test_all.testbin

##############################
# Derive compiler warning dump locations
##############################
WARNS_EXT := warnings.txt
CXX_WARNS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o.$(WARNS_EXT)})
CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o.$(WARNS_EXT)})
TOOL_WARNS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o.$(WARNS_EXT)})
EXAMPLE_WARNS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o.$(WARNS_EXT)})
TEST_WARNS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o.$(WARNS_EXT)})
TEST_CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o.$(WARNS_EXT)})
ALL_CXX_WARNS := $(CXX_WARNS) $(TOOL_WARNS) $(EXAMPLE_WARNS) $(TEST_WARNS)
ALL_CU_WARNS := $(CU_WARNS) $(TEST_CU_WARNS)
ALL_WARNS := $(ALL_CXX_WARNS) $(ALL_CU_WARNS)

EMPTY_WARN_REPORT := $(BUILD_DIR)/.$(WARNS_EXT)
NONEMPTY_WARN_REPORT := $(BUILD_DIR)/$(WARNS_EXT)

##############################
# Derive include and lib directories
##############################
CUDA_INCLUDE_DIR := $(CUDA_DIR)/include

CUDA_LIB_DIR :=
# add <cuda>/lib64 only if it exists
ifneq ("$(wildcard $(CUDA_DIR)/lib64)","")
	CUDA_LIB_DIR += $(CUDA_DIR)/lib64
endif
CUDA_LIB_DIR += $(CUDA_DIR)/lib

INCLUDE_DIRS += $(BUILD_INCLUDE_DIR) ./src ./include
ifneq ($(CPU_ONLY), 1)
	INCLUDE_DIRS += $(CUDA_INCLUDE_DIR)
	LIBRARY_DIRS += $(CUDA_LIB_DIR)
	LIBRARIES := cudart cublas curand
endif

LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5

# handle IO dependencies
USE_LEVELDB ?= 1
USE_LMDB ?= 1
USE_OPENCV ?= 1

ifeq ($(USE_LEVELDB), 1)
	LIBRARIES += leveldb snappy
endif
ifeq ($(USE_LMDB), 1)
	LIBRARIES += lmdb
endif
ifeq ($(USE_OPENCV), 1)
	LIBRARIES += opencv_core opencv_highgui opencv_imgproc

	ifeq ($(OPENCV_VERSION), 3)
		LIBRARIES += opencv_imgcodecs
	endif

endif
PYTHON_LIBRARIES ?= boost_python python2.7
WARNINGS := -Wall -Wno-sign-compare

##############################
# Set build directories
##############################

DISTRIBUTE_DIR ?= distribute
DISTRIBUTE_SUBDIRS := $(DISTRIBUTE_DIR)/bin $(DISTRIBUTE_DIR)/lib
DIST_ALIASES := dist
ifneq ($(strip $(DISTRIBUTE_DIR)),distribute)
	DIST_ALIASES += distribute
endif

ALL_BUILD_DIRS := $(sort $(BUILD_DIR) $(addprefix $(BUILD_DIR)/, $(SRC_DIRS)) \
	$(addprefix $(BUILD_DIR)/cuda/, $(SRC_DIRS)) \
	$(LIB_BUILD_DIR) $(TEST_BIN_DIR) $(PY_PROTO_BUILD_DIR) $(LINT_OUTPUT_DIR) \
	$(DISTRIBUTE_SUBDIRS) $(PROTO_BUILD_INCLUDE_DIR))

##############################
# Set directory for Doxygen-generated documentation
##############################
DOXYGEN_CONFIG_FILE ?= ./.Doxyfile
# should be the same as OUTPUT_DIRECTORY in the .Doxyfile
DOXYGEN_OUTPUT_DIR ?= ./doxygen
DOXYGEN_COMMAND ?= doxygen
# All the files that might have Doxygen documentation.
DOXYGEN_SOURCES := $(shell find \
	src/$(PROJECT) \
	include/$(PROJECT) \
	python/ \
	matlab/ \
	examples \
	tools \
	-name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh" -or \
	-name "*.py" -or -name "*.m")
DOXYGEN_SOURCES += $(DOXYGEN_CONFIG_FILE)


##############################
# Configure build
##############################

# Determine platform
UNAME := $(shell uname -s)
ifeq ($(UNAME), Linux)
	LINUX := 1
else ifeq ($(UNAME), Darwin)
	OSX := 1
	OSX_MAJOR_VERSION := $(shell sw_vers -productVersion | cut -f 1 -d .)
	OSX_MINOR_VERSION := $(shell sw_vers -productVersion | cut -f 2 -d .)
endif

# Linux
ifeq ($(LINUX), 1)
	CXX ?= /usr/bin/g++
	GCCVERSION := $(shell $(CXX) -dumpversion | cut -f1,2 -d.)
	# older versions of gcc are too dumb to build boost with -Wuninitialized
	ifeq ($(shell echo | awk '{exit $(GCCVERSION) < 4.6;}'), 1)
		WARNINGS += -Wno-uninitialized
	endif
	# boost::thread is reasonably called boost_thread (compare OS X)
	# We will also explicitly add stdc++ to the link target.
	LIBRARIES += boost_thread stdc++
	VERSIONFLAGS += -Wl,-soname,$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../lib
endif

# OS X:
# clang++ instead of g++
# libstdc++ for NVCC compatibility on OS X >= 10.9 with CUDA < 7.0
ifeq ($(OSX), 1)
	CXX := /usr/bin/clang++
	ifneq ($(CPU_ONLY), 1)
		CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release [0-9.]*' | tr -d '[a-z ]')
		ifeq ($(shell echo | awk '{exit $(CUDA_VERSION) < 7.0;}'), 1)
			CXXFLAGS += -stdlib=libstdc++
			LINKFLAGS += -stdlib=libstdc++
		endif
		# clang throws this warning for cuda headers
		WARNINGS += -Wno-unneeded-internal-declaration
		# 10.11 strips DYLD_* env vars so link CUDA (rpath is available on 10.5+)
		OSX_10_OR_LATER := $(shell [ $(OSX_MAJOR_VERSION) -ge 10 ] && echo true)
		OSX_10_5_OR_LATER := $(shell [ $(OSX_MINOR_VERSION) -ge 5 ] && echo true)
		ifeq ($(OSX_10_OR_LATER),true)
			ifeq ($(OSX_10_5_OR_LATER),true)
				LDFLAGS += -Wl,-rpath,$(CUDA_LIB_DIR)
			endif
		endif
	endif
	# gtest needs to use its own tuple to not conflict with clang
	COMMON_FLAGS += -DGTEST_USE_OWN_TR1_TUPLE=1
	# boost::thread is called boost_thread-mt to mark multithreading on OS X
	LIBRARIES += boost_thread-mt
	# we need to explicitly ask for the rpath to be obeyed
	ORIGIN := @loader_path
	VERSIONFLAGS += -Wl,-install_name,@rpath/$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../../build/lib
else
	ORIGIN := \$$ORIGIN
endif

# Custom compiler
ifdef CUSTOM_CXX
	CXX := $(CUSTOM_CXX)
endif

# Static linking
ifneq (,$(findstring clang++,$(CXX)))
	STATIC_LINK_COMMAND := -Wl,-force_load $(STATIC_NAME)
else ifneq (,$(findstring g++,$(CXX)))
	STATIC_LINK_COMMAND := -Wl,--whole-archive $(STATIC_NAME) -Wl,--no-whole-archive
else
  # The following line must not be indented with a tab, since we are not inside a target
  $(error Cannot static link with the $(CXX) compiler)
endif

# Debugging
ifeq ($(DEBUG), 1)
	COMMON_FLAGS += -DDEBUG -g -O0
	NVCCFLAGS += -G
else
	COMMON_FLAGS += -DNDEBUG -O2
endif

# cuDNN acceleration configuration.
ifeq ($(USE_CUDNN), 1)
	LIBRARIES += cudnn
	COMMON_FLAGS += -DUSE_CUDNN
endif

# NCCL acceleration configuration
ifeq ($(USE_NCCL), 1)
	LIBRARIES += nccl
	COMMON_FLAGS += -DUSE_NCCL
endif

# configure IO libraries
ifeq ($(USE_OPENCV), 1)
	COMMON_FLAGS += -DUSE_OPENCV
endif
ifeq ($(USE_LEVELDB), 1)
	COMMON_FLAGS += -DUSE_LEVELDB
endif
ifeq ($(USE_LMDB), 1)
	COMMON_FLAGS += -DUSE_LMDB
	ifeq ($(ALLOW_LMDB_NOLOCK), 1)
		COMMON_FLAGS += -DALLOW_LMDB_NOLOCK
	endif
endif

# CPU-only configuration
ifeq ($(CPU_ONLY), 1)
	OBJS := $(PROTO_OBJS) $(CXX_OBJS)
	TEST_OBJS := $(TEST_CXX_OBJS)
	TEST_BINS := $(TEST_CXX_BINS)
	ALL_WARNS := $(ALL_CXX_WARNS)
	TEST_FILTER := --gtest_filter="-*GPU*"
	COMMON_FLAGS += -DCPU_ONLY
endif

# Python layer support
ifeq ($(WITH_PYTHON_LAYER), 1)
	COMMON_FLAGS += -DWITH_PYTHON_LAYER
	LIBRARIES += $(PYTHON_LIBRARIES)
endif

# BLAS configuration (default = ATLAS)
BLAS ?= atlas
ifeq ($(BLAS), mkl)
	# MKL
	LIBRARIES += mkl_rt
	COMMON_FLAGS += -DUSE_MKL
	MKLROOT ?= /opt/intel/mkl
	BLAS_INCLUDE ?= $(MKLROOT)/include
	BLAS_LIB ?= $(MKLROOT)/lib $(MKLROOT)/lib/intel64
else ifeq ($(BLAS), open)
	# OpenBLAS
	LIBRARIES += openblas
else
	# ATLAS
	ifeq ($(LINUX), 1)
		ifeq ($(BLAS), atlas)
			# Linux simply has cblas and atlas
			LIBRARIES += cblas atlas
		endif
	else ifeq ($(OSX), 1)
		# OS X packages atlas as the vecLib framework
		LIBRARIES += cblas
		# 10.10 has accelerate while 10.9 has veclib
		XCODE_CLT_VER := $(shell pkgutil --pkg-info=com.apple.pkg.CLTools_Executables | grep 'version' | sed 's/[^0-9]*\([0-9]\).*/\1/')
		XCODE_CLT_GEQ_7 := $(shell [ $(XCODE_CLT_VER) -gt 6 ] && echo 1)
		XCODE_CLT_GEQ_6 := $(shell [ $(XCODE_CLT_VER) -gt 5 ] && echo 1)
		ifeq ($(XCODE_CLT_GEQ_7), 1)
			BLAS_INCLUDE ?= /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/$(shell ls /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/ | sort | tail -1)/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/Headers
		else ifeq ($(XCODE_CLT_GEQ_6), 1)
			BLAS_INCLUDE ?= /System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/
			LDFLAGS += -framework Accelerate
		else
			BLAS_INCLUDE ?= /System/Library/Frameworks/vecLib.framework/Versions/Current/Headers/
			LDFLAGS += -framework vecLib
		endif
	endif
endif
INCLUDE_DIRS += $(BLAS_INCLUDE)
LIBRARY_DIRS += $(BLAS_LIB)

LIBRARY_DIRS += $(LIB_BUILD_DIR)

# Automatic dependency generation (nvcc is handled separately)
CXXFLAGS += -MMD -MP

# Complete build flags.
COMMON_FLAGS += $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
CXXFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)
NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
# mex may invoke an older gcc that is too liberal with -Wuninitialized
MATLAB_CXXFLAGS := $(CXXFLAGS) -Wno-uninitialized
LINKFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)

USE_PKG_CONFIG ?= 0
ifeq ($(USE_PKG_CONFIG), 1)
	PKG_CONFIG := $(shell pkg-config opencv --libs)
else
	PKG_CONFIG :=
endif
LDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) $(PKG_CONFIG) \
	$(foreach library,$(LIBRARIES),-l$(library))
PYTHON_LDFLAGS := $(LDFLAGS) $(foreach library,$(PYTHON_LIBRARIES),-l$(library))

# 'superclean' target recursively* deletes all files ending with an extension
# in $(SUPERCLEAN_EXTS) below. This may be useful if you've built older
# versions of Caffe that do not place all generated files in a location known
# to the 'clean' target.
#
# 'supercleanlist' will list the files to be deleted by make superclean.
#
# * Recursive with the exception that symbolic links are never followed, per the
# default behavior of 'find'.
SUPERCLEAN_EXTS := .so .a .o .bin .testbin .pb.cc .pb.h _pb2.py .cuo

# Set the sub-targets of the 'everything' target.
EVERYTHING_TARGETS := all py$(PROJECT) test warn lint
# Only build matcaffe as part of "everything" if MATLAB_DIR is specified.
ifneq ($(MATLAB_DIR),)
	EVERYTHING_TARGETS += mat$(PROJECT)
endif

##############################
# Define build targets
##############################
.PHONY: all lib test clean docs linecount lint lintclean tools examples $(DIST_ALIASES) \
	py mat py$(PROJECT) mat$(PROJECT) proto runtest \
	superclean supercleanlist supercleanfiles warn everything

all: lib tools examples

lib: $(STATIC_NAME) $(DYNAMIC_NAME)

everything: $(EVERYTHING_TARGETS)

linecount:
	cloc --read-lang-def=$(PROJECT).cloc \
		src/$(PROJECT) include/$(PROJECT) tools examples \
		python matlab

lint: $(EMPTY_LINT_REPORT)

lintclean:
	@ $(RM) -r $(LINT_OUTPUT_DIR) $(EMPTY_LINT_REPORT) $(NONEMPTY_LINT_REPORT)

docs: $(DOXYGEN_OUTPUT_DIR)
	@ cd ./docs ; ln -sfn ../$(DOXYGEN_OUTPUT_DIR)/html doxygen

$(DOXYGEN_OUTPUT_DIR): $(DOXYGEN_CONFIG_FILE) $(DOXYGEN_SOURCES)
	$(DOXYGEN_COMMAND) $(DOXYGEN_CONFIG_FILE)

$(EMPTY_LINT_REPORT): $(LINT_OUTPUTS) | $(BUILD_DIR)
	@ cat $(LINT_OUTPUTS) > $@
	@ if [ -s "$@" ]; then \
		cat $@; \
		mv $@ $(NONEMPTY_LINT_REPORT); \
		echo "Found one or more lint errors."; \
		exit 1; \
	  fi; \
	  $(RM) $(NONEMPTY_LINT_REPORT); \
	  echo "No lint errors!";

$(LINT_OUTPUTS): $(LINT_OUTPUT_DIR)/%.lint.txt : % $(LINT_SCRIPT) | $(LINT_OUTPUT_DIR)
	@ mkdir -p $(dir $@)
	@ python $(LINT_SCRIPT) $< 2>&1 \
		| grep -v "^Done processing " \
		| grep -v "^Total errors found: 0" \
		> $@ \
		|| true

test: $(TEST_ALL_BIN) $(TEST_ALL_DYNLINK_BIN) $(TEST_BINS)

tools: $(TOOL_BINS) $(TOOL_BIN_LINKS)

examples: $(EXAMPLE_BINS)

py$(PROJECT): py

py: $(PY$(PROJECT)_SO) $(PROTO_GEN_PY)

$(PY$(PROJECT)_SO): $(PY$(PROJECT)_SRC) $(PY$(PROJECT)_HXX) | $(DYNAMIC_NAME)
	@ echo CXX/LD -o $@ $<
	$(Q)$(CXX) -shared -o $@ $(PY$(PROJECT)_SRC) \
		-o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(PYTHON_LDFLAGS) \
		-Wl,-rpath,$(ORIGIN)/../../build/lib

mat$(PROJECT): mat

mat: $(MAT$(PROJECT)_SO)

$(MAT$(PROJECT)_SO): $(MAT$(PROJECT)_SRC) $(STATIC_NAME)
	@ if [ -z "$(MATLAB_DIR)" ]; then \
		echo "MATLAB_DIR must be specified in $(CONFIG_FILE)" \
			"to build mat$(PROJECT)."; \
		exit 1; \
	fi
	@ echo MEX $<
	$(Q)$(MATLAB_DIR)/bin/mex $(MAT$(PROJECT)_SRC) \
		CXX="$(CXX)" \
		CXXFLAGS="\$$CXXFLAGS $(MATLAB_CXXFLAGS)" \
		CXXLIBS="\$$CXXLIBS $(STATIC_LINK_COMMAND) $(LDFLAGS)" -output $@
	@ if [ -f "$(PROJECT)_.d" ]; then \
		mv -f $(PROJECT)_.d $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}; \
	fi

runtest: $(TEST_ALL_BIN)
	$(TOOL_BUILD_DIR)/caffe
	$(TEST_ALL_BIN) $(TEST_GPUID) --gtest_shuffle $(TEST_FILTER)

pytest: py
	cd python; python -m unittest discover -s caffe/test

mattest: mat
	cd matlab; $(MATLAB_DIR)/bin/matlab -nodisplay -r 'caffe.run_tests(), exit()'

warn: $(EMPTY_WARN_REPORT)

$(EMPTY_WARN_REPORT): $(ALL_WARNS) | $(BUILD_DIR)
	@ cat $(ALL_WARNS) > $@
	@ if [ -s "$@" ]; then \
		cat $@; \
		mv $@ $(NONEMPTY_WARN_REPORT); \
		echo "Compiler produced one or more warnings."; \
		exit 1; \
	  fi; \
	  $(RM) $(NONEMPTY_WARN_REPORT); \
	  echo "No compiler warnings!";

$(ALL_WARNS): %.o.$(WARNS_EXT) : %.o

$(BUILD_DIR_LINK): $(BUILD_DIR)/.linked

# Create a target ".linked" in this BUILD_DIR to tell Make that the "build" link
# is currently correct, then delete the one in the OTHER_BUILD_DIR in case it
# exists and $(DEBUG) is toggled later.
$(BUILD_DIR)/.linked:
	@ mkdir -p $(BUILD_DIR)
	@ $(RM) $(OTHER_BUILD_DIR)/.linked
	@ $(RM) -r $(BUILD_DIR_LINK)
	@ ln -s $(BUILD_DIR) $(BUILD_DIR_LINK)
	@ touch $@

$(ALL_BUILD_DIRS): | $(BUILD_DIR_LINK)
	@ mkdir -p $@

$(DYNAMIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
	@ echo LD -o $@
	$(Q)$(CXX) -shared -o $@ $(OBJS) $(VERSIONFLAGS) $(LINKFLAGS) $(LDFLAGS)
	@ cd $(BUILD_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT); ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)

$(STATIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
	@ echo AR -o $@
	$(Q)ar rcs $@ $(OBJS)

$(BUILD_DIR)/%.o: %.cpp | $(ALL_BUILD_DIRS)
	@ echo CXX $<
	$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
		|| (cat $@.$(WARNS_EXT); exit 1)
	@ cat $@.$(WARNS_EXT)

$(PROTO_BUILD_DIR)/%.pb.o: $(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_GEN_HEADER) \
		| $(PROTO_BUILD_DIR)
	@ echo CXX $<
	$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
		|| (cat $@.$(WARNS_EXT); exit 1)
	@ cat $@.$(WARNS_EXT)

$(BUILD_DIR)/cuda/%.o: %.cu | $(ALL_BUILD_DIRS)
	@ echo NVCC $<
	$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -M $< -o ${@:.o=.d} \
		-odir $(@D)
	$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -c $< -o $@ 2> $@.$(WARNS_EXT) \
		|| (cat $@.$(WARNS_EXT); exit 1)
	@ cat $@.$(WARNS_EXT)

$(TEST_ALL_BIN): $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
		| $(DYNAMIC_NAME) $(TEST_BIN_DIR)
	@ echo CXX/LD -o $@ $<
	$(Q)$(CXX) $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
		-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib

$(TEST_CU_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CU_BUILD_DIR)/%.o \
		$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
	@ echo LD $<
	$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
		-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib

$(TEST_CXX_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CXX_BUILD_DIR)/%.o \
		$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
	@ echo LD $<
	$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
		-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib

# Target for extension-less symlinks to tool binaries with extension '*.bin'.
$(TOOL_BUILD_DIR)/%: $(TOOL_BUILD_DIR)/%.bin | $(TOOL_BUILD_DIR)
	@ $(RM) $@
	@ ln -s $(notdir $<) $@

$(TOOL_BINS): %.bin : %.o | $(DYNAMIC_NAME)
	@ echo CXX/LD -o $@
	$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
		-Wl,-rpath,$(ORIGIN)/../lib

$(EXAMPLE_BINS): %.bin : %.o | $(DYNAMIC_NAME)
	@ echo CXX/LD -o $@
	$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
		-Wl,-rpath,$(ORIGIN)/../../lib

proto: $(PROTO_GEN_CC) $(PROTO_GEN_HEADER)

$(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_BUILD_DIR)/%.pb.h : \
		$(PROTO_SRC_DIR)/%.proto | $(PROTO_BUILD_DIR)
	@ echo PROTOC $<
	$(Q)protoc --proto_path=$(PROTO_SRC_DIR) --cpp_out=$(PROTO_BUILD_DIR) $<

$(PY_PROTO_BUILD_DIR)/%_pb2.py : $(PROTO_SRC_DIR)/%.proto \
		$(PY_PROTO_INIT) | $(PY_PROTO_BUILD_DIR)
	@ echo PROTOC \(python\) $<
	$(Q)protoc --proto_path=$(PROTO_SRC_DIR) --python_out=$(PY_PROTO_BUILD_DIR) $<

$(PY_PROTO_INIT): | $(PY_PROTO_BUILD_DIR)
	touch $(PY_PROTO_INIT)

clean:
	@- $(RM) -rf $(ALL_BUILD_DIRS)
	@- $(RM) -rf $(OTHER_BUILD_DIR)
	@- $(RM) -rf $(BUILD_DIR_LINK)
	@- $(RM) -rf $(DISTRIBUTE_DIR)
	@- $(RM) $(PY$(PROJECT)_SO)
	@- $(RM) $(MAT$(PROJECT)_SO)

supercleanfiles:
	$(eval SUPERCLEAN_FILES := $(strip \
		$(foreach ext,$(SUPERCLEAN_EXTS), $(shell find . -name '*$(ext)' \
		-not -path './data/*'))))

supercleanlist: supercleanfiles
	@ \
	if [ -z "$(SUPERCLEAN_FILES)" ]; then \
		echo "No generated files found."; \
	else \
		echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
	fi

superclean: clean supercleanfiles
	@ \
	if [ -z "$(SUPERCLEAN_FILES)" ]; then \
		echo "No generated files found."; \
	else \
		echo "Deleting the following generated files:"; \
		echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
		$(RM) $(SUPERCLEAN_FILES); \
	fi

$(DIST_ALIASES): $(DISTRIBUTE_DIR)

$(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)
	# add proto
	cp -r src/caffe/proto $(DISTRIBUTE_DIR)/
	# add include
	cp -r include $(DISTRIBUTE_DIR)/
	mkdir -p $(DISTRIBUTE_DIR)/include/caffe/proto
	cp $(PROTO_GEN_HEADER_SRCS) $(DISTRIBUTE_DIR)/include/caffe/proto
	# add tool and example binaries
	cp $(TOOL_BINS) $(DISTRIBUTE_DIR)/bin
	cp $(EXAMPLE_BINS) $(DISTRIBUTE_DIR)/bin
	# add libraries
	cp $(STATIC_NAME) $(DISTRIBUTE_DIR)/lib
	install -m 644 $(DYNAMIC_NAME) $(DISTRIBUTE_DIR)/lib
	cd $(DISTRIBUTE_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT); ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)
	# add python - it's not the standard way, indeed...
	cp -r python $(DISTRIBUTE_DIR)/python

-include $(DEPS)

@@ -0,0 +1,120 @@

## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_52,code=sm_52 \
		-gencode arch=compute_60,code=sm_60 \
		-gencode arch=compute_61,code=sm_61 \
		-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBLAS
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
#		$(ANACONDA_HOME)/include/python2.7 \
#		$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#		/usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non-standard path (keg-only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non-standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

@@ -0,0 +1,37 @@

# Caffe

[Build Status](https://travis-ci.org/BVLC/caffe)
[License](LICENSE)

Caffe is a deep learning framework made with expression, speed, and modularity in mind.
It is developed by the Berkeley Vision and Learning Center ([BVLC](http://bvlc.eecs.berkeley.edu)) and community contributors.

Check out the [project site](http://caffe.berkeleyvision.org) for all the details like

- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)
- [Tutorial Documentation](http://caffe.berkeleyvision.org/tutorial/)
- [BVLC reference models](http://caffe.berkeleyvision.org/model_zoo.html) and the [community model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)
- [Installation instructions](http://caffe.berkeleyvision.org/installation.html)

and step-by-step examples.

[Join the chat](https://gitter.im/BVLC/caffe?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Please join the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) or [gitter chat](https://gitter.im/BVLC/caffe) to ask questions and talk about methods and models.
Framework development discussions and thorough bug reports are collected on [Issues](https://github.com/BVLC/caffe/issues).

Happy brewing!

## License and Citation

Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).
The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

    @article{jia2014caffe,
      Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
      Journal = {arXiv preprint arXiv:1408.5093},
      Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
      Year = {2014}
    }

@@ -0,0 +1,53 @@

Bourne Shell
    filter remove_matches ^\s*#
    filter remove_inline #.*$
    extension sh
    script_exe sh
C
    filter remove_matches ^\s*//
    filter call_regexp_common C
    filter remove_inline //.*$
    extension c
    extension ec
    extension pgc
C++
    filter remove_matches ^\s*//
    filter remove_inline //.*$
    filter call_regexp_common C
    extension C
    extension cc
    extension cpp
    extension cxx
    extension pcc
C/C++ Header
    filter remove_matches ^\s*//
    filter call_regexp_common C
    filter remove_inline //.*$
    extension H
    extension h
    extension hh
    extension hpp
CUDA
    filter remove_matches ^\s*//
    filter remove_inline //.*$
    filter call_regexp_common C
    extension cu
Python
    filter remove_matches ^\s*#
    filter docstring_to_C
    filter call_regexp_common C
    filter remove_inline #.*$
    extension py
make
    filter remove_matches ^\s*#
    filter remove_inline #.*$
    extension Gnumakefile
    extension Makefile
    extension am
    extension gnumakefile
    extension makefile
    filename Gnumakefile
    filename Makefile
    filename gnumakefile
    filename makefile
    script_exe make
|
|
@ -0,0 +1,56 @@
|
|||
|
||||
################################################################################################
|
||||
# Helper function to get all list items that begin with given prefix
|
||||
# Usage:
|
||||
# caffe_get_items_with_prefix(<prefix> <list_variable> <output_variable>)
|
||||
function(caffe_get_items_with_prefix prefix list_variable output_variable)
|
||||
set(__result "")
|
||||
foreach(__e ${${list_variable}})
|
||||
if(__e MATCHES "^${prefix}.*")
|
||||
list(APPEND __result ${__e})
|
||||
endif()
|
||||
endforeach()
|
||||
set(${output_variable} ${__result} PARENT_SCOPE)
|
||||
endfunction()

################################################################################################
# Function for generating Caffe build- and install-tree export config files
# Usage:
#   caffe_generate_export_configs()
function(caffe_generate_export_configs)
  set(install_cmake_suffix "share/Caffe")

  if(NOT HAVE_CUDA)
    set(HAVE_CUDA FALSE)
  endif()

  if(NOT HAVE_CUDNN)
    set(HAVE_CUDNN FALSE)
  endif()

  # ---[ Configure build-tree CaffeConfig.cmake file ]---
  configure_file("cmake/Templates/CaffeConfig.cmake.in" "${PROJECT_BINARY_DIR}/CaffeConfig.cmake" @ONLY)

  # Add targets to the build-tree export set
  export(TARGETS caffe proto FILE "${PROJECT_BINARY_DIR}/CaffeTargets.cmake")
  export(PACKAGE Caffe)

  # ---[ Configure install-tree CaffeConfig.cmake file ]---
  configure_file("cmake/Templates/CaffeConfig.cmake.in" "${PROJECT_BINARY_DIR}/cmake/CaffeConfig.cmake" @ONLY)

  # Install the CaffeConfig.cmake and export set to use with install-tree
  install(FILES "${PROJECT_BINARY_DIR}/cmake/CaffeConfig.cmake" DESTINATION ${install_cmake_suffix})
  install(EXPORT CaffeTargets DESTINATION ${install_cmake_suffix})

  # ---[ Configure and install version file ]---

  # TODO: Lines below are commented out because Caffe doesn't declare its version in headers.
  # When the declarations are added, modify the `caffe_extract_caffe_version()` macro and uncomment them.

  # configure_file(cmake/Templates/CaffeConfigVersion.cmake.in "${PROJECT_BINARY_DIR}/CaffeConfigVersion.cmake" @ONLY)
  # install(FILES "${PROJECT_BINARY_DIR}/CaffeConfigVersion.cmake" DESTINATION ${install_cmake_suffix})
endfunction()
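
For context, a downstream project would consume the generated config roughly as follows; the `Caffe_INCLUDE_DIRS`/`Caffe_LIBRARIES` names come from the CaffeConfig.cmake.in template and the executable target is hypothetical:

```cmake
find_package(Caffe REQUIRED)               # locates the build- or install-tree CaffeConfig.cmake
include_directories(${Caffe_INCLUDE_DIRS})
add_executable(classifier classifier.cpp)  # hypothetical downstream target
target_link_libraries(classifier ${Caffe_LIBRARIES})
```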

@@ -0,0 +1,292 @@

if(CPU_ONLY)
  return()
endif()

# Known NVIDIA GPU architectures Caffe can be compiled for.
# This list will be used for the CUDA_ARCH_NAME = All option
set(Caffe_known_gpu_archs "20 21(20) 30 35 50 60 61")

################################################################################################
# A function for automatic detection of GPUs installed (if autodetection is enabled)
# Usage:
#   caffe_detect_installed_gpus(out_variable)
function(caffe_detect_installed_gpus out_variable)
  if(NOT CUDA_gpu_detect_output)
    set(__cufile ${PROJECT_BINARY_DIR}/detect_cuda_archs.cu)

    file(WRITE ${__cufile} ""
      "#include <cstdio>\n"
      "int main()\n"
      "{\n"
      "  int count = 0;\n"
      "  if (cudaSuccess != cudaGetDeviceCount(&count)) return -1;\n"
      "  if (count == 0) return -1;\n"
      "  for (int device = 0; device < count; ++device)\n"
      "  {\n"
      "    cudaDeviceProp prop;\n"
      "    if (cudaSuccess == cudaGetDeviceProperties(&prop, device))\n"
      "      std::printf(\"%d.%d \", prop.major, prop.minor);\n"
      "  }\n"
      "  return 0;\n"
      "}\n")

    execute_process(COMMAND "${CUDA_NVCC_EXECUTABLE}" "--run" "${__cufile}"
                    WORKING_DIRECTORY "${PROJECT_BINARY_DIR}/CMakeFiles/"
                    RESULT_VARIABLE __nvcc_res OUTPUT_VARIABLE __nvcc_out
                    ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)

    if(__nvcc_res EQUAL 0)
      string(REPLACE "2.1" "2.1(2.0)" __nvcc_out "${__nvcc_out}")
      set(CUDA_gpu_detect_output ${__nvcc_out} CACHE INTERNAL "Returned GPU architectures from caffe_detect_gpus tool" FORCE)
    endif()
  endif()

  if(NOT CUDA_gpu_detect_output)
    message(STATUS "Automatic GPU detection failed. Building for all known architectures.")
    set(${out_variable} ${Caffe_known_gpu_archs} PARENT_SCOPE)
  else()
    set(${out_variable} ${CUDA_gpu_detect_output} PARENT_SCOPE)
  endif()
endfunction()
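
A minimal sketch of calling the detector directly:

```cmake
caffe_detect_installed_gpus(__detected)
message(STATUS "Detected compute capabilities: ${__detected}")  # e.g. "6.1" (illustrative)
```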

################################################################################################
# Function for selecting GPU arch flags for nvcc based on CUDA_ARCH_NAME
# Usage:
#   caffe_select_nvcc_arch_flags(out_variable)
function(caffe_select_nvcc_arch_flags out_variable)
  # List of arch names
  set(__archs_names "Fermi" "Kepler" "Maxwell" "Pascal" "All" "Manual")
  set(__archs_name_default "All")
  if(NOT CMAKE_CROSSCOMPILING)
    list(APPEND __archs_names "Auto")
    set(__archs_name_default "Auto")
  endif()

  # set CUDA_ARCH_NAME strings (so it will be seen as a drop-down in cmake-gui)
  set(CUDA_ARCH_NAME ${__archs_name_default} CACHE STRING "Select target NVIDIA GPU architecture.")
  set_property(CACHE CUDA_ARCH_NAME PROPERTY STRINGS "" ${__archs_names})
  mark_as_advanced(CUDA_ARCH_NAME)

  # verify CUDA_ARCH_NAME value
  if(NOT ";${__archs_names};" MATCHES ";${CUDA_ARCH_NAME};")
    string(REPLACE ";" ", " __archs_names "${__archs_names}")
    message(FATAL_ERROR "Only ${__archs_names} architecture names are supported.")
  endif()

  if(${CUDA_ARCH_NAME} STREQUAL "Manual")
    set(CUDA_ARCH_BIN ${Caffe_known_gpu_archs} CACHE STRING "Specify 'real' GPU architectures to build binaries for, BIN(PTX) format is supported")
    set(CUDA_ARCH_PTX "50" CACHE STRING "Specify 'virtual' PTX architectures to build PTX intermediate code for")
    mark_as_advanced(CUDA_ARCH_BIN CUDA_ARCH_PTX)
  else()
    unset(CUDA_ARCH_BIN CACHE)
    unset(CUDA_ARCH_PTX CACHE)
  endif()

  if(${CUDA_ARCH_NAME} STREQUAL "Fermi")
    set(__cuda_arch_bin "20 21(20)")
  elseif(${CUDA_ARCH_NAME} STREQUAL "Kepler")
    set(__cuda_arch_bin "30 35")
  elseif(${CUDA_ARCH_NAME} STREQUAL "Maxwell")
    set(__cuda_arch_bin "50")
  elseif(${CUDA_ARCH_NAME} STREQUAL "Pascal")
    set(__cuda_arch_bin "60 61")
  elseif(${CUDA_ARCH_NAME} STREQUAL "All")
    set(__cuda_arch_bin ${Caffe_known_gpu_archs})
  elseif(${CUDA_ARCH_NAME} STREQUAL "Auto")
    caffe_detect_installed_gpus(__cuda_arch_bin)
  else()  # (${CUDA_ARCH_NAME} STREQUAL "Manual")
    set(__cuda_arch_bin ${CUDA_ARCH_BIN})
  endif()

  # remove dots and convert to lists
  string(REGEX REPLACE "\\." "" __cuda_arch_bin "${__cuda_arch_bin}")
  string(REGEX REPLACE "\\." "" __cuda_arch_ptx "${CUDA_ARCH_PTX}")
  string(REGEX MATCHALL "[0-9()]+" __cuda_arch_bin "${__cuda_arch_bin}")
  string(REGEX MATCHALL "[0-9]+"   __cuda_arch_ptx "${__cuda_arch_ptx}")
  caffe_list_unique(__cuda_arch_bin __cuda_arch_ptx)

  set(__nvcc_flags "")
  set(__nvcc_archs_readable "")

  # Tell NVCC to add binaries for the specified GPUs
  foreach(__arch ${__cuda_arch_bin})
    if(__arch MATCHES "([0-9]+)\\(([0-9]+)\\)")
      # User explicitly specified PTX for the concrete BIN
      list(APPEND __nvcc_flags -gencode arch=compute_${CMAKE_MATCH_2},code=sm_${CMAKE_MATCH_1})
      list(APPEND __nvcc_archs_readable sm_${CMAKE_MATCH_1})
    else()
      # User didn't explicitly specify PTX for the concrete BIN, we assume PTX=BIN
      list(APPEND __nvcc_flags -gencode arch=compute_${__arch},code=sm_${__arch})
      list(APPEND __nvcc_archs_readable sm_${__arch})
    endif()
  endforeach()

  # Tell NVCC to add PTX intermediate code for the specified architectures
  foreach(__arch ${__cuda_arch_ptx})
    list(APPEND __nvcc_flags -gencode arch=compute_${__arch},code=compute_${__arch})
    list(APPEND __nvcc_archs_readable compute_${__arch})
  endforeach()

  string(REPLACE ";" " " __nvcc_archs_readable "${__nvcc_archs_readable}")
  set(${out_variable}          ${__nvcc_flags}          PARENT_SCOPE)
  set(${out_variable}_readable ${__nvcc_archs_readable} PARENT_SCOPE)
endfunction()
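
To illustrate the mapping: with `CUDA_ARCH_NAME` set to `Pascal` (BIN list `60 61`, no PTX entries), the function returns two `-gencode` pairs. A sketch:

```cmake
caffe_select_nvcc_arch_flags(__arch_flags)
# __arch_flags          -> -gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61
# __arch_flags_readable -> "sm_60 sm_61"
message(STATUS "NVCC arch flags: ${__arch_flags_readable}")
```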

################################################################################################
# Short command for cuda compilation
# Usage:
#   caffe_cuda_compile(<objlist_variable> <cuda_files>)
macro(caffe_cuda_compile objlist_variable)
  foreach(var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG)
    set(${var}_backup_in_cuda_compile_ "${${var}}")

    # we remove /EHa as it generates warnings under Windows
    string(REPLACE "/EHa" "" ${var} "${${var}}")
  endforeach()

  if(UNIX OR APPLE)
    list(APPEND CUDA_NVCC_FLAGS -Xcompiler -fPIC)
  endif()

  if(APPLE)
    list(APPEND CUDA_NVCC_FLAGS -Xcompiler -Wno-unused-function)
  endif()

  cuda_compile(cuda_objcs ${ARGN})

  foreach(var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_DEBUG)
    set(${var} "${${var}_backup_in_cuda_compile_}")
    unset(${var}_backup_in_cuda_compile_)
  endforeach()

  set(${objlist_variable} ${cuda_objcs})
endmacro()
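
Usage sketch (the `.cu` file path is hypothetical):

```cmake
caffe_cuda_compile(__cuda_objs ${PROJECT_SOURCE_DIR}/src/caffe/layers/conv_layer.cu)
list(APPEND srcs ${__cuda_objs})  # the compiled object files join the regular source list
```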

################################################################################################
# Short command for cuDNN detection. We believe cuDNN will soon be part of the CUDA toolkit
# distribution, which is why this is a plain function rather than a FindcuDNN.cmake module.
# Usage:
#   detect_cuDNN()
function(detect_cuDNN)
  set(CUDNN_ROOT "" CACHE PATH "CUDNN root folder")

  find_path(CUDNN_INCLUDE cudnn.h
            PATHS ${CUDNN_ROOT} $ENV{CUDNN_ROOT} ${CUDA_TOOLKIT_INCLUDE}
            DOC "Path to cuDNN include directory." )

  # dynamic libs have different suffixes on macOS and Linux
  if(APPLE)
    set(CUDNN_LIB_NAME "libcudnn.dylib")
  else()
    set(CUDNN_LIB_NAME "libcudnn.so")
  endif()

  get_filename_component(__libpath_hist ${CUDA_CUDART_LIBRARY} PATH)
  find_library(CUDNN_LIBRARY NAMES ${CUDNN_LIB_NAME}
               PATHS ${CUDNN_ROOT} $ENV{CUDNN_ROOT} ${CUDNN_INCLUDE} ${__libpath_hist} ${__libpath_hist}/../lib
               DOC "Path to cuDNN library.")

  if(CUDNN_INCLUDE AND CUDNN_LIBRARY)
    set(HAVE_CUDNN  TRUE PARENT_SCOPE)
    set(CUDNN_FOUND TRUE PARENT_SCOPE)

    file(READ ${CUDNN_INCLUDE}/cudnn.h CUDNN_VERSION_FILE_CONTENTS)

    # cuDNN v3 and beyond
    string(REGEX MATCH "define CUDNN_MAJOR * +([0-9]+)"
           CUDNN_VERSION_MAJOR "${CUDNN_VERSION_FILE_CONTENTS}")
    string(REGEX REPLACE "define CUDNN_MAJOR * +([0-9]+)" "\\1"
           CUDNN_VERSION_MAJOR "${CUDNN_VERSION_MAJOR}")
    string(REGEX MATCH "define CUDNN_MINOR * +([0-9]+)"
           CUDNN_VERSION_MINOR "${CUDNN_VERSION_FILE_CONTENTS}")
    string(REGEX REPLACE "define CUDNN_MINOR * +([0-9]+)" "\\1"
           CUDNN_VERSION_MINOR "${CUDNN_VERSION_MINOR}")
    string(REGEX MATCH "define CUDNN_PATCHLEVEL * +([0-9]+)"
           CUDNN_VERSION_PATCH "${CUDNN_VERSION_FILE_CONTENTS}")
    string(REGEX REPLACE "define CUDNN_PATCHLEVEL * +([0-9]+)" "\\1"
           CUDNN_VERSION_PATCH "${CUDNN_VERSION_PATCH}")

    if(NOT CUDNN_VERSION_MAJOR)
      set(CUDNN_VERSION "???")
    else()
      set(CUDNN_VERSION "${CUDNN_VERSION_MAJOR}.${CUDNN_VERSION_MINOR}.${CUDNN_VERSION_PATCH}")
    endif()

    message(STATUS "Found cuDNN: ver. ${CUDNN_VERSION} (include: ${CUDNN_INCLUDE}, library: ${CUDNN_LIBRARY})")

    string(COMPARE LESS "${CUDNN_VERSION_MAJOR}" 3 cuDNNVersionIncompatible)
    if(cuDNNVersionIncompatible)
      message(FATAL_ERROR "cuDNN version 3 or higher is required.")
    endif()

    set(CUDNN_VERSION "${CUDNN_VERSION}" PARENT_SCOPE)
    mark_as_advanced(CUDNN_INCLUDE CUDNN_LIBRARY CUDNN_ROOT)
  endif()
endfunction()

################################################################################################
### Non macro section
################################################################################################

find_package(CUDA 5.5 QUIET)
find_cuda_helper_libs(curand)  # CMake 2.8.7 compatibility: that version does not search for curand

if(NOT CUDA_FOUND)
  return()
endif()

set(HAVE_CUDA TRUE)
message(STATUS "CUDA detected: " ${CUDA_VERSION})
list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${CUDA_INCLUDE_DIRS})
list(APPEND Caffe_LINKER_LIBS PUBLIC ${CUDA_CUDART_LIBRARY}
                                     ${CUDA_curand_LIBRARY} ${CUDA_CUBLAS_LIBRARIES})

# cudnn detection
if(USE_CUDNN)
  detect_cuDNN()
  if(HAVE_CUDNN)
    list(APPEND Caffe_DEFINITIONS PUBLIC -DUSE_CUDNN)
    list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${CUDNN_INCLUDE})
    list(APPEND Caffe_LINKER_LIBS PUBLIC ${CUDNN_LIBRARY})
  endif()
endif()

# setting nvcc arch flags
caffe_select_nvcc_arch_flags(NVCC_FLAGS_EXTRA)
list(APPEND CUDA_NVCC_FLAGS ${NVCC_FLAGS_EXTRA})
message(STATUS "Added CUDA NVCC flags for: ${NVCC_FLAGS_EXTRA_readable}")

# Boost 1.55 workaround, see https://svn.boost.org/trac/boost/ticket/9392 or
# https://github.com/ComputationalRadiationPhysics/picongpu/blob/master/src/picongpu/CMakeLists.txt
if(Boost_VERSION EQUAL 105500)
  message(STATUS "Cuda + Boost 1.55: Applying noinline work around")
  # avoid warning for CMake >= 2.8.12
  set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} \"-DBOOST_NOINLINE=__attribute__((noinline))\" ")
endif()

# disable some nvcc diagnostics that appear in boost, glog, gflags, opencv, etc.
foreach(diag cc_clobber_ignored integer_sign_change useless_using_declaration set_but_not_used)
  list(APPEND CUDA_NVCC_FLAGS -Xcudafe --diag_suppress=${diag})
endforeach()

# setting default testing device
if(NOT CUDA_TEST_DEVICE)
  set(CUDA_TEST_DEVICE -1)
endif()

mark_as_advanced(CUDA_BUILD_CUBIN CUDA_BUILD_EMULATION CUDA_VERBOSE_BUILD)
mark_as_advanced(CUDA_SDK_ROOT_DIR CUDA_SEPARABLE_COMPILATION)

# Handle clang/libc++ issue
if(APPLE)
  caffe_detect_darwin_version(OSX_VERSION)

  # OSX 10.9 and higher uses clang/libc++ by default which is incompatible with old CUDA toolkits
  if(OSX_VERSION VERSION_GREATER 10.8)
    # enabled by default if and only if CUDA version is less than 7.0
    caffe_option(USE_libstdcpp "Use libstdc++ instead of libc++" (CUDA_VERSION VERSION_LESS 7.0))
  endif()
endif()

@@ -0,0 +1,203 @@

# These lists are later turned into target properties on the main caffe library target
set(Caffe_LINKER_LIBS "")
set(Caffe_INCLUDE_DIRS "")
set(Caffe_DEFINITIONS "")
set(Caffe_COMPILE_OPTIONS "")

# ---[ Boost
find_package(Boost 1.46 REQUIRED COMPONENTS system thread filesystem)
list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${Boost_INCLUDE_DIRS})
list(APPEND Caffe_LINKER_LIBS PUBLIC ${Boost_LIBRARIES})

# ---[ Threads
find_package(Threads REQUIRED)
list(APPEND Caffe_LINKER_LIBS PRIVATE ${CMAKE_THREAD_LIBS_INIT})

# ---[ OpenMP
if(USE_OPENMP)
  # Ideally, this should be provided by the BLAS library IMPORTED target. However,
  # nobody does this, so we need to link to OpenMP explicitly and have the maintainer
  # flick the switch manually as needed.
  #
  # Moreover, the OpenMP package does not provide an IMPORTED target either, and the
  # suggested way of linking to OpenMP is to append to CMAKE_{C,CXX}_FLAGS.
  # However, this naïve method will force any user of Caffe to add the same kludge
  # into their buildsystem again, so we put these options into per-target PUBLIC
  # compile options and link flags, so that they will be exported properly.
  find_package(OpenMP REQUIRED)
  list(APPEND Caffe_LINKER_LIBS PRIVATE ${OpenMP_CXX_FLAGS})
  list(APPEND Caffe_COMPILE_OPTIONS PRIVATE ${OpenMP_CXX_FLAGS})
endif()

# ---[ Google-glog
include("cmake/External/glog.cmake")
list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${GLOG_INCLUDE_DIRS})
list(APPEND Caffe_LINKER_LIBS PUBLIC ${GLOG_LIBRARIES})

# ---[ Google-gflags
include("cmake/External/gflags.cmake")
list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${GFLAGS_INCLUDE_DIRS})
list(APPEND Caffe_LINKER_LIBS PUBLIC ${GFLAGS_LIBRARIES})

# ---[ Google-protobuf
include(cmake/ProtoBuf.cmake)

# ---[ HDF5
find_package(HDF5 COMPONENTS HL REQUIRED)
list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${HDF5_INCLUDE_DIRS})
list(APPEND Caffe_LINKER_LIBS PUBLIC ${HDF5_LIBRARIES} ${HDF5_HL_LIBRARIES})

# ---[ LMDB
if(USE_LMDB)
  find_package(LMDB REQUIRED)
  list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${LMDB_INCLUDE_DIR})
  list(APPEND Caffe_LINKER_LIBS PUBLIC ${LMDB_LIBRARIES})
  list(APPEND Caffe_DEFINITIONS PUBLIC -DUSE_LMDB)
  if(ALLOW_LMDB_NOLOCK)
    list(APPEND Caffe_DEFINITIONS PRIVATE -DALLOW_LMDB_NOLOCK)
  endif()
endif()

# ---[ LevelDB
if(USE_LEVELDB)
  find_package(LevelDB REQUIRED)
  list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${LevelDB_INCLUDES})
  list(APPEND Caffe_LINKER_LIBS PUBLIC ${LevelDB_LIBRARIES})
  list(APPEND Caffe_DEFINITIONS PUBLIC -DUSE_LEVELDB)
endif()

# ---[ Snappy
if(USE_LEVELDB)
  find_package(Snappy REQUIRED)
  list(APPEND Caffe_INCLUDE_DIRS PRIVATE ${Snappy_INCLUDE_DIR})
  list(APPEND Caffe_LINKER_LIBS PRIVATE ${Snappy_LIBRARIES})
endif()

# ---[ CUDA
include(cmake/Cuda.cmake)
if(NOT HAVE_CUDA)
  if(CPU_ONLY)
    message(STATUS "-- CUDA is disabled. Building without it...")
  else()
    message(WARNING "-- CUDA is not detected by cmake. Building without it...")
  endif()

  list(APPEND Caffe_DEFINITIONS PUBLIC -DCPU_ONLY)
endif()

if(USE_NCCL)
  find_package(NCCL REQUIRED)
  include_directories(SYSTEM ${NCCL_INCLUDE_DIR})
  list(APPEND Caffe_LINKER_LIBS ${NCCL_LIBRARIES})
  add_definitions(-DUSE_NCCL)
endif()

# ---[ OpenCV
if(USE_OPENCV)
  find_package(OpenCV QUIET COMPONENTS core highgui imgproc imgcodecs)
  if(NOT OpenCV_FOUND) # if not OpenCV 3.x, then imgcodecs are not found
    find_package(OpenCV REQUIRED COMPONENTS core highgui imgproc)
  endif()
  list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${OpenCV_INCLUDE_DIRS})
  list(APPEND Caffe_LINKER_LIBS PUBLIC ${OpenCV_LIBS})
  message(STATUS "OpenCV found (${OpenCV_CONFIG_PATH})")
  list(APPEND Caffe_DEFINITIONS PUBLIC -DUSE_OPENCV)
endif()

# ---[ BLAS
if(NOT APPLE)
  set(BLAS "Atlas" CACHE STRING "Selected BLAS library")
  set_property(CACHE BLAS PROPERTY STRINGS "Atlas;Open;MKL")

  if(BLAS STREQUAL "Atlas" OR BLAS STREQUAL "atlas")
    find_package(Atlas REQUIRED)
    list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${Atlas_INCLUDE_DIR})
    list(APPEND Caffe_LINKER_LIBS PUBLIC ${Atlas_LIBRARIES})
  elseif(BLAS STREQUAL "Open" OR BLAS STREQUAL "open")
    find_package(OpenBLAS REQUIRED)
    list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${OpenBLAS_INCLUDE_DIR})
    list(APPEND Caffe_LINKER_LIBS PUBLIC ${OpenBLAS_LIB})
  elseif(BLAS STREQUAL "MKL" OR BLAS STREQUAL "mkl")
    find_package(MKL REQUIRED)
    list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${MKL_INCLUDE_DIR})
    list(APPEND Caffe_LINKER_LIBS PUBLIC ${MKL_LIBRARIES})
    list(APPEND Caffe_DEFINITIONS PUBLIC -DUSE_MKL)
  endif()
elseif(APPLE)
  find_package(vecLib REQUIRED)
  list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${vecLib_INCLUDE_DIR})
  list(APPEND Caffe_LINKER_LIBS PUBLIC ${vecLib_LINKER_LIBS})

  if(VECLIB_FOUND)
    if(NOT vecLib_INCLUDE_DIR MATCHES "^/System/Library/Frameworks/vecLib.framework.*")
      list(APPEND Caffe_DEFINITIONS PUBLIC -DUSE_ACCELERATE)
    endif()
  endif()
endif()
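
Since `BLAS` is an ordinary cache variable, a backend can be pre-selected from the command line (`cmake -DBLAS=Open ..`) or from an initial-cache file applied with `cmake -C` (the file name below is hypothetical):

```cmake
# openblas-cache.cmake (hypothetical); use as: cmake -C openblas-cache.cmake <source-dir>
set(BLAS "Open" CACHE STRING "Selected BLAS library")
```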

# ---[ Python
if(BUILD_python)
  if(NOT "${python_version}" VERSION_LESS "3.0.0")
    # use python3
    find_package(PythonInterp 3.0)
    find_package(PythonLibs 3.0)
    find_package(NumPy 1.7.1)
    # Find the matching boost python implementation
    set(version ${PYTHONLIBS_VERSION_STRING})

    STRING( REGEX REPLACE "[^0-9]" "" boost_py_version ${version} )
    find_package(Boost 1.46 COMPONENTS "python-py${boost_py_version}")
    set(Boost_PYTHON_FOUND ${Boost_PYTHON-PY${boost_py_version}_FOUND})

    while(NOT "${version}" STREQUAL "" AND NOT Boost_PYTHON_FOUND)
      STRING( REGEX REPLACE "([0-9.]+).[0-9]+" "\\1" version ${version} )

      STRING( REGEX REPLACE "[^0-9]" "" boost_py_version ${version} )
      find_package(Boost 1.46 COMPONENTS "python-py${boost_py_version}")
      set(Boost_PYTHON_FOUND ${Boost_PYTHON-PY${boost_py_version}_FOUND})

      STRING( REGEX MATCHALL "([0-9.]+).[0-9]+" has_more_version ${version} )
      if("${has_more_version}" STREQUAL "")
        break()
      endif()
    endwhile()
    if(NOT Boost_PYTHON_FOUND)
      find_package(Boost 1.46 COMPONENTS python)
    endif()
  else()
    # disable Python 3 search
    find_package(PythonInterp 2.7)
    find_package(PythonLibs 2.7)
    find_package(NumPy 1.7.1)
    find_package(Boost 1.46 COMPONENTS python)
  endif()
  if(PYTHONLIBS_FOUND AND NUMPY_FOUND AND Boost_PYTHON_FOUND)
    set(HAVE_PYTHON TRUE)
    if(BUILD_python_layer)
      list(APPEND Caffe_DEFINITIONS PRIVATE -DWITH_PYTHON_LAYER)
      list(APPEND Caffe_INCLUDE_DIRS PRIVATE ${PYTHON_INCLUDE_DIRS} ${NUMPY_INCLUDE_DIR} PUBLIC ${Boost_INCLUDE_DIRS})
      list(APPEND Caffe_LINKER_LIBS PRIVATE ${PYTHON_LIBRARIES} PUBLIC ${Boost_LIBRARIES})
    endif()
  endif()
endif()

# ---[ Matlab
if(BUILD_matlab)
  find_package(MatlabMex)
  if(MATLABMEX_FOUND)
    set(HAVE_MATLAB TRUE)
  endif()

  # sudo apt-get install liboctave-dev
  find_program(Octave_compiler NAMES mkoctfile DOC "Octave C++ compiler")

  if(HAVE_MATLAB AND Octave_compiler)
    set(Matlab_build_mex_using "Matlab" CACHE STRING "Select Matlab or Octave if both detected")
    set_property(CACHE Matlab_build_mex_using PROPERTY STRINGS "Matlab;Octave")
  endif()
endif()

# ---[ Doxygen
if(BUILD_docs)
  find_package(Doxygen)
endif()

@@ -0,0 +1,56 @@

if (NOT __GFLAGS_INCLUDED) # guard against multiple includes
  set(__GFLAGS_INCLUDED TRUE)

  # use the system-wide gflags if present
  find_package(GFlags)
  if (GFLAGS_FOUND)
    set(GFLAGS_EXTERNAL FALSE)
  else()
    # build gflags from source; the ExternalProject module provides ExternalProject_Add
    include(ExternalProject)

    # gflags will use pthreads if it's available in the system, so we must link with it
    find_package(Threads)

    # build directory
    set(gflags_PREFIX ${CMAKE_BINARY_DIR}/external/gflags-prefix)
    # install directory
    set(gflags_INSTALL ${CMAKE_BINARY_DIR}/external/gflags-install)

    # we build gflags statically, but want to link it into the caffe shared library
    # this requires position-independent code
    if (UNIX)
      set(GFLAGS_EXTRA_COMPILER_FLAGS "-fPIC")
    endif()

    set(GFLAGS_CXX_FLAGS ${CMAKE_CXX_FLAGS} ${GFLAGS_EXTRA_COMPILER_FLAGS})
    set(GFLAGS_C_FLAGS ${CMAKE_C_FLAGS} ${GFLAGS_EXTRA_COMPILER_FLAGS})

    ExternalProject_Add(gflags
      PREFIX ${gflags_PREFIX}
      GIT_REPOSITORY "https://github.com/gflags/gflags.git"
      GIT_TAG "v2.1.2"
      UPDATE_COMMAND ""
      INSTALL_DIR ${gflags_INSTALL}
      CMAKE_ARGS -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
                 -DCMAKE_INSTALL_PREFIX=${gflags_INSTALL}
                 -DBUILD_SHARED_LIBS=OFF
                 -DBUILD_STATIC_LIBS=ON
                 -DBUILD_PACKAGING=OFF
                 -DBUILD_TESTING=OFF
                 -DBUILD_NC_TESTS=OFF
                 -DBUILD_CONFIG_TESTS=OFF
                 -DINSTALL_HEADERS=ON
                 -DCMAKE_C_FLAGS=${GFLAGS_C_FLAGS}
                 -DCMAKE_CXX_FLAGS=${GFLAGS_CXX_FLAGS}
      LOG_DOWNLOAD 1
      LOG_INSTALL 1
      )

    set(GFLAGS_FOUND TRUE)
    set(GFLAGS_INCLUDE_DIRS ${gflags_INSTALL}/include)
    set(GFLAGS_LIBRARIES ${gflags_INSTALL}/lib/libgflags.a ${CMAKE_THREAD_LIBS_INIT})
    set(GFLAGS_LIBRARY_DIRS ${gflags_INSTALL}/lib)
    set(GFLAGS_EXTERNAL TRUE)

    list(APPEND external_project_dependencies gflags)
  endif()

endif()

@@ -0,0 +1,57 @@

# glog depends on gflags
include("cmake/External/gflags.cmake")

if (NOT __GLOG_INCLUDED)
  set(__GLOG_INCLUDED TRUE)

  # try the system-wide glog first
  find_package(Glog)
  if (GLOG_FOUND)
    set(GLOG_EXTERNAL FALSE)
  else()
    # fetch and build glog from github

    # build directory
    set(glog_PREFIX ${CMAKE_BINARY_DIR}/external/glog-prefix)
    # install directory
    set(glog_INSTALL ${CMAKE_BINARY_DIR}/external/glog-install)

    # we build glog statically, but want to link it into the caffe shared library
    # this requires position-independent code
    if (UNIX)
      set(GLOG_EXTRA_COMPILER_FLAGS "-fPIC")
    endif()

    set(GLOG_CXX_FLAGS ${CMAKE_CXX_FLAGS} ${GLOG_EXTRA_COMPILER_FLAGS})
    set(GLOG_C_FLAGS ${CMAKE_C_FLAGS} ${GLOG_EXTRA_COMPILER_FLAGS})

    # depend on gflags if we're also building it
    if (GFLAGS_EXTERNAL)
      set(GLOG_DEPENDS gflags)
    endif()

    ExternalProject_Add(glog
      DEPENDS ${GLOG_DEPENDS}
      PREFIX ${glog_PREFIX}
      GIT_REPOSITORY "https://github.com/google/glog"
      GIT_TAG "v0.3.4"
      UPDATE_COMMAND ""
      INSTALL_DIR ${glog_INSTALL}
      PATCH_COMMAND autoreconf -i ${glog_PREFIX}/src/glog
      CONFIGURE_COMMAND env "CFLAGS=${GLOG_C_FLAGS}" "CXXFLAGS=${GLOG_CXX_FLAGS}" ${glog_PREFIX}/src/glog/configure --prefix=${glog_INSTALL} --enable-shared=no --enable-static=yes --with-gflags=${GFLAGS_LIBRARY_DIRS}/..
      LOG_DOWNLOAD 1
      LOG_CONFIGURE 1
      LOG_INSTALL 1
      )

    set(GLOG_FOUND TRUE)
    set(GLOG_INCLUDE_DIRS ${glog_INSTALL}/include)
    set(GLOG_LIBRARIES ${GFLAGS_LIBRARIES} ${glog_INSTALL}/lib/libglog.a)
    set(GLOG_LIBRARY_DIRS ${glog_INSTALL}/lib)
    set(GLOG_EXTERNAL TRUE)

    list(APPEND external_project_dependencies glog)
  endif()

endif()

@@ -0,0 +1,52 @@

# ---[ Configuration types
set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "Possible configurations" FORCE)
mark_as_advanced(CMAKE_CONFIGURATION_TYPES)

if(DEFINED CMAKE_BUILD_TYPE)
  set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS ${CMAKE_CONFIGURATION_TYPES})
endif()

# --[ If user doesn't specify build type then assume release
if("${CMAKE_BUILD_TYPE}" STREQUAL "")
  set(CMAKE_BUILD_TYPE Release)
endif()

if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
  set(CMAKE_COMPILER_IS_CLANGXX TRUE)
endif()

# ---[ Solution folders
caffe_option(USE_PROJECT_FOLDERS "IDE Solution folders" (MSVC_IDE OR CMAKE_GENERATOR MATCHES Xcode) )

if(USE_PROJECT_FOLDERS)
  set_property(GLOBAL PROPERTY USE_FOLDERS ON)
  set_property(GLOBAL PROPERTY PREDEFINED_TARGETS_FOLDER "CMakeTargets")
endif()

# ---[ Install options
if(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
  set(CMAKE_INSTALL_PREFIX "${PROJECT_BINARY_DIR}/install" CACHE PATH "Default install path" FORCE)
endif()

# ---[ RPATH settings
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE CACHE BOOL "Use link paths for shared library rpath")
set(CMAKE_MACOSX_RPATH TRUE)

list(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES ${CMAKE_INSTALL_PREFIX}/lib __is_system_dir)
if(${__is_system_dir} STREQUAL -1)
  set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/lib)
endif()

# ---[ Funny target
if(UNIX OR APPLE)
  add_custom_target(symlink_to_build COMMAND "ln" "-sf" "${PROJECT_BINARY_DIR}" "${PROJECT_SOURCE_DIR}/build"
                    COMMENT "Adding symlink: <caffe_root>/build -> ${PROJECT_BINARY_DIR}" )
endif()

# ---[ Set debug postfix
set(Caffe_DEBUG_POSTFIX "-d")

set(Caffe_POSTFIX "")
if(CMAKE_BUILD_TYPE MATCHES "Debug")
  set(Caffe_POSTFIX ${Caffe_DEBUG_POSTFIX})
endif()
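
Elsewhere in the build the postfix is attached to targets roughly as follows (a sketch, not the verbatim upstream code):

```cmake
set_target_properties(caffe PROPERTIES DEBUG_POSTFIX ${Caffe_DEBUG_POSTFIX})
# -> a Debug build produces libcaffe-d.so instead of libcaffe.so
```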

@@ -0,0 +1,52 @@

# Find the Atlas (and Lapack) libraries
#
# The following variables are optionally searched for defaults
#  Atlas_ROOT_DIR:  Base directory where all Atlas components are found
#
# The following are set after configuration is done:
#  Atlas_FOUND
#  Atlas_INCLUDE_DIRS
#  Atlas_LIBRARIES
#  Atlas_LIBRARY_DIRS

set(Atlas_INCLUDE_SEARCH_PATHS
  /usr/include/atlas
  /usr/include/atlas-base
  $ENV{Atlas_ROOT_DIR}
  $ENV{Atlas_ROOT_DIR}/include
)

set(Atlas_LIB_SEARCH_PATHS
  /usr/lib/atlas
  /usr/lib/atlas-base
  $ENV{Atlas_ROOT_DIR}
  $ENV{Atlas_ROOT_DIR}/lib
)

find_path(Atlas_CBLAS_INCLUDE_DIR   NAMES cblas.h   PATHS ${Atlas_INCLUDE_SEARCH_PATHS})
find_path(Atlas_CLAPACK_INCLUDE_DIR NAMES clapack.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})

find_library(Atlas_CBLAS_LIBRARY  NAMES ptcblas_r ptcblas cblas_r cblas       PATHS ${Atlas_LIB_SEARCH_PATHS})
find_library(Atlas_BLAS_LIBRARY   NAMES atlas_r atlas                         PATHS ${Atlas_LIB_SEARCH_PATHS})
find_library(Atlas_LAPACK_LIBRARY NAMES lapack alapack_r alapack lapack_atlas PATHS ${Atlas_LIB_SEARCH_PATHS})

set(LOOKED_FOR
  Atlas_CBLAS_INCLUDE_DIR
  Atlas_CLAPACK_INCLUDE_DIR

  Atlas_CBLAS_LIBRARY
  Atlas_BLAS_LIBRARY
  Atlas_LAPACK_LIBRARY
)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Atlas DEFAULT_MSG ${LOOKED_FOR})

if(ATLAS_FOUND)
  set(Atlas_INCLUDE_DIR ${Atlas_CBLAS_INCLUDE_DIR} ${Atlas_CLAPACK_INCLUDE_DIR})
  set(Atlas_LIBRARIES ${Atlas_LAPACK_LIBRARY} ${Atlas_CBLAS_LIBRARY} ${Atlas_BLAS_LIBRARY})
  mark_as_advanced(${LOOKED_FOR})

  message(STATUS "Found Atlas (include: ${Atlas_CBLAS_INCLUDE_DIR}, library: ${Atlas_BLAS_LIBRARY})")
endif(ATLAS_FOUND)

@@ -0,0 +1,50 @@

# - Try to find GFLAGS
#
# The following variables are optionally searched for defaults
#  GFLAGS_ROOT_DIR:  Base directory where all GFLAGS components are found
#
# The following are set after configuration is done:
#  GFLAGS_FOUND
#  GFLAGS_INCLUDE_DIRS
#  GFLAGS_LIBRARIES
#  GFLAGS_LIBRARY_DIRS

include(FindPackageHandleStandardArgs)

set(GFLAGS_ROOT_DIR "" CACHE PATH "Folder contains Gflags")

# We are testing only a couple of files in the include directories
if(WIN32)
  find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
    PATHS ${GFLAGS_ROOT_DIR}/src/windows)
else()
  find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
    PATHS ${GFLAGS_ROOT_DIR})
endif()

if(MSVC)
  find_library(GFLAGS_LIBRARY_RELEASE
    NAMES libgflags
    PATHS ${GFLAGS_ROOT_DIR}
    PATH_SUFFIXES Release)

  find_library(GFLAGS_LIBRARY_DEBUG
    NAMES libgflags-debug
    PATHS ${GFLAGS_ROOT_DIR}
    PATH_SUFFIXES Debug)

  set(GFLAGS_LIBRARY optimized ${GFLAGS_LIBRARY_RELEASE} debug ${GFLAGS_LIBRARY_DEBUG})
else()
  find_library(GFLAGS_LIBRARY gflags)
endif()

find_package_handle_standard_args(GFlags DEFAULT_MSG GFLAGS_INCLUDE_DIR GFLAGS_LIBRARY)

if(GFLAGS_FOUND)
  set(GFLAGS_INCLUDE_DIRS ${GFLAGS_INCLUDE_DIR})
  set(GFLAGS_LIBRARIES ${GFLAGS_LIBRARY})
  message(STATUS "Found gflags (include: ${GFLAGS_INCLUDE_DIR}, library: ${GFLAGS_LIBRARY})")
  mark_as_advanced(GFLAGS_LIBRARY_DEBUG GFLAGS_LIBRARY_RELEASE
                   GFLAGS_LIBRARY GFLAGS_INCLUDE_DIR GFLAGS_ROOT_DIR)
endif()

@@ -0,0 +1,48 @@

# - Try to find Glog
#
# The following variables are optionally searched for defaults
#  GLOG_ROOT_DIR:  Base directory where all GLOG components are found
#
# The following are set after configuration is done:
#  GLOG_FOUND
#  GLOG_INCLUDE_DIRS
#  GLOG_LIBRARIES
#  GLOG_LIBRARY_DIRS

include(FindPackageHandleStandardArgs)

set(GLOG_ROOT_DIR "" CACHE PATH "Folder contains Google glog")

if(WIN32)
  find_path(GLOG_INCLUDE_DIR glog/logging.h
    PATHS ${GLOG_ROOT_DIR}/src/windows)
else()
  find_path(GLOG_INCLUDE_DIR glog/logging.h
    PATHS ${GLOG_ROOT_DIR})
endif()

if(MSVC)
  find_library(GLOG_LIBRARY_RELEASE libglog_static
    PATHS ${GLOG_ROOT_DIR}
    PATH_SUFFIXES Release)

  find_library(GLOG_LIBRARY_DEBUG libglog_static
    PATHS ${GLOG_ROOT_DIR}
    PATH_SUFFIXES Debug)

  set(GLOG_LIBRARY optimized ${GLOG_LIBRARY_RELEASE} debug ${GLOG_LIBRARY_DEBUG})
else()
  find_library(GLOG_LIBRARY glog
    PATHS ${GLOG_ROOT_DIR}
    PATH_SUFFIXES lib lib64)
endif()

find_package_handle_standard_args(Glog DEFAULT_MSG GLOG_INCLUDE_DIR GLOG_LIBRARY)

if(GLOG_FOUND)
  set(GLOG_INCLUDE_DIRS ${GLOG_INCLUDE_DIR})
  set(GLOG_LIBRARIES ${GLOG_LIBRARY})
  message(STATUS "Found glog (include: ${GLOG_INCLUDE_DIR}, library: ${GLOG_LIBRARY})")
  mark_as_advanced(GLOG_ROOT_DIR GLOG_LIBRARY_RELEASE GLOG_LIBRARY_DEBUG
                   GLOG_LIBRARY GLOG_INCLUDE_DIR)
endif()

@@ -0,0 +1,190 @@

# - Find LAPACK library
# This module finds an installed fortran library that implements the LAPACK
# linear-algebra interface (see http://www.netlib.org/lapack/).
#
# The approach follows that taken for the autoconf macro file, acx_lapack.m4
# (distributed at http://ac-archive.sourceforge.net/ac-archive/acx_lapack.html).
#
# This module sets the following variables:
#  LAPACK_FOUND - set to true if a library implementing the LAPACK interface is found
#  LAPACK_LIBRARIES - list of libraries (using full path name) for LAPACK

# Note: I do not think it is a good idea to mix up different BLAS/LAPACK versions.
# Hence, this script wants to find a Lapack library matching your Blas library.

# Do nothing if LAPACK was found before
IF(NOT LAPACK_FOUND)

SET(LAPACK_LIBRARIES)
SET(LAPACK_INFO)

IF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
  FIND_PACKAGE(BLAS)
ELSE(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)
  FIND_PACKAGE(BLAS REQUIRED)
ENDIF(LAPACK_FIND_QUIETLY OR NOT LAPACK_FIND_REQUIRED)

# Old search lapack script
include(CheckFortranFunctionExists)

macro(Check_Lapack_Libraries LIBRARIES _prefix _name _flags _list _blas)
  # This macro checks for the existence of the combination of fortran libraries
  # given by _list. If the combination is found, this macro checks (using the
  # Check_Fortran_Function_Exists macro) whether one can link against that library
  # combination using the name of a routine given by _name using the linker
  # flags given by _flags. If the combination of libraries is found and passes
  # the link test, LIBRARIES is set to the list of complete library paths that
  # have been found. Otherwise, LIBRARIES is set to FALSE.
  # N.B. _prefix is the prefix applied to the names of all cached variables that
  # are generated internally and marked advanced by this macro.
  set(_libraries_work TRUE)
  set(${LIBRARIES})
  set(_combined_name)
  foreach(_library ${_list})
    set(_combined_name ${_combined_name}_${_library})
    if(_libraries_work)
      if (WIN32)
        find_library(${_prefix}_${_library}_LIBRARY
          NAMES ${_library} PATHS ENV LIB PATHS ENV PATH)
      else (WIN32)
        if(APPLE)
          find_library(${_prefix}_${_library}_LIBRARY
            NAMES ${_library}
            PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64
            ENV DYLD_LIBRARY_PATH)
        else(APPLE)
          find_library(${_prefix}_${_library}_LIBRARY
            NAMES ${_library}
            PATHS /usr/local/lib /usr/lib /usr/local/lib64 /usr/lib64
            ENV LD_LIBRARY_PATH)
        endif(APPLE)
      endif(WIN32)
      mark_as_advanced(${_prefix}_${_library}_LIBRARY)
      set(${LIBRARIES} ${${LIBRARIES}} ${${_prefix}_${_library}_LIBRARY})
      set(_libraries_work ${${_prefix}_${_library}_LIBRARY})
    endif(_libraries_work)
  endforeach(_library ${_list})
  if(_libraries_work)
    # Test this combination of libraries.
    set(CMAKE_REQUIRED_LIBRARIES ${_flags} ${${LIBRARIES}} ${_blas})
    if (CMAKE_Fortran_COMPILER_WORKS)
      check_fortran_function_exists(${_name} ${_prefix}${_combined_name}_WORKS)
    else (CMAKE_Fortran_COMPILER_WORKS)
      check_function_exists("${_name}_" ${_prefix}${_combined_name}_WORKS)
    endif (CMAKE_Fortran_COMPILER_WORKS)
    set(CMAKE_REQUIRED_LIBRARIES)
    mark_as_advanced(${_prefix}${_combined_name}_WORKS)
    set(_libraries_work ${${_prefix}${_combined_name}_WORKS})
  endif(_libraries_work)
  if(NOT _libraries_work)
    set(${LIBRARIES} FALSE)
  endif(NOT _libraries_work)
endmacro(Check_Lapack_Libraries)


if(BLAS_FOUND)

  # Intel MKL
  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "mkl"))
    IF(MKL_LAPACK_LIBRARIES)
      SET(LAPACK_LIBRARIES ${MKL_LAPACK_LIBRARIES} ${MKL_LIBRARIES})
    ELSE(MKL_LAPACK_LIBRARIES)
      SET(LAPACK_LIBRARIES ${MKL_LIBRARIES})
    ENDIF(MKL_LAPACK_LIBRARIES)
    SET(LAPACK_INCLUDE_DIR ${MKL_INCLUDE_DIR})
    SET(LAPACK_INFO "mkl")
  ENDIF()

  # OpenBlas
  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "open"))
    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
    check_function_exists("cheev_" OPEN_LAPACK_WORKS)
    if(OPEN_LAPACK_WORKS)
      SET(LAPACK_INFO "open")
    else()
      message(STATUS "It seems OpenBlas has not been compiled with Lapack support")
    endif()
  endif()

  # GotoBlas
  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "goto"))
    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
    check_function_exists("cheev_" GOTO_LAPACK_WORKS)
    if(GOTO_LAPACK_WORKS)
      SET(LAPACK_INFO "goto")
    else()
      message(STATUS "It seems GotoBlas has not been compiled with Lapack support")
    endif()
  endif()

  # ACML
  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "acml"))
    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
    check_function_exists("cheev_" ACML_LAPACK_WORKS)
    if(ACML_LAPACK_WORKS)
      SET(LAPACK_INFO "acml")
    else()
      message(STATUS "Strangely, this ACML library does not support Lapack?!")
    endif()
  endif()

  # Accelerate
  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "accelerate"))
    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
    check_function_exists("cheev_" ACCELERATE_LAPACK_WORKS)
    if(ACCELERATE_LAPACK_WORKS)
      SET(LAPACK_INFO "accelerate")
    else()
      message(STATUS "Strangely, this Accelerate library does not support Lapack?!")
    endif()
  endif()

  # vecLib
  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "veclib"))
    SET(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
    check_function_exists("cheev_" VECLIB_LAPACK_WORKS)
    if(VECLIB_LAPACK_WORKS)
      SET(LAPACK_INFO "veclib")
    else()
      message(STATUS "Strangely, this vecLib library does not support Lapack?!")
    endif()
  endif()

  # Generic LAPACK library?
  IF((NOT LAPACK_INFO) AND (BLAS_INFO STREQUAL "generic"))
    check_lapack_libraries(
      LAPACK_LIBRARIES
      LAPACK
      cheev
      ""
      "lapack"
      "${BLAS_LIBRARIES}"
      )
    if(LAPACK_LIBRARIES)
      SET(LAPACK_INFO "generic")
    endif(LAPACK_LIBRARIES)
  endif()

else(BLAS_FOUND)
  message(STATUS "LAPACK requires BLAS")
endif(BLAS_FOUND)

if(LAPACK_INFO)
  set(LAPACK_FOUND TRUE)
else(LAPACK_INFO)
  set(LAPACK_FOUND FALSE)
endif(LAPACK_INFO)

IF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)
  message(FATAL_ERROR "Cannot find a library with LAPACK API. Please specify library location.")
ENDIF (NOT LAPACK_FOUND AND LAPACK_FIND_REQUIRED)
IF(NOT LAPACK_FIND_QUIETLY)
  IF(LAPACK_FOUND)
    MESSAGE(STATUS "Found a library with LAPACK API. (${LAPACK_INFO})")
  ELSE(LAPACK_FOUND)
    MESSAGE(STATUS "Cannot find a library with LAPACK API. Not using LAPACK.")
  ENDIF(LAPACK_FOUND)
ENDIF(NOT LAPACK_FIND_QUIETLY)

# Do nothing if LAPACK was found before
ENDIF(NOT LAPACK_FOUND)

@@ -0,0 +1,28 @@

# Try to find the LMDB libraries and headers
#  LMDB_FOUND        - system has LMDB lib
#  LMDB_INCLUDE_DIR  - the LMDB include directory
#  LMDB_LIBRARIES    - Libraries needed to use LMDB

# FindCWD based on FindGMP by:
# Copyright (c) 2006, Laurent Montel, <montel@kde.org>
#
# Redistribution and use is allowed according to the terms of the BSD license.

# Adapted from FindCWD by:
# Copyright 2013 Conrad Steenberg <conrad.steenberg@gmail.com>
# Aug 31, 2013

find_path(LMDB_INCLUDE_DIR NAMES lmdb.h PATHS "$ENV{LMDB_DIR}/include")
find_library(LMDB_LIBRARIES NAMES lmdb PATHS "$ENV{LMDB_DIR}/lib" )

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(LMDB DEFAULT_MSG LMDB_INCLUDE_DIR LMDB_LIBRARIES)

if(LMDB_FOUND)
  message(STATUS "Found lmdb (include: ${LMDB_INCLUDE_DIR}, library: ${LMDB_LIBRARIES})")
  mark_as_advanced(LMDB_INCLUDE_DIR LMDB_LIBRARIES)

  caffe_parse_header(${LMDB_INCLUDE_DIR}/lmdb.h
                     LMDB_VERSION_LINES MDB_VERSION_MAJOR MDB_VERSION_MINOR MDB_VERSION_PATCH)
  set(LMDB_VERSION "${MDB_VERSION_MAJOR}.${MDB_VERSION_MINOR}.${MDB_VERSION_PATCH}")
endif()
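
Callers get the parsed version for free; a sketch:

```cmake
find_package(LMDB REQUIRED)
message(STATUS "Building against LMDB ${LMDB_VERSION}")  # e.g. "0.9.21" (illustrative)
```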

@@ -0,0 +1,44 @@

# - Find LevelDB
#
#  LevelDB_INCLUDES  - List of LevelDB includes
#  LevelDB_LIBRARIES - List of libraries when using LevelDB.
#  LevelDB_FOUND     - True if LevelDB found.

# Look for the header file.
find_path(LevelDB_INCLUDE NAMES leveldb/db.h
          PATHS $ENV{LEVELDB_ROOT}/include /opt/local/include /usr/local/include /usr/include
          DOC "Path in which the file leveldb/db.h is located." )

# Look for the library.
find_library(LevelDB_LIBRARY NAMES leveldb
             PATHS /usr/lib $ENV{LEVELDB_ROOT}/lib
             DOC "Path to leveldb library." )

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(LevelDB DEFAULT_MSG LevelDB_INCLUDE LevelDB_LIBRARY)

if(LEVELDB_FOUND)
  message(STATUS "Found LevelDB (include: ${LevelDB_INCLUDE}, library: ${LevelDB_LIBRARY})")
  set(LevelDB_INCLUDES ${LevelDB_INCLUDE})
  set(LevelDB_LIBRARIES ${LevelDB_LIBRARY})
  mark_as_advanced(LevelDB_INCLUDE LevelDB_LIBRARY)

  if(EXISTS "${LevelDB_INCLUDE}/leveldb/db.h")
    file(STRINGS "${LevelDB_INCLUDE}/leveldb/db.h" __version_lines
         REGEX "static const int k[^V]+Version[ \t]+=[ \t]+[0-9]+;")

    foreach(__line ${__version_lines})
      if(__line MATCHES "[^k]+kMajorVersion[ \t]+=[ \t]+([0-9]+);")
        set(LEVELDB_VERSION_MAJOR ${CMAKE_MATCH_1})
      elseif(__line MATCHES "[^k]+kMinorVersion[ \t]+=[ \t]+([0-9]+);")
        set(LEVELDB_VERSION_MINOR ${CMAKE_MATCH_1})
      endif()
    endforeach()

    if(LEVELDB_VERSION_MAJOR AND LEVELDB_VERSION_MINOR)
      set(LEVELDB_VERSION "${LEVELDB_VERSION_MAJOR}.${LEVELDB_VERSION_MINOR}")
    endif()

    caffe_clear_vars(__line __version_lines)
  endif()
endif()

@@ -0,0 +1,110 @@

# Find the MKL libraries
#
# Options:
#
#   MKL_USE_SINGLE_DYNAMIC_LIBRARY  : use single dynamic library interface
#   MKL_USE_STATIC_LIBS             : use static libraries
#   MKL_MULTI_THREADED              : use multi-threading
#
# This module defines the following variables:
#
#   MKL_FOUND            : True if MKL is found
#   MKL_INCLUDE_DIR      : include directory
#   MKL_LIBRARIES        : the libraries to link against.


# ---[ Options
caffe_option(MKL_USE_SINGLE_DYNAMIC_LIBRARY "Use single dynamic library interface" ON)
caffe_option(MKL_USE_STATIC_LIBS "Use static libraries" OFF IF NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)
caffe_option(MKL_MULTI_THREADED  "Use multi-threading"   ON IF NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)

# ---[ Root folders
set(INTEL_ROOT "/opt/intel" CACHE PATH "Folder contains intel libs")
find_path(MKL_ROOT include/mkl.h PATHS $ENV{MKLROOT} ${INTEL_ROOT}/mkl
          DOC "Folder contains MKL")

# ---[ Find include dir
find_path(MKL_INCLUDE_DIR mkl.h PATHS ${MKL_ROOT} PATH_SUFFIXES include)
set(__looked_for MKL_INCLUDE_DIR)

# ---[ Find libraries
if(CMAKE_SIZEOF_VOID_P EQUAL 4)
  set(__path_suffixes lib lib/ia32)
else()
  set(__path_suffixes lib lib/intel64)
endif()

set(__mkl_libs "")
if(MKL_USE_SINGLE_DYNAMIC_LIBRARY)
  list(APPEND __mkl_libs rt)
else()
  if(CMAKE_SIZEOF_VOID_P EQUAL 4)
    if(WIN32)
      list(APPEND __mkl_libs intel_c)
    else()
      list(APPEND __mkl_libs intel gf)
    endif()
  else()
    list(APPEND __mkl_libs intel_lp64 gf_lp64)
  endif()

  if(MKL_MULTI_THREADED)
    list(APPEND __mkl_libs intel_thread)
  else()
    list(APPEND __mkl_libs sequential)
  endif()

  list(APPEND __mkl_libs core cdft_core)
endif()


foreach (__lib ${__mkl_libs})
  set(__mkl_lib "mkl_${__lib}")
  string(TOUPPER ${__mkl_lib} __mkl_lib_upper)

  if(MKL_USE_STATIC_LIBS)
    set(__mkl_lib "lib${__mkl_lib}.a")
  endif()

  find_library(${__mkl_lib_upper}_LIBRARY
    NAMES ${__mkl_lib}
    PATHS ${MKL_ROOT} "${MKL_INCLUDE_DIR}/.."
    PATH_SUFFIXES ${__path_suffixes}
    DOC "The path to Intel(R) MKL ${__mkl_lib} library")
  mark_as_advanced(${__mkl_lib_upper}_LIBRARY)

  list(APPEND __looked_for ${__mkl_lib_upper}_LIBRARY)
  list(APPEND MKL_LIBRARIES ${${__mkl_lib_upper}_LIBRARY})
endforeach()


if(NOT MKL_USE_SINGLE_DYNAMIC_LIBRARY)
  if (MKL_USE_STATIC_LIBS)
    set(__iomp5_libs iomp5 libiomp5mt.lib)
  else()
    set(__iomp5_libs iomp5 libiomp5md.lib)
  endif()

  if(WIN32)
    find_path(INTEL_INCLUDE_DIR omp.h PATHS ${INTEL_ROOT} PATH_SUFFIXES include)
    list(APPEND __looked_for INTEL_INCLUDE_DIR)
  endif()

  find_library(MKL_RTL_LIBRARY ${__iomp5_libs}
     PATHS ${INTEL_RTL_ROOT} ${INTEL_ROOT}/compiler ${MKL_ROOT}/.. ${MKL_ROOT}/../compiler
     PATH_SUFFIXES ${__path_suffixes}
     DOC "Path to OpenMP runtime library")

  list(APPEND __looked_for MKL_RTL_LIBRARY)
  list(APPEND MKL_LIBRARIES ${MKL_RTL_LIBRARY})
endif()


include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(MKL DEFAULT_MSG ${__looked_for})

if(MKL_FOUND)
  message(STATUS "Found MKL (include: ${MKL_INCLUDE_DIR}, lib: ${MKL_LIBRARIES})")
endif()

caffe_clear_vars(__looked_for __mkl_libs __path_suffixes __lib_suffix __iomp5_libs)

@@ -0,0 +1,48 @@

# This module looks for the MatlabMex compiler
# Defines variables:
#  Matlab_DIR    - Matlab root dir
#  Matlab_mex    - path to mex compiler
#  Matlab_mexext - path to mexext

if(MSVC)
  foreach(__ver "9.30" "7.14" "7.11" "7.10" "7.9" "7.8" "7.7")
    get_filename_component(__matlab_root "[HKEY_LOCAL_MACHINE\\SOFTWARE\\MathWorks\\MATLAB\\${__ver};MATLABROOT]" ABSOLUTE)
    if(__matlab_root)
      break()
    endif()
  endforeach()
endif()

if(APPLE)
  foreach(__ver "R2014b" "R2014a" "R2013b" "R2013a" "R2012b" "R2012a" "R2011b" "R2011a" "R2010b" "R2010a")
    if(EXISTS /Applications/MATLAB_${__ver}.app)
      set(__matlab_root /Applications/MATLAB_${__ver}.app)
      break()
    endif()
  endforeach()
endif()

if(UNIX)
  execute_process(COMMAND which matlab OUTPUT_STRIP_TRAILING_WHITESPACE
                  OUTPUT_VARIABLE __out RESULT_VARIABLE __res)

  if(__res MATCHES 0) # Suppress `readlink` warning if `which` returned nothing
    execute_process(COMMAND which matlab COMMAND xargs readlink
                    COMMAND xargs dirname COMMAND xargs dirname COMMAND xargs echo -n
                    OUTPUT_VARIABLE __matlab_root OUTPUT_STRIP_TRAILING_WHITESPACE)
  endif()
endif()


find_path(Matlab_DIR NAMES bin/mex bin/mexext PATHS ${__matlab_root}
          DOC "Matlab directory" NO_DEFAULT_PATH)

find_program(Matlab_mex NAMES mex mex.bat HINTS ${Matlab_DIR} PATH_SUFFIXES bin NO_DEFAULT_PATH)
find_program(Matlab_mexext NAMES mexext mexext.bat HINTS ${Matlab_DIR} PATH_SUFFIXES bin NO_DEFAULT_PATH)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(MatlabMex DEFAULT_MSG Matlab_mex Matlab_mexext)

if(MATLABMEX_FOUND)
  mark_as_advanced(Matlab_mex Matlab_mexext)
endif()
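
Usage sketch:

```cmake
find_package(MatlabMex)
if(MATLABMEX_FOUND)
  message(STATUS "mex: ${Matlab_mex}; mexext: ${Matlab_mexext}")
endif()
```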

@@ -0,0 +1,26 @@

set(NCCL_INC_PATHS
    /usr/include
    /usr/local/include
    $ENV{NCCL_DIR}/include
    )

set(NCCL_LIB_PATHS
    /lib
    /lib64
    /usr/lib
    /usr/lib64
    /usr/local/lib
    /usr/local/lib64
    $ENV{NCCL_DIR}/lib
    )

find_path(NCCL_INCLUDE_DIR NAMES nccl.h PATHS ${NCCL_INC_PATHS})
find_library(NCCL_LIBRARIES NAMES nccl PATHS ${NCCL_LIB_PATHS})

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(NCCL DEFAULT_MSG NCCL_INCLUDE_DIR NCCL_LIBRARIES)

if (NCCL_FOUND)
  message(STATUS "Found NCCL (include: ${NCCL_INCLUDE_DIR}, library: ${NCCL_LIBRARIES})")
  mark_as_advanced(NCCL_INCLUDE_DIR NCCL_LIBRARIES)
endif ()

@@ -0,0 +1,58 @@

# - Find the NumPy libraries
# This module finds if NumPy is installed, and sets the following variables
# indicating where it is.
#
# TODO: Update to provide the libraries and paths for linking npymath lib.
#
#  NUMPY_FOUND            - was NumPy found
#  NUMPY_VERSION          - the version of NumPy found as a string
#  NUMPY_VERSION_MAJOR    - the major version number of NumPy
#  NUMPY_VERSION_MINOR    - the minor version number of NumPy
#  NUMPY_VERSION_PATCH    - the patch version number of NumPy
#  NUMPY_VERSION_DECIMAL  - e.g. version 1.6.1 is 10601
#  NUMPY_INCLUDE_DIR      - path to the NumPy include files

unset(NUMPY_VERSION)
unset(NUMPY_INCLUDE_DIR)

if(PYTHONINTERP_FOUND)
  execute_process(COMMAND "${PYTHON_EXECUTABLE}" "-c"
                  "import numpy as n; print(n.__version__); print(n.get_include());"
                  RESULT_VARIABLE __result
                  OUTPUT_VARIABLE __output
                  OUTPUT_STRIP_TRAILING_WHITESPACE)

  if(__result MATCHES 0)
    string(REGEX REPLACE ";" "\\\\;" __values ${__output})
    string(REGEX REPLACE "\r?\n" ";" __values ${__values})
    list(GET __values 0 NUMPY_VERSION)
    list(GET __values 1 NUMPY_INCLUDE_DIR)

    string(REGEX MATCH "^([0-9])+\\.([0-9])+\\.([0-9])+" __ver_check "${NUMPY_VERSION}")
    if(NOT "${__ver_check}" STREQUAL "")
      set(NUMPY_VERSION_MAJOR ${CMAKE_MATCH_1})
      set(NUMPY_VERSION_MINOR ${CMAKE_MATCH_2})
      set(NUMPY_VERSION_PATCH ${CMAKE_MATCH_3})
      math(EXPR NUMPY_VERSION_DECIMAL
           "(${NUMPY_VERSION_MAJOR} * 10000) + (${NUMPY_VERSION_MINOR} * 100) + ${NUMPY_VERSION_PATCH}")
      string(REGEX REPLACE "\\\\" "/" NUMPY_INCLUDE_DIR ${NUMPY_INCLUDE_DIR})
    else()
      unset(NUMPY_VERSION)
      unset(NUMPY_INCLUDE_DIR)
      message(STATUS "Requested NumPy version and include path, but got instead:\n${__output}\n")
    endif()
  endif()
else()
  message(STATUS "NumPy detection requires a Python interpreter, which was not found.")
endif()

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(NumPy REQUIRED_VARS NUMPY_INCLUDE_DIR NUMPY_VERSION
                                  VERSION_VAR NUMPY_VERSION)

if(NUMPY_FOUND)
  message(STATUS "NumPy ver. ${NUMPY_VERSION} found (include: ${NUMPY_INCLUDE_DIR})")
endif()

caffe_clear_vars(__result __output __error_value __values __ver_check)
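
The decimal encoding makes ordered version comparisons cheap; a sketch:

```cmake
find_package(NumPy)
if(NUMPY_FOUND AND NUMPY_VERSION_DECIMAL LESS 10701)  # 1.7.1 -> 10701
  message(WARNING "NumPy ${NUMPY_VERSION} found; 1.7.1 or newer is recommended")
endif()
```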

@@ -0,0 +1,64 @@

SET(Open_BLAS_INCLUDE_SEARCH_PATHS
  /usr/include
  /usr/include/openblas
  /usr/include/openblas-base
  /usr/local/include
  /usr/local/include/openblas
  /usr/local/include/openblas-base
  /opt/OpenBLAS/include
  $ENV{OpenBLAS_HOME}
  $ENV{OpenBLAS_HOME}/include
)

SET(Open_BLAS_LIB_SEARCH_PATHS
  /lib/
  /lib/openblas-base
  /lib64/
  /usr/lib
  /usr/lib/openblas-base
  /usr/lib64
  /usr/local/lib
  /usr/local/lib64
  /opt/OpenBLAS/lib
  $ENV{OpenBLAS}
  $ENV{OpenBLAS}/lib
  $ENV{OpenBLAS_HOME}
  $ENV{OpenBLAS_HOME}/lib
)

FIND_PATH(OpenBLAS_INCLUDE_DIR NAMES cblas.h PATHS ${Open_BLAS_INCLUDE_SEARCH_PATHS})
FIND_LIBRARY(OpenBLAS_LIB NAMES openblas PATHS ${Open_BLAS_LIB_SEARCH_PATHS})

SET(OpenBLAS_FOUND ON)

# Check include files
IF(NOT OpenBLAS_INCLUDE_DIR)
  SET(OpenBLAS_FOUND OFF)
  MESSAGE(STATUS "Could not find OpenBLAS include. Turning OpenBLAS_FOUND off")
ENDIF()

# Check libraries
IF(NOT OpenBLAS_LIB)
  SET(OpenBLAS_FOUND OFF)
  MESSAGE(STATUS "Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off")
ENDIF()

IF (OpenBLAS_FOUND)
  IF (NOT OpenBLAS_FIND_QUIETLY)
    MESSAGE(STATUS "Found OpenBLAS libraries: ${OpenBLAS_LIB}")
    MESSAGE(STATUS "Found OpenBLAS include: ${OpenBLAS_INCLUDE_DIR}")
  ENDIF (NOT OpenBLAS_FIND_QUIETLY)
ELSE (OpenBLAS_FOUND)
  IF (OpenBLAS_FIND_REQUIRED)
    MESSAGE(FATAL_ERROR "Could not find OpenBLAS")
  ENDIF (OpenBLAS_FIND_REQUIRED)
ENDIF (OpenBLAS_FOUND)

MARK_AS_ADVANCED(
  OpenBLAS_INCLUDE_DIR
  OpenBLAS_LIB
  OpenBLAS
)

@@ -0,0 +1,28 @@

# Find the Snappy libraries
#
# The following variables are optionally searched for defaults
#  Snappy_ROOT_DIR:  Base directory where all Snappy components are found
#
# The following are set after configuration is done:
#  SNAPPY_FOUND
#  Snappy_INCLUDE_DIR
#  Snappy_LIBRARIES

find_path(Snappy_INCLUDE_DIR NAMES snappy.h
                             PATHS ${SNAPPY_ROOT_DIR} ${SNAPPY_ROOT_DIR}/include)

find_library(Snappy_LIBRARIES NAMES snappy
                              PATHS ${SNAPPY_ROOT_DIR} ${SNAPPY_ROOT_DIR}/lib)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Snappy DEFAULT_MSG Snappy_INCLUDE_DIR Snappy_LIBRARIES)

if(SNAPPY_FOUND)
  message(STATUS "Found Snappy (include: ${Snappy_INCLUDE_DIR}, library: ${Snappy_LIBRARIES})")
  mark_as_advanced(Snappy_INCLUDE_DIR Snappy_LIBRARIES)

  caffe_parse_header(${Snappy_INCLUDE_DIR}/snappy-stubs-public.h
                     SNAPPY_VERSION_LINES SNAPPY_MAJOR SNAPPY_MINOR SNAPPY_PATCHLEVEL)
  set(Snappy_VERSION "${SNAPPY_MAJOR}.${SNAPPY_MINOR}.${SNAPPY_PATCHLEVEL}")
endif()

@@ -0,0 +1,35 @@

# Find the vecLib libraries as part of Accelerate.framework or as a standalone framework
#
# The following are set after configuration is done:
#  VECLIB_FOUND
#  vecLib_INCLUDE_DIR
#  vecLib_LINKER_LIBS


if(NOT APPLE)
  return()
endif()

set(__veclib_include_suffix "Frameworks/vecLib.framework/Versions/Current/Headers")

find_path(vecLib_INCLUDE_DIR vecLib.h
          DOC "vecLib include directory"
          PATHS /System/Library/Frameworks/Accelerate.framework/Versions/Current/${__veclib_include_suffix}
                /System/Library/${__veclib_include_suffix}
                /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/
          NO_DEFAULT_PATH)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(vecLib DEFAULT_MSG vecLib_INCLUDE_DIR)

if(VECLIB_FOUND)
  if(vecLib_INCLUDE_DIR MATCHES "^/System/Library/Frameworks/vecLib.framework.*")
    set(vecLib_LINKER_LIBS -lcblas "-framework vecLib")
    message(STATUS "Found standalone vecLib.framework")
  else()
    set(vecLib_LINKER_LIBS -lcblas "-framework Accelerate")
    message(STATUS "Found vecLib as part of Accelerate.framework")
  endif()

  mark_as_advanced(vecLib_INCLUDE_DIR)
endif()
|
|
@ -0,0 +1,90 @@
# Finds Google Protocol Buffers library and compilers and extends
# the standard cmake script with version and python generation support

find_package( Protobuf REQUIRED )
list(APPEND Caffe_INCLUDE_DIRS PUBLIC ${PROTOBUF_INCLUDE_DIR})
list(APPEND Caffe_LINKER_LIBS PUBLIC ${PROTOBUF_LIBRARIES})

# As of Ubuntu 14.04 protoc is no longer a part of libprotobuf-dev package
# and should be installed separately as in: sudo apt-get install protobuf-compiler
if(EXISTS ${PROTOBUF_PROTOC_EXECUTABLE})
  message(STATUS "Found PROTOBUF Compiler: ${PROTOBUF_PROTOC_EXECUTABLE}")
else()
  message(FATAL_ERROR "Could not find PROTOBUF Compiler")
endif()

if(PROTOBUF_FOUND)
  # fetches protobuf version
  caffe_parse_header(${PROTOBUF_INCLUDE_DIR}/google/protobuf/stubs/common.h VERSION_LINE GOOGLE_PROTOBUF_VERSION)
  string(REGEX MATCH "([0-9])00([0-9])00([0-9])" PROTOBUF_VERSION ${GOOGLE_PROTOBUF_VERSION})
  set(PROTOBUF_VERSION "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}.${CMAKE_MATCH_3}")
  unset(GOOGLE_PROTOBUF_VERSION)
endif()

# place where to generate protobuf sources
set(proto_gen_folder "${PROJECT_BINARY_DIR}/include/caffe/proto")
include_directories("${PROJECT_BINARY_DIR}/include")

set(PROTOBUF_GENERATE_CPP_APPEND_PATH TRUE)

################################################################################################
# Modification of standard 'protobuf_generate_cpp()' with output dir parameter and python support
# Usage:
#   caffe_protobuf_generate_cpp_py(<output_dir> <srcs_var> <hdrs_var> <python_var> <proto_files>)
function(caffe_protobuf_generate_cpp_py output_dir srcs_var hdrs_var python_var)
  if(NOT ARGN)
    message(SEND_ERROR "Error: caffe_protobuf_generate_cpp_py() called without any proto files")
    return()
  endif()

  if(PROTOBUF_GENERATE_CPP_APPEND_PATH)
    # Create an include path for each file specified
    foreach(fil ${ARGN})
      get_filename_component(abs_fil ${fil} ABSOLUTE)
      get_filename_component(abs_path ${abs_fil} PATH)
      list(FIND _protoc_include ${abs_path} _contains_already)
      if(${_contains_already} EQUAL -1)
        list(APPEND _protoc_include -I ${abs_path})
      endif()
    endforeach()
  else()
    set(_protoc_include -I ${CMAKE_CURRENT_SOURCE_DIR})
  endif()

  if(DEFINED PROTOBUF_IMPORT_DIRS)
    foreach(dir ${PROTOBUF_IMPORT_DIRS})
      get_filename_component(abs_path ${dir} ABSOLUTE)
      list(FIND _protoc_include ${abs_path} _contains_already)
      if(${_contains_already} EQUAL -1)
        list(APPEND _protoc_include -I ${abs_path})
      endif()
    endforeach()
  endif()

  set(${srcs_var})
  set(${hdrs_var})
  set(${python_var})
  foreach(fil ${ARGN})
    get_filename_component(abs_fil ${fil} ABSOLUTE)
    get_filename_component(fil_we ${fil} NAME_WE)

    list(APPEND ${srcs_var} "${output_dir}/${fil_we}.pb.cc")
    list(APPEND ${hdrs_var} "${output_dir}/${fil_we}.pb.h")
    list(APPEND ${python_var} "${output_dir}/${fil_we}_pb2.py")

    add_custom_command(
      OUTPUT "${output_dir}/${fil_we}.pb.cc"
             "${output_dir}/${fil_we}.pb.h"
             "${output_dir}/${fil_we}_pb2.py"
      COMMAND ${CMAKE_COMMAND} -E make_directory "${output_dir}"
      COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --cpp_out    ${output_dir} ${_protoc_include} ${abs_fil}
      COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --python_out ${output_dir} ${_protoc_include} ${abs_fil}
      DEPENDS ${abs_fil}
      COMMENT "Running C++/Python protocol buffer compiler on ${fil}" VERBATIM )
  endforeach()

  set_source_files_properties(${${srcs_var}} ${${hdrs_var}} ${${python_var}} PROPERTIES GENERATED TRUE)
  set(${srcs_var} ${${srcs_var}} PARENT_SCOPE)
  set(${hdrs_var} ${${hdrs_var}} PARENT_SCOPE)
  set(${python_var} ${${python_var}} PARENT_SCOPE)
endfunction()
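# Example call (a sketch; the caffe.proto path is illustrative):
#
#   caffe_protobuf_generate_cpp_py(${proto_gen_folder} proto_srcs proto_hdrs proto_python
#                                  ${PROJECT_SOURCE_DIR}/src/caffe/proto/caffe.proto)
#
# Afterwards proto_srcs/proto_hdrs/proto_python list the generated
# .pb.cc/.pb.h/_pb2.py files, ready to be added to a target's sources.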
@ -0,0 +1,178 @@
################################################################################################
# Caffe status report function.
# Automatically aligns the right column and selects text based on condition.
# Usage:
#   caffe_status(<text>)
#   caffe_status(<heading> <value1> [<value2> ...])
#   caffe_status(<heading> <condition> THEN <text for TRUE> ELSE <text for FALSE> )
function(caffe_status text)
  set(status_cond)
  set(status_then)
  set(status_else)

  set(status_current_name "cond")
  foreach(arg ${ARGN})
    if(arg STREQUAL "THEN")
      set(status_current_name "then")
    elseif(arg STREQUAL "ELSE")
      set(status_current_name "else")
    else()
      list(APPEND status_${status_current_name} ${arg})
    endif()
  endforeach()

  if(DEFINED status_cond)
    set(status_placeholder_length 23)
    string(RANDOM LENGTH ${status_placeholder_length} ALPHABET " " status_placeholder)
    string(LENGTH "${text}" status_text_length)
    if(status_text_length LESS status_placeholder_length)
      string(SUBSTRING "${text}${status_placeholder}" 0 ${status_placeholder_length} status_text)
    elseif(DEFINED status_then OR DEFINED status_else)
      message(STATUS "${text}")
      set(status_text "${status_placeholder}")
    else()
      set(status_text "${text}")
    endif()

    if(DEFINED status_then OR DEFINED status_else)
      if(${status_cond})
        string(REPLACE ";" " " status_then "${status_then}")
        string(REGEX REPLACE "^[ \t]+" "" status_then "${status_then}")
        message(STATUS "${status_text} ${status_then}")
      else()
        string(REPLACE ";" " " status_else "${status_else}")
        string(REGEX REPLACE "^[ \t]+" "" status_else "${status_else}")
        message(STATUS "${status_text} ${status_else}")
      endif()
    else()
      string(REPLACE ";" " " status_cond "${status_cond}")
      string(REGEX REPLACE "^[ \t]+" "" status_cond "${status_cond}")
      message(STATUS "${status_text} ${status_cond}")
    endif()
  else()
    message(STATUS "${text}")
  endif()
endfunction()
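# Example of the three call forms (a sketch):
#
#   caffe_status("General:")                                     # plain text
#   caffe_status("  System :" ${CMAKE_SYSTEM_NAME})              # aligned heading + value
#   caffe_status("  CUDA   :" HAVE_CUDA THEN "Yes" ELSE "No")    # condition selects the text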

################################################################################################
# Function for fetching Caffe version from git and headers
# Usage:
#   caffe_extract_caffe_version()
function(caffe_extract_caffe_version)
  set(Caffe_GIT_VERSION "unknown")
  find_package(Git)
  if(GIT_FOUND)
    execute_process(COMMAND ${GIT_EXECUTABLE} describe --tags --always --dirty
                    ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE
                    WORKING_DIRECTORY "${PROJECT_SOURCE_DIR}"
                    OUTPUT_VARIABLE Caffe_GIT_VERSION
                    RESULT_VARIABLE __git_result)
    if(NOT ${__git_result} EQUAL 0)
      set(Caffe_GIT_VERSION "unknown")
    endif()
  endif()

  set(Caffe_GIT_VERSION ${Caffe_GIT_VERSION} PARENT_SCOPE)
  set(Caffe_VERSION "<TODO> (Caffe doesn't declare its version in headers)" PARENT_SCOPE)

  # caffe_parse_header(${Caffe_INCLUDE_DIR}/caffe/version.hpp Caffe_VERSION_LINES CAFFE_MAJOR CAFFE_MINOR CAFFE_PATCH)
  # set(Caffe_VERSION "${CAFFE_MAJOR}.${CAFFE_MINOR}.${CAFFE_PATCH}" PARENT_SCOPE)

  # or for #define Caffe_VERSION "x.x.x"
  # caffe_parse_header_single_define(Caffe ${Caffe_INCLUDE_DIR}/caffe/version.hpp Caffe_VERSION)
  # set(Caffe_VERSION ${Caffe_VERSION_STRING} PARENT_SCOPE)

endfunction()


################################################################################################
# Prints accumulated caffe configuration summary
# Usage:
#   caffe_print_configuration_summary()

function(caffe_print_configuration_summary)
  caffe_extract_caffe_version()
  set(Caffe_VERSION ${Caffe_VERSION} PARENT_SCOPE)

  caffe_merge_flag_lists(__flags_rel CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS)
  caffe_merge_flag_lists(__flags_deb CMAKE_CXX_FLAGS_DEBUG   CMAKE_CXX_FLAGS)

  caffe_status("")
  caffe_status("******************* Caffe Configuration Summary *******************")
  caffe_status("General:")
  caffe_status("  Version           :   ${CAFFE_TARGET_VERSION}")
  caffe_status("  Git               :   ${Caffe_GIT_VERSION}")
  caffe_status("  System            :   ${CMAKE_SYSTEM_NAME}")
  caffe_status("  C++ compiler      :   ${CMAKE_CXX_COMPILER}")
  caffe_status("  Release CXX flags :   ${__flags_rel}")
  caffe_status("  Debug CXX flags   :   ${__flags_deb}")
  caffe_status("  Build type        :   ${CMAKE_BUILD_TYPE}")
  caffe_status("")
  caffe_status("  BUILD_SHARED_LIBS :   ${BUILD_SHARED_LIBS}")
  caffe_status("  BUILD_python      :   ${BUILD_python}")
  caffe_status("  BUILD_matlab      :   ${BUILD_matlab}")
  caffe_status("  BUILD_docs        :   ${BUILD_docs}")
  caffe_status("  CPU_ONLY          :   ${CPU_ONLY}")
  caffe_status("  USE_OPENCV        :   ${USE_OPENCV}")
  caffe_status("  USE_LEVELDB       :   ${USE_LEVELDB}")
  caffe_status("  USE_LMDB          :   ${USE_LMDB}")
  caffe_status("  USE_NCCL          :   ${USE_NCCL}")
  caffe_status("  ALLOW_LMDB_NOLOCK :   ${ALLOW_LMDB_NOLOCK}")
  caffe_status("")
  caffe_status("Dependencies:")
  caffe_status("  BLAS              : " APPLE THEN "Yes (vecLib)" ELSE "Yes (${BLAS})")
  caffe_status("  Boost             :   Yes (ver. ${Boost_MAJOR_VERSION}.${Boost_MINOR_VERSION})")
  caffe_status("  glog              :   Yes")
  caffe_status("  gflags            :   Yes")
  caffe_status("  protobuf          : " PROTOBUF_FOUND THEN "Yes (ver. ${PROTOBUF_VERSION})" ELSE "No" )
  if(USE_LMDB)
    caffe_status("  lmdb              : " LMDB_FOUND THEN "Yes (ver. ${LMDB_VERSION})" ELSE "No")
  endif()
  if(USE_LEVELDB)
    caffe_status("  LevelDB           : " LEVELDB_FOUND THEN "Yes (ver. ${LEVELDB_VERSION})" ELSE "No")
    caffe_status("  Snappy            : " SNAPPY_FOUND THEN "Yes (ver. ${Snappy_VERSION})" ELSE "No" )
  endif()
  if(USE_OPENCV)
    caffe_status("  OpenCV            :   Yes (ver. ${OpenCV_VERSION})")
  endif()
  caffe_status("  CUDA              : " HAVE_CUDA THEN "Yes (ver. ${CUDA_VERSION})" ELSE "No" )
  caffe_status("")
  if(HAVE_CUDA)
    caffe_status("NVIDIA CUDA:")
    caffe_status("  Target GPU(s)     :   ${CUDA_ARCH_NAME}" )
    caffe_status("  GPU arch(s)       :   ${NVCC_FLAGS_EXTRA_readable}")
    if(USE_CUDNN)
      caffe_status("  cuDNN             : " HAVE_CUDNN THEN "Yes (ver. ${CUDNN_VERSION})" ELSE "Not found")
    else()
      caffe_status("  cuDNN             :   Disabled")
    endif()
    caffe_status("")
  endif()
  if(HAVE_PYTHON)
    caffe_status("Python:")
    caffe_status("  Interpreter       :" PYTHON_EXECUTABLE THEN "${PYTHON_EXECUTABLE} (ver. ${PYTHON_VERSION_STRING})" ELSE "No")
    caffe_status("  Libraries         :" PYTHONLIBS_FOUND THEN "${PYTHON_LIBRARIES} (ver ${PYTHONLIBS_VERSION_STRING})" ELSE "No")
    caffe_status("  NumPy             :" NUMPY_FOUND THEN "${NUMPY_INCLUDE_DIR} (ver ${NUMPY_VERSION})" ELSE "No")
    caffe_status("")
  endif()
  if(BUILD_matlab)
    caffe_status("Matlab:")
    caffe_status("  Matlab            :" HAVE_MATLAB THEN "Yes (${Matlab_mex}, ${Matlab_mexext})" ELSE "No")
    caffe_status("  Octave            :" Octave_compiler THEN "Yes (${Octave_compiler})" ELSE "No")
    if(HAVE_MATLAB AND Octave_compiler)
      caffe_status("  Build mex using   :   ${Matlab_build_mex_using}")
    endif()
    caffe_status("")
  endif()
  if(BUILD_docs)
    caffe_status("Documentation:")
    caffe_status("  Doxygen           :" DOXYGEN_FOUND THEN "${DOXYGEN_EXECUTABLE} (${DOXYGEN_VERSION})" ELSE "No")
    caffe_status("  config_file       :   ${DOXYGEN_config_file}")

    caffe_status("")
  endif()
  caffe_status("Install:")
  caffe_status("  Install path      :   ${CMAKE_INSTALL_PREFIX}")
  caffe_status("")
endfunction()
@ -0,0 +1,174 @@
################################################################################################
# Defines global Caffe_LINK flag. This flag is required to prevent the linker from excluding
# some objects which are not addressed directly but are registered via static constructors
macro(caffe_set_caffe_link)
  if(BUILD_SHARED_LIBS)
    set(Caffe_LINK caffe)
  else()
    if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
      set(Caffe_LINK -Wl,-force_load caffe)
    elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
      set(Caffe_LINK -Wl,--whole-archive caffe -Wl,--no-whole-archive)
    endif()
  endif()
endmacro()
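# Example (a sketch): a tool linking the static caffe library would use the flag as
#
#   caffe_set_caffe_link()
#   target_link_libraries(my_tool ${Caffe_LINK})   # 'my_tool' is hypothetical
#
# With static libs, --whole-archive / -force_load keeps layers that are only
# reachable through static registration constructors from being dropped by the linker.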
################################################################################################
# Convenient command to setup source group for IDEs that support this feature (VS, XCode)
# Usage:
#   caffe_source_group(<group> GLOB[_RECURSE] <globbing_expression>)
function(caffe_source_group group)
  cmake_parse_arguments(CAFFE_SOURCE_GROUP "" "" "GLOB;GLOB_RECURSE" ${ARGN})
  if(CAFFE_SOURCE_GROUP_GLOB)
    file(GLOB srcs1 ${CAFFE_SOURCE_GROUP_GLOB})
    source_group(${group} FILES ${srcs1})
  endif()

  if(CAFFE_SOURCE_GROUP_GLOB_RECURSE)
    file(GLOB_RECURSE srcs2 ${CAFFE_SOURCE_GROUP_GLOB_RECURSE})
    source_group(${group} FILES ${srcs2})
  endif()
endfunction()

################################################################################################
# Collecting sources from globbing and appending to output list variable
# Usage:
#   caffe_collect_sources(<output_variable> GLOB[_RECURSE] <globbing_expression>)
function(caffe_collect_sources variable)
  cmake_parse_arguments(CAFFE_COLLECT_SOURCES "" "" "GLOB;GLOB_RECURSE" ${ARGN})
  if(CAFFE_COLLECT_SOURCES_GLOB)
    file(GLOB srcs1 ${CAFFE_COLLECT_SOURCES_GLOB})
    set(${variable} ${${variable}} ${srcs1})
  endif()

  if(CAFFE_COLLECT_SOURCES_GLOB_RECURSE)
    file(GLOB_RECURSE srcs2 ${CAFFE_COLLECT_SOURCES_GLOB_RECURSE})
    set(${variable} ${${variable}} ${srcs2})
  endif()
endfunction()

################################################################################################
# Short command getting caffe sources (assuming standard Caffe code tree)
# Usage:
#   caffe_pickup_caffe_sources(<root>)
function(caffe_pickup_caffe_sources root)
  # put all files in source groups (visible as subfolders in many IDEs)
  caffe_source_group("Include"        GLOB "${root}/include/caffe/*.h*")
  caffe_source_group("Include\\Util"  GLOB "${root}/include/caffe/util/*.h*")
  caffe_source_group("Include"        GLOB "${PROJECT_BINARY_DIR}/caffe_config.h*")
  caffe_source_group("Source"         GLOB "${root}/src/caffe/*.cpp")
  caffe_source_group("Source\\Util"   GLOB "${root}/src/caffe/util/*.cpp")
  caffe_source_group("Source\\Layers" GLOB "${root}/src/caffe/layers/*.cpp")
  caffe_source_group("Source\\Cuda"   GLOB "${root}/src/caffe/layers/*.cu")
  caffe_source_group("Source\\Cuda"   GLOB "${root}/src/caffe/util/*.cu")
  caffe_source_group("Source\\Proto"  GLOB "${root}/src/caffe/proto/*.proto")

  # source groups for test target
  caffe_source_group("Include"      GLOB "${root}/include/caffe/test/test_*.h*")
  caffe_source_group("Source"       GLOB "${root}/src/caffe/test/test_*.cpp")
  caffe_source_group("Source\\Cuda" GLOB "${root}/src/caffe/test/test_*.cu")

  # collect files
  file(GLOB test_hdrs ${root}/include/caffe/test/test_*.h*)
  file(GLOB test_srcs ${root}/src/caffe/test/test_*.cpp)
  file(GLOB_RECURSE hdrs ${root}/include/caffe/*.h*)
  file(GLOB_RECURSE srcs ${root}/src/caffe/*.cpp)
  list(REMOVE_ITEM hdrs ${test_hdrs})
  list(REMOVE_ITEM srcs ${test_srcs})

  # adding headers to make them visible in some IDEs (Qt, VS, Xcode)
  list(APPEND srcs ${hdrs} ${PROJECT_BINARY_DIR}/caffe_config.h)
  list(APPEND test_srcs ${test_hdrs})

  # collect cuda files
  file(GLOB test_cuda ${root}/src/caffe/test/test_*.cu)
  file(GLOB_RECURSE cuda ${root}/src/caffe/*.cu)
  list(REMOVE_ITEM cuda ${test_cuda})

  # add proto files to make them editable in IDEs too
  file(GLOB_RECURSE proto_files ${root}/src/caffe/*.proto)
  list(APPEND srcs ${proto_files})

  # convert to absolute paths
  caffe_convert_absolute_paths(srcs)
  caffe_convert_absolute_paths(cuda)
  caffe_convert_absolute_paths(test_srcs)
  caffe_convert_absolute_paths(test_cuda)

  # propagate to parent scope
  set(srcs ${srcs} PARENT_SCOPE)
  set(cuda ${cuda} PARENT_SCOPE)
  set(test_srcs ${test_srcs} PARENT_SCOPE)
  set(test_cuda ${test_cuda} PARENT_SCOPE)
endfunction()

################################################################################################
# Short command for setting default target properties
# Usage:
#   caffe_default_properties(<target>)
function(caffe_default_properties target)
  set_target_properties(${target} PROPERTIES
    DEBUG_POSTFIX ${Caffe_DEBUG_POSTFIX}
    ARCHIVE_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/lib"
    LIBRARY_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/lib"
    RUNTIME_OUTPUT_DIRECTORY "${PROJECT_BINARY_DIR}/bin")
  # make sure we build all external dependencies first
  if (DEFINED external_project_dependencies)
    add_dependencies(${target} ${external_project_dependencies})
  endif()
endfunction()

################################################################################################
# Short command for setting runtime directory for build target
# Usage:
#   caffe_set_runtime_directory(<target> <dir>)
function(caffe_set_runtime_directory target dir)
  set_target_properties(${target} PROPERTIES
    RUNTIME_OUTPUT_DIRECTORY "${dir}")
endfunction()

################################################################################################
# Short command for setting solution folder property for target
# Usage:
#   caffe_set_solution_folder(<target> <folder>)
function(caffe_set_solution_folder target folder)
  if(USE_PROJECT_FOLDERS)
    set_target_properties(${target} PROPERTIES FOLDER "${folder}")
  endif()
endfunction()

################################################################################################
# Reads lines from input file, prepends source directory to each line and writes to output file
# Usage:
#   caffe_configure_testdatafile(<testdatafile>)
function(caffe_configure_testdatafile file)
  file(STRINGS ${file} __lines)
  set(result "")
  foreach(line ${__lines})
    set(result "${result}${PROJECT_SOURCE_DIR}/${line}\n")
  endforeach()
  file(WRITE ${file}.gen.cmake ${result})
endfunction()

################################################################################################
# Filter out all files that are not included in selected list
# Usage:
#   caffe_leave_only_selected_tests(<filelist_variable> <selected_list>)
function(caffe_leave_only_selected_tests file_list)
  if(NOT ARGN)
    return() # blank list means leave all
  endif()
  string(REPLACE "," ";" __selected ${ARGN})
  list(APPEND __selected caffe_main)

  set(result "")
  foreach(f ${${file_list}})
    get_filename_component(name ${f} NAME_WE)
    string(REGEX REPLACE "^test_" "" name ${name})
    list(FIND __selected ${name} __index)
    if(NOT __index EQUAL -1)
      list(APPEND result ${f})
    endif()
  endforeach()
  set(${file_list} ${result} PARENT_SCOPE)
endfunction()
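# Example (a sketch): with a comma-separated selection such as "lrn,im2col",
#
#   caffe_leave_only_selected_tests(test_srcs lrn,im2col)
#
# test_srcs keeps only test_lrn.cpp, test_im2col.cpp and test_caffe_main.cpp
# (caffe_main is always appended so the test runner itself survives filtering).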
@ -0,0 +1,55 @@
# Config file for the Caffe package.
#
# Note:
#   Caffe and this config file depend on OpenCV,
#   so put `find_package(OpenCV)` before searching Caffe
#   via `find_package(Caffe)`. All other lib/include
#   dependencies are hard coded in this file.
#
# After successful configuration the following variables
# will be defined:
#
#   Caffe_LIBRARIES    - IMPORTED targets to link against
#                        (There is no Caffe_INCLUDE_DIRS and Caffe_DEFINITIONS
#                         because they are specified in the IMPORTED target interface.)
#
#   Caffe_HAVE_CUDA    - signals about CUDA support
#   Caffe_HAVE_CUDNN   - signals about cuDNN support


# OpenCV dependency (optional)

if(@USE_OPENCV@)
  if(NOT OpenCV_FOUND)
    set(Caffe_OpenCV_CONFIG_PATH "@OpenCV_CONFIG_PATH@")
    if(Caffe_OpenCV_CONFIG_PATH)
      get_filename_component(Caffe_OpenCV_CONFIG_PATH ${Caffe_OpenCV_CONFIG_PATH} ABSOLUTE)

      if(EXISTS ${Caffe_OpenCV_CONFIG_PATH} AND NOT TARGET opencv_core)
        message(STATUS "Caffe: using OpenCV config from ${Caffe_OpenCV_CONFIG_PATH}")
        include(${Caffe_OpenCV_CONFIG_PATH}/OpenCVConfig.cmake)
      endif()

    else()
      find_package(OpenCV REQUIRED)
    endif()
    unset(Caffe_OpenCV_CONFIG_PATH)
  endif()
endif()

# Compute paths
get_filename_component(Caffe_CMAKE_DIR "${CMAKE_CURRENT_LIST_FILE}" PATH)

# Our library dependencies
if(NOT TARGET caffe AND NOT caffe_BINARY_DIR)
  include("${Caffe_CMAKE_DIR}/CaffeTargets.cmake")
endif()

# List of IMPORTED libs created by CaffeTargets.cmake
# These targets already specify all needed definitions and include paths
set(Caffe_LIBRARIES caffe)

# Cuda support variables
set(Caffe_CPU_ONLY @CPU_ONLY@)
set(Caffe_HAVE_CUDA @HAVE_CUDA@)
set(Caffe_HAVE_CUDNN @HAVE_CUDNN@)
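# Example downstream usage (a sketch; project and target names are hypothetical):
#
#   find_package(OpenCV REQUIRED)   # must come first, see the note above
#   find_package(Caffe REQUIRED)
#   add_executable(classifier main.cpp)
#   target_link_libraries(classifier ${Caffe_LIBRARIES})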
@ -0,0 +1,11 @@
set(PACKAGE_VERSION "@Caffe_VERSION@")

# Check whether the requested PACKAGE_FIND_VERSION is compatible
if("${PACKAGE_VERSION}" VERSION_LESS "${PACKAGE_FIND_VERSION}")
  set(PACKAGE_VERSION_COMPATIBLE FALSE)
else()
  set(PACKAGE_VERSION_COMPATIBLE TRUE)
  if ("${PACKAGE_VERSION}" VERSION_EQUAL "${PACKAGE_FIND_VERSION}")
    set(PACKAGE_VERSION_EXACT TRUE)
  endif()
endif()
@ -0,0 +1,19 @@
/* Sources directory */
#define SOURCE_FOLDER "${PROJECT_SOURCE_DIR}"

/* Binaries directory */
#define BINARY_FOLDER "${PROJECT_BINARY_DIR}"

/* Test device */
#define CUDA_TEST_DEVICE ${CUDA_TEST_DEVICE}

/* Temporary (TODO: remove) */
#if 1
  #define CMAKE_SOURCE_DIR SOURCE_FOLDER "/src/"
  #define EXAMPLES_SOURCE_DIR BINARY_FOLDER "/examples/"
  #define CMAKE_EXT ".gen.cmake"
#else
  #define CMAKE_SOURCE_DIR "src/"
  #define EXAMPLES_SOURCE_DIR "examples/"
  #define CMAKE_EXT ""
#endif
@ -0,0 +1,382 @@
################################################################################################
# Command alias for debugging messages
# Usage:
#   dmsg(<message>)
function(dmsg)
  message(STATUS ${ARGN})
endfunction()

################################################################################################
# Removes duplicates from list(s)
# Usage:
#   caffe_list_unique(<list_variable> [<list_variable>] [...])
macro(caffe_list_unique)
  foreach(__lst ${ARGN})
    if(${__lst})
      list(REMOVE_DUPLICATES ${__lst})
    endif()
  endforeach()
endmacro()

################################################################################################
# Clears variables from list
# Usage:
#   caffe_clear_vars(<variables_list>)
macro(caffe_clear_vars)
  foreach(_var ${ARGN})
    unset(${_var})
  endforeach()
endmacro()

################################################################################################
# Removes duplicates from string
# Usage:
#   caffe_string_unique(<string_variable>)
function(caffe_string_unique __string)
  if(${__string})
    set(__list ${${__string}})
    separate_arguments(__list)
    list(REMOVE_DUPLICATES __list)
    foreach(__e ${__list})
      set(__str "${__str} ${__e}")
    endforeach()
    set(${__string} ${__str} PARENT_SCOPE)
  endif()
endfunction()

################################################################################################
# Prints list element per line
# Usage:
#   caffe_print_list(<list>)
function(caffe_print_list)
  foreach(e ${ARGN})
    message(STATUS ${e})
  endforeach()
endfunction()

################################################################################################
# Function merging lists of compiler flags to single string.
# Usage:
#   caffe_merge_flag_lists(out_variable <list1> [<list2>] [<list3>] ...)
function(caffe_merge_flag_lists out_var)
  set(__result "")
  foreach(__list ${ARGN})
    foreach(__flag ${${__list}})
      string(STRIP ${__flag} __flag)
      set(__result "${__result} ${__flag}")
    endforeach()
  endforeach()
  string(STRIP ${__result} __result)
  set(${out_var} ${__result} PARENT_SCOPE)
endfunction()

################################################################################################
# Converts all paths in list to absolute
# Usage:
#   caffe_convert_absolute_paths(<list_variable>)
function(caffe_convert_absolute_paths variable)
  set(__list "")
  foreach(__s ${${variable}})
    get_filename_component(__abspath ${__s} ABSOLUTE)
    list(APPEND __list ${__abspath})
  endforeach()
  set(${variable} ${__list} PARENT_SCOPE)
endfunction()

################################################################################################
# Reads set of version defines from the header file
# Usage:
#   caffe_parse_header(<file> <lines_variable> <define1> <define2> ..)
macro(caffe_parse_header FILENAME FILE_VAR)
  set(vars_regex "")
  set(__parent_scope OFF)
  set(__add_cache OFF)
  foreach(name ${ARGN})
    if("${name}" STREQUAL "PARENT_SCOPE")
      set(__parent_scope ON)
    elseif("${name}" STREQUAL "CACHE")
      set(__add_cache ON)
    elseif(vars_regex)
      set(vars_regex "${vars_regex}|${name}")
    else()
      set(vars_regex "${name}")
    endif()
  endforeach()
  if(EXISTS "${FILENAME}")
    file(STRINGS "${FILENAME}" ${FILE_VAR} REGEX "#define[ \t]+(${vars_regex})[ \t]+[0-9]+" )
  else()
    unset(${FILE_VAR})
  endif()
  foreach(name ${ARGN})
    if(NOT "${name}" STREQUAL "PARENT_SCOPE" AND NOT "${name}" STREQUAL "CACHE")
      if(${FILE_VAR})
        if(${FILE_VAR} MATCHES ".+[ \t]${name}[ \t]+([0-9]+).*")
          string(REGEX REPLACE ".+[ \t]${name}[ \t]+([0-9]+).*" "\\1" ${name} "${${FILE_VAR}}")
        else()
          set(${name} "")
        endif()
        if(__add_cache)
          set(${name} ${${name}} CACHE INTERNAL "${name} parsed from ${FILENAME}" FORCE)
        elseif(__parent_scope)
          set(${name} "${${name}}" PARENT_SCOPE)
        endif()
      else()
        unset(${name} CACHE)
      endif()
    endif()
  endforeach()
endmacro()
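# Example (a sketch, mirroring how the Snappy/Protobuf modules above use it):
#
#   caffe_parse_header(${LMDB_INCLUDE_DIR}/lmdb.h
#                      LMDB_VERSION_LINES MDB_VERSION_MAJOR MDB_VERSION_MINOR MDB_VERSION_PATCH)
#   # MDB_VERSION_MAJOR/MINOR/PATCH now hold the numeric values of those #defines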
################################################################################################
# Reads single version define from the header file and parses it
# Usage:
#   caffe_parse_header_single_define(<library_name> <file> <define_name>)
function(caffe_parse_header_single_define LIBNAME HDR_PATH VARNAME)
  set(${LIBNAME}_H "")
  if(EXISTS "${HDR_PATH}")
    file(STRINGS "${HDR_PATH}" ${LIBNAME}_H REGEX "^#define[ \t]+${VARNAME}[ \t]+\"[^\"]*\".*$" LIMIT_COUNT 1)
  endif()

  if(${LIBNAME}_H)
    string(REGEX REPLACE "^.*[ \t]${VARNAME}[ \t]+\"([0-9]+).*$" "\\1" ${LIBNAME}_VERSION_MAJOR "${${LIBNAME}_H}")
    string(REGEX REPLACE "^.*[ \t]${VARNAME}[ \t]+\"[0-9]+\\.([0-9]+).*$" "\\1" ${LIBNAME}_VERSION_MINOR "${${LIBNAME}_H}")
    string(REGEX REPLACE "^.*[ \t]${VARNAME}[ \t]+\"[0-9]+\\.[0-9]+\\.([0-9]+).*$" "\\1" ${LIBNAME}_VERSION_PATCH "${${LIBNAME}_H}")
    set(${LIBNAME}_VERSION_MAJOR ${${LIBNAME}_VERSION_MAJOR} ${ARGN} PARENT_SCOPE)
    set(${LIBNAME}_VERSION_MINOR ${${LIBNAME}_VERSION_MINOR} ${ARGN} PARENT_SCOPE)
    set(${LIBNAME}_VERSION_PATCH ${${LIBNAME}_VERSION_PATCH} ${ARGN} PARENT_SCOPE)
    set(${LIBNAME}_VERSION_STRING "${${LIBNAME}_VERSION_MAJOR}.${${LIBNAME}_VERSION_MINOR}.${${LIBNAME}_VERSION_PATCH}" PARENT_SCOPE)

    # append a TWEAK version if it exists:
    set(${LIBNAME}_VERSION_TWEAK "")
    if("${${LIBNAME}_H}" MATCHES "^.*[ \t]${VARNAME}[ \t]+\"[0-9]+\\.[0-9]+\\.[0-9]+\\.([0-9]+).*$")
      set(${LIBNAME}_VERSION_TWEAK "${CMAKE_MATCH_1}" ${ARGN} PARENT_SCOPE)
    endif()
    if(${LIBNAME}_VERSION_TWEAK)
      set(${LIBNAME}_VERSION_STRING "${${LIBNAME}_VERSION_STRING}.${${LIBNAME}_VERSION_TWEAK}" ${ARGN} PARENT_SCOPE)
    else()
      set(${LIBNAME}_VERSION_STRING "${${LIBNAME}_VERSION_STRING}" ${ARGN} PARENT_SCOPE)
    endif()
  endif()
endfunction()

########################################################################################################
# An option that the user can select. Can accept a condition to control when the option is available.
# Usage:
#   caffe_option(<option_variable> "doc string" <initial value or boolean expression> [IF <condition>])
function(caffe_option variable description value)
  set(__value ${value})
  set(__condition "")
  set(__varname "__value")
  foreach(arg ${ARGN})
    if(arg STREQUAL "IF" OR arg STREQUAL "if")
      set(__varname "__condition")
    else()
      list(APPEND ${__varname} ${arg})
    endif()
  endforeach()
  unset(__varname)
  if("${__condition}" STREQUAL "")
    set(__condition 2 GREATER 1)
  endif()

  if(${__condition})
    if("${__value}" MATCHES ";")
      if(${__value})
        option(${variable} "${description}" ON)
      else()
        option(${variable} "${description}" OFF)
      endif()
    elseif(DEFINED ${__value})
      if(${__value})
        option(${variable} "${description}" ON)
      else()
        option(${variable} "${description}" OFF)
      endif()
    else()
      option(${variable} "${description}" ${__value})
    endif()
  else()
    unset(${variable} CACHE)
  endif()
endfunction()
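# Example (a sketch of both forms):
#
#   caffe_option(CPU_ONLY  "Build Caffe without CUDA support" OFF)
#   caffe_option(USE_CUDNN "Build Caffe with cuDNN library support" ON IF NOT CPU_ONLY)
#
# When the IF condition is false the option is removed from the cache entirely,
# so it is not even offered to the user.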
################################################################################################
# Utility function for comparing two lists. Used for CMake debugging purposes
# Usage:
#   caffe_compare_lists(<list_variable> <list2_variable> [description])
function(caffe_compare_lists list1 list2 desc)
  set(__list1 ${${list1}})
  set(__list2 ${${list2}})
  list(SORT __list1)
  list(SORT __list2)
  list(LENGTH __list1 __len1)
  list(LENGTH __list2 __len2)

  if(NOT ${__len1} EQUAL ${__len2})
    message(FATAL_ERROR "Lists are not equal. ${__len1} != ${__len2}. ${desc}")
  endif()

  foreach(__i RANGE 1 ${__len1})
    math(EXPR __index "${__i} - 1")
    list(GET __list1 ${__index} __item1)
    list(GET __list2 ${__index} __item2)
    if(NOT ${__item1} STREQUAL ${__item2})
      message(FATAL_ERROR "Lists are not equal. Differ at element ${__index}. ${desc}")
    endif()
  endforeach()
endfunction()

################################################################################################
# Command for disabling warnings for different platforms (see below for gcc and VisualStudio)
# Usage:
#   caffe_warnings_disable(<CMAKE_[C|CXX]_FLAGS[_CONFIGURATION]> -Wshadow /wd4996 ...)
macro(caffe_warnings_disable)
  set(_flag_vars "")
  set(_msvc_warnings "")
  set(_gxx_warnings "")

  foreach(arg ${ARGN})
    if(arg MATCHES "^CMAKE_")
      list(APPEND _flag_vars ${arg})
    elseif(arg MATCHES "^/wd")
      list(APPEND _msvc_warnings ${arg})
    elseif(arg MATCHES "^-W")
      list(APPEND _gxx_warnings ${arg})
    endif()
  endforeach()

  if(NOT _flag_vars)
    set(_flag_vars CMAKE_C_FLAGS CMAKE_CXX_FLAGS)
  endif()

  if(MSVC AND _msvc_warnings)
    foreach(var ${_flag_vars})
      foreach(warning ${_msvc_warnings})
        set(${var} "${${var}} ${warning}")
      endforeach()
    endforeach()
  elseif((CMAKE_COMPILER_IS_GNUCXX OR CMAKE_COMPILER_IS_CLANGXX) AND _gxx_warnings)
    foreach(var ${_flag_vars})
      foreach(warning ${_gxx_warnings})
        if(NOT warning MATCHES "^-Wno-")
          string(REPLACE "${warning}" "" ${var} "${${var}}")
          string(REPLACE "-W" "-Wno-" warning "${warning}")
        endif()
        set(${var} "${${var}} ${warning}")
      endforeach()
    endforeach()
  endif()
  caffe_clear_vars(_flag_vars _msvc_warnings _gxx_warnings)
endmacro()

################################################################################################
# Helper function to get current definitions
# Usage:
#   caffe_get_current_definitions(<definitions_variable>)
function(caffe_get_current_definitions definitions_var)
  get_property(current_definitions DIRECTORY PROPERTY COMPILE_DEFINITIONS)
  set(result "")

  foreach(d ${current_definitions})
    list(APPEND result -D${d})
  endforeach()

  caffe_list_unique(result)
  set(${definitions_var} ${result} PARENT_SCOPE)
endfunction()

################################################################################################
# Helper function to get current includes/definitions
# Usage:
#   caffe_get_current_cflags(<cflagslist_variable>)
function(caffe_get_current_cflags cflags_var)
  get_property(current_includes DIRECTORY PROPERTY INCLUDE_DIRECTORIES)
  caffe_convert_absolute_paths(current_includes)
  caffe_get_current_definitions(cflags)

  foreach(i ${current_includes})
    list(APPEND cflags "-I${i}")
  endforeach()

  caffe_list_unique(cflags)
  set(${cflags_var} ${cflags} PARENT_SCOPE)
endfunction()

################################################################################################
# Helper function to parse current linker libs into link directories, libflags and osx frameworks
# Usage:
#   caffe_parse_linker_libs(<Caffe_LINKER_LIBS_var> <directories_var> <libflags_var> <frameworks_var>)
function(caffe_parse_linker_libs Caffe_LINKER_LIBS_variable folders_var flags_var frameworks_var)

  set(__unspec "")
  set(__debug "")
  set(__optimized "")
  set(__framework "")
  set(__varname "__unspec")

  # split libs into debug, optimized, unspecified and frameworks
  foreach(list_elem ${${Caffe_LINKER_LIBS_variable}})
    if(list_elem STREQUAL "debug")
      set(__varname "__debug")
    elseif(list_elem STREQUAL "optimized")
      set(__varname "__optimized")
    elseif(list_elem MATCHES "^-framework[ \t]+([^ \t].*)")
      list(APPEND __framework -framework ${CMAKE_MATCH_1})
    else()
      list(APPEND ${__varname} ${list_elem})
      set(__varname "__unspec")
    endif()
  endforeach()

  # attach debug or optimized libs to unspecified according to current configuration
  if(CMAKE_BUILD_TYPE MATCHES "Debug")
    set(__libs ${__unspec} ${__debug})
  else()
    set(__libs ${__unspec} ${__optimized})
  endif()

  set(libflags "")
  set(folders "")

  # convert linker libraries list to link flags
  foreach(lib ${__libs})
    if(TARGET ${lib})
      list(APPEND folders $<TARGET_LINKER_FILE_DIR:${lib}>)
      list(APPEND libflags -l${lib})
    elseif(lib MATCHES "^-l.*")
      list(APPEND libflags ${lib})
    elseif(IS_ABSOLUTE ${lib})
      get_filename_component(folder ${lib} PATH)
      get_filename_component(filename ${lib} NAME)
      string(REGEX REPLACE "\\.[^.]*$" "" filename_without_shortest_ext ${filename})

      string(REGEX MATCH "^lib(.*)" __match ${filename_without_shortest_ext})
      list(APPEND libflags -l${CMAKE_MATCH_1})
      list(APPEND folders ${folder})
    else()
      message(FATAL_ERROR "Logic error. Need to update cmake script")
    endif()
  endforeach()

  caffe_list_unique(libflags folders)

  set(${folders_var} ${folders} PARENT_SCOPE)
  set(${flags_var} ${libflags} PARENT_SCOPE)
  set(${frameworks_var} ${__framework} PARENT_SCOPE)
endfunction()

################################################################################################
# Helper function to detect Darwin version, i.e. 10.8, 10.9, 10.10, ...
# Usage:
#   caffe_detect_darwin_version(<version_variable>)
function(caffe_detect_darwin_version output_var)
  if(APPLE)
    execute_process(COMMAND /usr/bin/sw_vers -productVersion
                    RESULT_VARIABLE __sw_vers OUTPUT_VARIABLE __sw_vers_out
                    ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)

    set(${output_var} ${__sw_vers_out} PARENT_SCOPE)
  else()
    set(${output_var} "" PARENT_SCOPE)
  endif()
endfunction()
@ -0,0 +1,50 @@

set(CMAKE_SOURCE_DIR ..)
set(LINT_COMMAND ${CMAKE_SOURCE_DIR}/scripts/cpp_lint.py)
set(SRC_FILE_EXTENSIONS h hpp hu c cpp cu cc)
set(EXCLUDE_FILE_EXTENSIONS pb.h pb.cc)
set(LINT_DIRS include src/caffe examples tools python matlab)

cmake_policy(SET CMP0009 NEW)  # suppress cmake warning

# find all files of interest
foreach(ext ${SRC_FILE_EXTENSIONS})
  foreach(dir ${LINT_DIRS})
    file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/${dir}/*.${ext})
    set(LINT_SOURCES ${LINT_SOURCES} ${FOUND_FILES})
  endforeach()
endforeach()

# find all files that should be excluded
foreach(ext ${EXCLUDE_FILE_EXTENSIONS})
  file(GLOB_RECURSE FOUND_FILES ${CMAKE_SOURCE_DIR}/*.${ext})
  set(EXCLUDED_FILES ${EXCLUDED_FILES} ${FOUND_FILES})
endforeach()

# exclude generated pb files
list(REMOVE_ITEM LINT_SOURCES ${EXCLUDED_FILES})

execute_process(
  COMMAND ${LINT_COMMAND} ${LINT_SOURCES}
  ERROR_VARIABLE LINT_OUTPUT
  ERROR_STRIP_TRAILING_WHITESPACE
)

string(REPLACE "\n" ";" LINT_OUTPUT ${LINT_OUTPUT})

list(GET LINT_OUTPUT -1 LINT_RESULT)
list(REMOVE_AT LINT_OUTPUT -1)
string(REPLACE " " ";" LINT_RESULT ${LINT_RESULT})
list(GET LINT_RESULT -1 NUM_ERRORS)
if(NUM_ERRORS GREATER 0)
  foreach(msg ${LINT_OUTPUT})
    string(FIND ${msg} "Done" result)
    if(result LESS 0)
      message(STATUS ${msg})
    endif()
  endforeach()
  message(FATAL_ERROR "Lint found ${NUM_ERRORS} errors!")
else()
  message(STATUS "Lint did not find any errors!")
endif()
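# Example invocation (a sketch): this file is meant to run in CMake script mode,
# typically via a custom 'lint' target or directly from the build directory:
#
#   cmake -P ../cmake/lint.cmake
#
# (the relative CMAKE_SOURCE_DIR=.. above assumes a build/ subdirectory layout).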
@ -0,0 +1,47 @@
### Running an official image

You can run one of the automatic [builds](https://hub.docker.com/r/bvlc/caffe). E.g. for the CPU version:

`docker run -ti bvlc/caffe:cpu caffe --version`

or for GPU support (you need a CUDA 8.0 capable driver and
[nvidia-docker](https://github.com/NVIDIA/nvidia-docker)):

`nvidia-docker run -ti bvlc/caffe:gpu caffe --version`

You might see an error about libdc1394; it can safely be ignored.

### Docker run options

By default caffe runs as root, so any output files, e.g. snapshots, will be owned
by root. It also runs by default in a container-private folder.

You can change this with flags: user (`-u`), working directory (`-w`), and volumes (`-v`).
E.g. this behaves like the usual caffe executable:

`docker run --rm -u $(id -u):$(id -g) -v $(pwd):$(pwd) -w $(pwd) bvlc/caffe:cpu caffe train --solver=example_solver.prototxt`

Containers can also be used interactively, specifying e.g. `bash` or `ipython`
instead of `caffe`.

```
docker run -ti bvlc/caffe:cpu ipython
import caffe
...
```

The caffe build requirements are included in the container, so this can be used to
build and run custom versions of caffe. Also, `caffe/python` is in PATH, so python
utilities can be used directly, e.g. `draw_net.py`, `classify.py`, or `detect.py`.

### Building images yourself

Examples:

`docker build -t caffe:cpu cpu`

`docker build -t caffe:gpu gpu`

You can also build Caffe and run the tests in the image:

`docker run -ti caffe:cpu bash -c "cd /opt/caffe/build; make runtest"`
@ -0,0 +1,45 @@
FROM ubuntu:16.04
LABEL maintainer caffe-maint@googlegroups.com

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cmake \
        git \
        wget \
        libatlas-base-dev \
        libboost-all-dev \
        libgflags-dev \
        libgoogle-glog-dev \
        libhdf5-serial-dev \
        libleveldb-dev \
        liblmdb-dev \
        libopencv-dev \
        libprotobuf-dev \
        libsnappy-dev \
        protobuf-compiler \
        python-dev \
        python-numpy \
        python-pip \
        python-setuptools \
        python-scipy && \
    rm -rf /var/lib/apt/lists/*

ENV CAFFE_ROOT=/opt/caffe
WORKDIR $CAFFE_ROOT

# FIXME: use ARG instead of ENV once DockerHub supports this
ENV CLONE_TAG=rc4

RUN git clone -b ${CLONE_TAG} --depth 1 https://github.com/BVLC/caffe.git . && \
    pip install --upgrade pip && \
    cd python && for req in $(cat requirements.txt) pydot; do pip install $req; done && cd .. && \
    mkdir build && cd build && \
    cmake -DCPU_ONLY=1 .. && \
    make -j"$(nproc)"

ENV PYCAFFE_ROOT $CAFFE_ROOT/python
ENV PYTHONPATH $PYCAFFE_ROOT:$PYTHONPATH
ENV PATH $CAFFE_ROOT/build/tools:$PYCAFFE_ROOT:$PATH
RUN echo "$CAFFE_ROOT/build/lib" >> /etc/ld.so.conf.d/caffe.conf && ldconfig

WORKDIR /workspace
@ -0,0 +1,46 @@
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04
LABEL maintainer caffe-maint@googlegroups.com

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cmake \
        git \
        wget \
        libatlas-base-dev \
        libboost-all-dev \
        libgflags-dev \
        libgoogle-glog-dev \
        libhdf5-serial-dev \
        libleveldb-dev \
        liblmdb-dev \
        libopencv-dev \
        libprotobuf-dev \
        libsnappy-dev \
        protobuf-compiler \
        python-dev \
        python-numpy \
        python-pip \
        python-setuptools \
        python-scipy && \
    rm -rf /var/lib/apt/lists/*

ENV CAFFE_ROOT=/opt/caffe
WORKDIR $CAFFE_ROOT

# FIXME: use ARG instead of ENV once DockerHub supports this
ENV CLONE_TAG=rc4

RUN git clone -b ${CLONE_TAG} --depth 1 https://github.com/BVLC/caffe.git . && \
    pip install --upgrade pip && \
    cd python && for req in $(cat requirements.txt) pydot; do pip install $req; done && cd .. && \
    git clone https://github.com/NVIDIA/nccl.git && cd nccl && make -j install && cd .. && rm -rf nccl && \
    mkdir build && cd build && \
    cmake -DUSE_CUDNN=1 -DUSE_NCCL=1 .. && \
    make -j"$(nproc)"

ENV PYCAFFE_ROOT $CAFFE_ROOT/python
ENV PYTHONPATH $PYCAFFE_ROOT:$PYTHONPATH
ENV PATH $CAFFE_ROOT/build/tools:$PYCAFFE_ROOT:$PATH
RUN echo "$CAFFE_ROOT/build/lib" >> /etc/ld.so.conf.d/caffe.conf && ldconfig

WORKDIR /workspace
@ -0,0 +1,106 @@
# Building docs script
# Requirements:
#   sudo apt-get install doxygen texlive ruby-dev
#   sudo gem install jekyll execjs therubyracer

if(NOT BUILD_docs OR NOT DOXYGEN_FOUND)
  return()
endif()

#################################################################################################
# Gather docs from <root>/examples/**/readme.md
function(gather_readmes_as_prebuild_cmd target gathered_dir root)
  set(full_gathered_dir ${root}/${gathered_dir})

  file(GLOB_RECURSE readmes ${root}/examples/readme.md ${root}/examples/README.md)
  foreach(file ${readmes})
    # Only use file if it is to be included in docs.
    file(STRINGS ${file} file_lines REGEX "include_in_docs: true")

    if(file_lines)
      # Since everything is called readme.md, rename it by its dirname.
      file(RELATIVE_PATH file ${root} ${file})
      get_filename_component(folder ${file} PATH)
      set(new_filename ${full_gathered_dir}/${folder}.md)

      # The folder value may contain a subfolder (<subfolder>/readme.md), so create the directory first.
      get_filename_component(new_folder ${new_filename} PATH)
      add_custom_command(TARGET ${target} PRE_BUILD
                         COMMAND ${CMAKE_COMMAND} -E make_directory ${new_folder}
                         COMMAND ln -sf ${root}/${file} ${new_filename}
                         COMMENT "Creating symlink ${new_filename} -> ${root}/${file}"
                         WORKING_DIRECTORY ${root} VERBATIM)
    endif()
  endforeach()
endfunction()

################################################################################################
# Gather docs from examples/*.ipynb and add YAML front-matter.
function(gather_notebooks_as_prebuild_cmd target gathered_dir root)
  set(full_gathered_dir ${root}/${gathered_dir})

  if(NOT PYTHON_EXECUTABLE)
    message(STATUS "Python interpreter is not found. Can't include *.ipynb files in docs. Skipping...")
    return()
  endif()

  file(GLOB_RECURSE notebooks ${root}/examples/*.ipynb)
  foreach(file ${notebooks})
    file(RELATIVE_PATH file ${root} ${file})
    set(new_filename ${full_gathered_dir}/${file})

    get_filename_component(new_folder ${new_filename} PATH)
    add_custom_command(TARGET ${target} PRE_BUILD
                       COMMAND ${CMAKE_COMMAND} -E make_directory ${new_folder}
                       COMMAND ${PYTHON_EXECUTABLE} scripts/copy_notebook.py ${file} ${new_filename}
                       COMMENT "Copying notebook ${file} to ${new_filename}"
                       WORKING_DIRECTORY ${root} VERBATIM)
  endforeach()
endfunction()

################################################################################################
########################## [ Non macro part ] ##################################################

# Gathering is done at each 'make doc'
file(REMOVE_RECURSE ${PROJECT_SOURCE_DIR}/docs/gathered)

# Doxygen config file path
set(DOXYGEN_config_file ${PROJECT_SOURCE_DIR}/.Doxyfile CACHE FILEPATH "Doxygen config file")

# Adding docs target
add_custom_target(docs COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYGEN_config_file}
                       WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
                       COMMENT "Launching doxygen..." VERBATIM)

# Gathering examples into docs subfolder
gather_notebooks_as_prebuild_cmd(docs docs/gathered ${PROJECT_SOURCE_DIR})
gather_readmes_as_prebuild_cmd(docs docs/gathered ${PROJECT_SOURCE_DIR})

# Auto detect output directory
file(STRINGS ${DOXYGEN_config_file} config_line REGEX "OUTPUT_DIRECTORY[ \t]+=[^=].*")
if(config_line)
  string(REGEX MATCH "OUTPUT_DIRECTORY[ \t]+=([^=].*)" __ver_check "${config_line}")
  string(STRIP ${CMAKE_MATCH_1} output_dir)
  message(STATUS "Detected Doxygen OUTPUT_DIRECTORY: ${output_dir}")
else()
  set(output_dir ./doxygen/)
  message(STATUS "Can't find OUTPUT_DIRECTORY in doxygen config file. Trying default: ${output_dir}")
endif()

if(NOT IS_ABSOLUTE ${output_dir})
  set(output_dir ${PROJECT_SOURCE_DIR}/${output_dir})
  get_filename_component(output_dir ${output_dir} ABSOLUTE)
endif()

# creates symlink in docs subfolder to code documentation built by doxygen
add_custom_command(TARGET docs POST_BUILD VERBATIM
                   COMMAND ln -sfn "${output_dir}/html" doxygen
                   WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/docs
                   COMMENT "Creating symlink ${PROJECT_SOURCE_DIR}/docs/doxygen -> ${output_dir}/html")

# for quick launch of jekyll
add_custom_target(jekyll COMMAND jekyll serve -w -s . -d _site --port=4000
                         WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}/docs
                         COMMENT "Launching jekyll..." VERBATIM)
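# Example (a sketch, assuming BUILD_docs=ON and doxygen/jekyll installed):
#
#   cmake -DBUILD_docs=ON .. && make docs   # gather examples, then run doxygen
#   make jekyll                             # serve the site on port 4000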
@ -0,0 +1 @@
caffe.berkeleyvision.org
@ -0,0 +1,5 @@
# Caffe Documentation

To generate the documentation, run `$CAFFE_ROOT/scripts/build_docs.sh`.

To push your changes to the documentation to the gh-pages branch of your or the BVLC repo, run `$CAFFE_ROOT/scripts/deploy_docs.sh <repo_name>`.
@ -0,0 +1,7 @@
defaults:
  -
    scope:
      path: "" # an empty string here means all files in the project
    values:
      layout: "default"
@ -0,0 +1,62 @@
<!doctype html>
<html>
  <head>
    <!-- MathJax -->
    <script type="text/javascript"
      src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
    </script>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="chrome=1">
    <title>
      Caffe {% if page contains 'title' %}| {{ page.title }}{% endif %}
    </title>

    <link rel="icon" type="image/png" href="/images/caffeine-icon.png">

    <link rel="stylesheet" href="/stylesheets/reset.css">
    <link rel="stylesheet" href="/stylesheets/styles.css">
    <link rel="stylesheet" href="/stylesheets/pygment_trac.css">

    <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
    <!--[if lt IE 9]>
    <script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
    <![endif]-->
  </head>
  <body>
    <script>
      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
      })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

      ga('create', 'UA-46255508-1', 'daggerfs.com');
      ga('send', 'pageview');
    </script>
    <div class="wrapper">
      <header>
        <h1 class="header"><a href="/">Caffe</a></h1>
        <p class="header">
          Deep learning framework by the <a class="header name" href="http://bvlc.eecs.berkeley.edu/">BVLC</a>
        </p>
        <p class="header">
          Created by
          <br>
          <a class="header name" href="http://daggerfs.com/">Yangqing Jia</a>
          <br>
          Lead Developer
          <br>
          <a class="header name" href="http://imaginarynumber.net/">Evan Shelhamer</a>
        </p>
        <ul>
          <li>
            <a class="buttons github" href="https://github.com/BVLC/caffe">View On GitHub</a>
          </li>
        </ul>
      </header>
      <section>

        {{ content }}

      </section>
    </div>
  </body>
</html>
@ -0,0 +1,120 @@
|
|||
---
|
||||
title: Developing and Contributing
|
||||
---
|
||||
# Development and Contributing
|
||||
|
||||
Caffe is developed with active participation of the community.<br>
|
||||
The [BVLC](http://bvlc.eecs.berkeley.edu/) brewers welcome all contributions!
|
||||
|
||||
The exact details of contributions are recorded by versioning and cited in our [acknowledgements](http://caffe.berkeleyvision.org/#acknowledgements).
|
||||
This method is impartial and always up-to-date.
|
||||
|
||||
## License
|
||||
|
||||
Caffe is licensed under the terms in [LICENSE](https://github.com/BVLC/caffe/blob/master/LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
|
||||
|
||||
## Copyright
|
||||
|
||||
Caffe uses a shared copyright model: each contributor holds copyright over their contributions to Caffe. The project versioning records all such contribution and copyright details.
|
||||
|
||||
If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. Do not include copyright notices in files for this purpose.
|
||||
|
||||
### Documentation
|
||||
|
||||
This website, written with [Jekyll](http://jekyllrb.com/), acts as the official Caffe documentation -- simply run `scripts/build_docs.sh` and view the website at `http://0.0.0.0:4000`.
|
||||
|
||||
We prefer tutorials and examples to be documented close to where they live, in `readme.md` files.
|
||||
The `build_docs.sh` script gathers all `examples/**/readme.md` and `examples/*.ipynb` files, and makes a table of contents.
|
||||
To be included in the docs, the readme files must be annotated with [YAML front-matter](http://jekyllrb.com/docs/frontmatter/), including the flag `include_in_docs: true`.
|
||||
Similarly for IPython notebooks: simply include `"include_in_docs": true` in the `"metadata"` JSON field.

Other docs, such as installation guides, are written in the `docs` directory and manually linked to from the `index.md` page.

We strive to provide lots of usage examples, and to document all code in docstrings.
We absolutely appreciate any contribution to this effort!

### Versioning

The `master` branch receives all new development, including community contributions.
We try to keep it in a reliable state, but it is the bleeding edge, and things do get broken every now and then.
BVLC maintainers will periodically make releases by marking stable checkpoints as tags and maintenance branches. [Past releases](https://github.com/BVLC/caffe/releases) are catalogued online.

#### Issues & Pull Request Protocol

Post [Issues](https://github.com/BVLC/caffe/issues) to propose features, report [bugs], and discuss framework code.
Large-scale development work is guided by [milestones], which are sets of Issues selected for bundling as releases.

Please note that since the core developers are largely researchers, we may work on a feature in isolation for some time before releasing it to the community, so as to claim honest academic contribution.
We do release things as soon as a reasonable technical report may be written, and we still aim to inform the community of ongoing development through Github Issues.

**When you are ready to develop a feature or fix a bug, follow this protocol**:

- Develop in [feature branches] with descriptive names. Branch off of the latest `master`.
- Bring your work up-to-date by [rebasing] onto the latest `master` when done.
  (Groom your changes by [interactive rebase], if you'd like.)
- [Pull request] your contribution to `BVLC/caffe`'s `master` branch for discussion and review.
- Make PRs *as soon as development begins*, to let discussion guide development.
- A PR is only ready for merge review when it is a fast-forward merge, and all code is documented, linted, and tested -- that means your PR must include tests!
- When the PR satisfies the above properties, use comments to request maintainer review.

The following is a poetic presentation of the protocol in code form.

#### [Shelhamer's](https://github.com/shelhamer) “life of a branch in four acts”

Make the `feature` branch off of the latest `bvlc/master`

    git checkout master
    git pull upstream master
    git checkout -b feature
    # do your work, make commits

Prepare to merge by rebasing your branch on the latest `bvlc/master`

    # make sure master is fresh
    git checkout master
    git pull upstream master
    # rebase your branch on the tip of master
    git checkout feature
    git rebase master

Push your branch to pull request it into `BVLC/caffe:master`

    git push origin feature
    # ...make pull request to master...

Now make a pull request! You can do this from the command line (`git pull-request -b master`) if you install [hub](https://github.com/github/hub). Hub has many other magical uses.

The pull request of `feature` into `master` will be a clean merge. Applause.

[bugs]: https://github.com/BVLC/caffe/issues?labels=bug&page=1&state=open
[milestones]: https://github.com/BVLC/caffe/issues?milestone=1
[Pull request]: https://help.github.com/articles/using-pull-requests
[interactive rebase]: https://help.github.com/articles/interactive-rebase
[rebasing]: http://git-scm.com/book/en/Git-Branching-Rebasing
[feature branches]: https://www.atlassian.com/git/workflows#!workflow-feature-branch

**Historical note**: Caffe once relied on a two-branch `master` and `dev` workflow.
PRs from that time are still open, but they will be merged into `master` or closed.

### Testing

Run `make runtest` to check the project tests. New code requires new tests. Pull requests that fail tests will not be accepted.

The `gtest` framework we use provides many additional options, which you can access by running the test binaries directly. One of the more useful options is `--gtest_filter`, which allows you to filter tests by name:

    # run all tests with CPU in the name
    build/test/test_all.testbin --gtest_filter='*CPU*'

    # run all tests without GPU in the name (note the leading minus sign)
    build/test/test_all.testbin --gtest_filter=-'*GPU*'

To get a list of all options `googletest` provides, simply pass the `--help` flag:

    build/test/test_all.testbin --help

### Style

- **Run `make lint` to check C++ code.**
- Wrap lines at 80 chars.
- Follow [Google C++ style](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml) and [Google python style](http://google-styleguide.googlecode.com/svn/trunk/pyguide.html) + [PEP 8](http://legacy.python.org/dev/peps/pep-0008/).
- Remember that “a foolish consistency is the hobgoblin of little minds,” so use your best judgement to write the clearest code for your particular case.
@@ -0,0 +1,106 @@
---
title: Deep Learning Framework
---

# Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind.
It is developed by the Berkeley Vision and Learning Center ([BVLC](http://bvlc.eecs.berkeley.edu)) and by community contributors.
[Yangqing Jia](http://daggerfs.com) created the project during his PhD at UC Berkeley.
Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).

Check out our web image classification [demo](http://demo.caffe.berkeleyvision.org)!

## Why Caffe?

**Expressive architecture** encourages application and innovation.
Models and optimization are defined by configuration without hard-coding.
Switch between CPU and GPU by setting a single flag to train on a GPU machine, then deploy to commodity clusters or mobile devices.

**Extensible code** fosters active development.
In Caffe's first year, it was forked by over 1,000 developers and had many significant changes contributed back.
Thanks to these contributors the framework tracks the state-of-the-art in both code and models.

**Speed** makes Caffe perfect for research experiments and industry deployment.
Caffe can process **over 60M images per day** with a single NVIDIA K40 GPU\*.
That's 1 ms/image for inference and 4 ms/image for learning.
We believe that Caffe is the fastest convnet implementation available.

**Community**: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia.
Join our community of brewers on the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) and [Github](https://github.com/BVLC/caffe/).

<p class="footnote" markdown="1">
\* With the ILSVRC2012-winning [SuperVision](http://www.image-net.org/challenges/LSVRC/2012/supervision.pdf) model and caching IO.
Consult performance [details](/performance_hardware.html).
</p>

## Documentation

- [DIY Deep Learning for Vision with Caffe](https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p)<br>
  Tutorial presentation.
- [Tutorial Documentation](/tutorial)<br>
  Practical guide and framework reference.
- [arXiv / ACM MM '14 paper](http://arxiv.org/abs/1408.5093)<br>
  A 4-page report for the ACM Multimedia Open Source competition (arXiv:1408.5093v1).
- [Installation instructions](/installation.html)<br>
  Tested on Ubuntu, Red Hat, OS X.
- [Model Zoo](/model_zoo.html)<br>
  BVLC suggests a standard distribution format for Caffe models, and provides trained models.
- [Developing & Contributing](/development.html)<br>
  Guidelines for development and contributing to Caffe.
- [API Documentation](/doxygen/annotated.html)<br>
  Developer documentation automagically generated from code comments.

### Examples

{% assign examples = site.pages | where:'category','example' | sort: 'priority' %}
{% for page in examples %}
- <div><a href="{{page.url}}">{{page.title}}</a><br>{{page.description}}</div>
{% endfor %}

### Notebook Examples

{% assign notebooks = site.pages | where:'category','notebook' | sort: 'priority' %}
{% for page in notebooks %}
- <div><a href="http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/{{page.original_path}}">{{page.title}}</a><br>{{page.description}}</div>
{% endfor %}

## Citing Caffe

Please cite Caffe in your publications if it helps your research:

    @article{jia2014caffe,
      Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
      Journal = {arXiv preprint arXiv:1408.5093},
      Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
      Year = {2014}
    }

If you do publish a paper where Caffe helped your research, we encourage you to update the [publications wiki](https://github.com/BVLC/caffe/wiki/Publications).
Citations are also tracked automatically by [Google Scholar](http://scholar.google.com/scholar?oi=bibs&hl=en&cites=17333247995453974016).

## Contacting Us

Join the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users) to ask questions and discuss methods and models. This is where we talk about usage, installation, and applications.

Framework development discussions and thorough bug reports are collected on [Issues](https://github.com/BVLC/caffe/issues).

Contact [caffe-dev](mailto:caffe-dev@googlegroups.com) if you have a confidential proposal for the framework *and the ability to act on it*.
Requests for features, explanations, or personal help will be ignored; post to [caffe-users](https://groups.google.com/forum/#!forum/caffe-users) instead.

The core Caffe developers offer [consulting services](mailto:caffe-coldpress@googlegroups.com) for appropriate projects.

## Acknowledgements

The BVLC Caffe developers would like to thank NVIDIA for GPU donation, A9 and Amazon Web Services for a research grant in support of Caffe development and reproducible research in deep learning, and BVLC PI [Trevor Darrell](http://www.eecs.berkeley.edu/~trevor/) for guidance.

The BVLC members who have contributed to Caffe are (alphabetical by first name):
[Eric Tzeng](https://github.com/erictzeng), [Evan Shelhamer](http://imaginarynumber.net/), [Jeff Donahue](http://jeffdonahue.com/), [Jon Long](https://github.com/longjon), [Ross Girshick](http://www.cs.berkeley.edu/~rbg/), [Sergey Karayev](http://sergeykarayev.com/), [Sergio Guadarrama](http://www.eecs.berkeley.edu/~sguada/), and [Yangqing Jia](http://daggerfs.com/).

The open-source community plays an important and growing role in Caffe's development.
Check out the Github [project pulse](https://github.com/BVLC/caffe/pulse) for recent activity and the [contributors](https://github.com/BVLC/caffe/graphs/contributors) for the full list.

We sincerely appreciate your interest and contributions!
If you'd like to contribute, please read the [developing & contributing](development.html) guide.

Yangqing would like to give a personal thanks to the NVIDIA Academic program for providing GPUs, [Oriol Vinyals](http://www1.icsi.berkeley.edu/~vinyals/) for discussions along the journey, and BVLC PI [Trevor Darrell](http://www.eecs.berkeley.edu/~trevor/) for advice.
@@ -0,0 +1,55 @@
---
title: "Installation: Ubuntu"
---

# Ubuntu Installation

**General dependencies**

    sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
    sudo apt-get install --no-install-recommends libboost-all-dev

**CUDA**: Install by `apt-get` or the NVIDIA `.run` package.
The NVIDIA package tends to follow more recent library and driver versions, but the installation is more manual.
If installing from packages, install the library and the latest driver separately; the driver bundled with the library is usually out-of-date.
This can be skipped for CPU-only installation.
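
As a sketch of the `apt-get` route (`nvidia-cuda-toolkit` is the Ubuntu-archive metapackage; the name is an assumption that can vary across releases, and the NVIDIA `.run` installer remains the more manual alternative):

    # one possible apt-get route; verify the package name for your release
    sudo apt-get install nvidia-cuda-toolkit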

**BLAS**: install ATLAS by `sudo apt-get install libatlas-base-dev` or install OpenBLAS or MKL for better CPU performance.

**Python** (optional): if you use the default Python you will need to `sudo apt-get install` the `python-dev` package to have the Python headers for building the pycaffe interface.

**Compatibility notes, 16.04**

CUDA 8 is required on Ubuntu 16.04.

**Remaining dependencies, 14.04**

Everything is packaged in 14.04.

    sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev

**Remaining dependencies, 12.04**

These dependencies need manual installation in 12.04.

    # glog
    wget https://github.com/google/glog/archive/v0.3.3.tar.gz
    tar zxvf v0.3.3.tar.gz
    cd glog-0.3.3
    ./configure
    make && make install
    # gflags
    wget https://github.com/schuhschuh/gflags/archive/master.zip
    unzip master.zip
    cd gflags-master
    mkdir build && cd build
    export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1
    make && make install
    # lmdb
    git clone https://github.com/LMDB/lmdb
    cd lmdb/libraries/liblmdb
    make && make install

Note that glog does not compile with the most recent gflags version (2.1), so until that is resolved you will need to build glog first.

Continue with [compilation](installation.html#compilation).
@@ -0,0 +1,161 @@
---
title: "Installation: Debian"
---

# Debian Installation

Caffe packages are available for several Debian versions, as shown in the
following chart:

```
Your Distro     | CPU_ONLY | CUDA | Alias
----------------+----------+------+-------------------
Debian/stable   |    ✘     |  ✘   | Debian Jessie
Debian/testing  |    ✔     |  ✔   | Debian Stretch/Sid
Debian/unstable |    ✔     |  ✔   | Debian Sid
```

* `✘ ` You should take a look at the [Ubuntu installation instructions](install_apt.html).

* `✔ ` You can install Caffe with a single command by following this guide.

Last update: 2017-02-01

## Binary installation with APT

Apart from the installation methods based on source, Debian/unstable
and Debian/testing users can install pre-compiled Caffe packages from
the official archive.

Make sure that your `/etc/apt/sources.list` contains the `contrib` and `non-free`
sections if you want to install the CUDA version, for instance:

```
deb http://ftp2.cn.debian.org/debian sid main contrib non-free
```

Then update the APT cache and install Caffe directly. Note that the CPU version
and the CUDA version cannot coexist.

```
$ sudo apt update
$ sudo apt install [ caffe-cpu | caffe-cuda ]
$ caffe                                             # command line interface working
$ python3 -c 'import caffe; print(caffe.__path__)'  # python3 interface working
```

These Caffe packages should work for you out of the box.

#### Customizing caffe packages

Some users may need to customize the Caffe package. Full customization is beyond
the scope of this guide; here is only a brief outline of producing customized
`.deb` packages.

Make sure that there is a `deb-src` source in your `/etc/apt/sources.list`,
for instance:

```
deb http://ftp2.cn.debian.org/debian sid main contrib non-free
deb-src http://ftp2.cn.debian.org/debian sid main contrib non-free
```

Then build the caffe deb files with the following commands:

```
$ sudo apt update
$ sudo apt install build-essential debhelper devscripts  # standard package building tools
$ sudo apt build-dep [ caffe-cpu | caffe-cuda ]          # the most elegant way to pull caffe build dependencies
$ apt source [ caffe-cpu | caffe-cuda ]                  # download the source tarball and extract
$ cd caffe-XXXX
[ ... optional, customizing caffe code/build ... ]
$ dch --local "Modified XXX"                             # bump package version and write changelog
$ debuild -B -j4                                         # build caffe with 4 parallel jobs (similar to make -j4)
[ ... building ... ]
$ debc                                                   # optional, check the package contents
$ sudo debi                                              # optional, install the generated packages
$ ls ../                                                 # optional, you will see the resulting packages
```

It is a bug if the package fails to build without any change.
The changelog will be installed at e.g. `/usr/share/doc/caffe-cpu/changelog.Debian.gz`.

## Source installation

Source installation under Debian/unstable and Debian/testing is similar to that of Ubuntu, but
here is a more elegant way to pull the caffe build dependencies:

```
$ sudo apt build-dep [ caffe-cpu | caffe-cuda ]
```

Note that this requires a `deb-src` entry in your `/etc/apt/sources.list`.

#### Compiler Combinations

Some users may find that their favorite compiler doesn't work with CUDA:

```
CXX compiler | CUDA 7.5 | CUDA 8.0 |
-------------+----------+----------+-
GCC-7        |    ?     |    ?     |
GCC-6        |    ✘     |    ✘     |
GCC-5        |  ✔ [1]   |    ✔     |
CLANG-4.0    |    ?     |    ?     |
CLANG-3.9    |    ✘     |    ✘     |
CLANG-3.8    |    ?     |    ✔     |
```

`[1]` CUDA 7.5's `host_config.h` must be patched before working with GCC-5.

Also, avoid the GCC-4.X series, since its `libstdc++` ABI is not compatible with GCC-5's.
You may encounter failures linking GCC-4.X object files against GCC-5 libraries.
(See https://wiki.debian.org/GCC5 )

## Notes

* Consider re-compiling OpenBLAS locally with optimization flags for the sake of
  performance. This is highly recommended for any kind of production use, including
  academic research.

* If you are installing `caffe-cuda`, APT will automatically pull some of the
  CUDA packages and the nvidia driver packages. Please be careful if you have
  manually installed or patched the nvidia driver, the CUDA toolkit, or any other
  related components, because in this case APT may fail.

* Additionally, a manpage (`man caffe`) and a bash completion script
  (`caffe <TAB><TAB>`, `caffe train <TAB><TAB>`) are provided.
  Neither file has been merged into caffe master yet.

* The Python interface is Python 3 only: `python3-caffe-{cpu,cuda}`.
  There is no plan to support Python 2.

* If you encounter any problem related to the packaging system (e.g. a failure to install `caffe-*`),
  please report the bug to Debian via Debian's bug tracking system. See https://www.debian.org/Bugs/ .
  Patches and suggestions are also welcome.

## FAQ

* Where is caffe-cudnn?

  The cuDNN library does not currently appear to be redistributable. If you really want
  caffe-cudnn deb packages, the workaround is to install cuDNN yourself, modify the
  packaging scripts, and build your own customized package.

* I installed the CPU version. How can I switch to the CUDA version?

  Run `sudo apt install caffe-cuda`; APT's dependency resolver is smart enough to handle this.

* Where are the examples, the models, and other documentation?

  ```
  $ sudo apt install caffe-doc
  $ dpkg -L caffe-doc
  ```

* Where can I find the Debian package status?

  ```
  https://tracker.debian.org/pkg/caffe         (for the CPU_ONLY version)
  https://tracker.debian.org/pkg/caffe-contrib (for the CUDA version)
  ```
@@ -0,0 +1,128 @@
---
title: "Installation: OS X"
---

# OS X Installation

We highly recommend using the [Homebrew](http://brew.sh/) package manager.
Ideally you could start from a clean `/usr/local` to avoid conflicts.
In the following, we assume that you're using Anaconda Python and Homebrew.

**CUDA**: Install via the NVIDIA package that includes both CUDA and the bundled driver. **CUDA 7 is strongly suggested.** Older CUDA versions require `libstdc++`, while clang++ is the default compiler and `libc++` the default standard library on OS X 10.9+. This disagreement makes it necessary to change the compilation settings for each of the dependencies, which is prone to error.

**Library Path**: We find that everything compiles successfully if `$LD_LIBRARY_PATH` is not set at all, and `$DYLD_FALLBACK_LIBRARY_PATH` is set to provide CUDA, Python, and other relevant libraries (e.g. `/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib`).
With other `ENV` settings, things may not work as expected.
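
For example, a minimal sketch of such a setup, using the example paths from the paragraph above (adjust them to your own CUDA and Anaconda locations):

    unset LD_LIBRARY_PATH
    export DYLD_FALLBACK_LIBRARY_PATH=/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib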

**General dependencies**

    brew install -vd snappy leveldb gflags glog szip lmdb
    # need the homebrew science source for OpenCV and hdf5
    brew tap homebrew/science
    brew install hdf5 opencv

If using Anaconda Python, a modification to the OpenCV formula might be needed.
Do `brew edit opencv` and change the lines that look like the two lines below to exactly the two lines below.

    -DPYTHON_LIBRARY=#{py_prefix}/lib/libpython2.7.dylib
    -DPYTHON_INCLUDE_DIR=#{py_prefix}/include/python2.7

If using Anaconda Python, HDF5 is bundled and the `hdf5` formula can be skipped.

**Remaining dependencies, with / without Python**

    # with Python pycaffe needs dependencies built from source
    brew install --build-from-source --with-python -vd protobuf
    brew install --build-from-source -vd boost boost-python
    # without Python the usual installation suffices
    brew install protobuf boost

**BLAS**: already installed as the [Accelerate / vecLib Framework](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man7/Accelerate.7.html). OpenBLAS and MKL are alternatives for faster CPU computation.

**Python** (optional): Anaconda is the preferred Python.
If you decide against it, please use Homebrew.
Check that Caffe and dependencies are linking against the same, desired Python.

Continue with [compilation](installation.html#compilation).

## libstdc++ installation

This route is not for the faint of heart.
For OS X 10.10 and 10.9 you should install CUDA 7 and follow the instructions above.
If that is not an option, take a deep breath and carry on.

In OS X 10.9+, clang++ is the default C++ compiler and uses `libc++` as the standard library.
However, NVIDIA CUDA (even version 6.0) currently links only with `libstdc++`.
This makes it necessary to change the compilation settings for each of the dependencies.

We do this by modifying the Homebrew formulae before installing any packages.
Make sure that Homebrew doesn't install any software dependencies in the background; all packages must be linked to `libstdc++`.

The prerequisite Homebrew formulae are

    boost snappy leveldb protobuf gflags glog szip lmdb homebrew/science/opencv

For each of these formulae, `brew edit FORMULA`, and add the ENV definitions as shown:

      def install
          # ADD THE FOLLOWING:
          ENV.append "CXXFLAGS", "-stdlib=libstdc++"
          ENV.append "CFLAGS", "-stdlib=libstdc++"
          ENV.append "LDFLAGS", "-stdlib=libstdc++ -lstdc++"
          # The following is necessary because libtool likes to strip LDFLAGS:
          ENV["CXX"] = "/usr/bin/clang++ -stdlib=libstdc++"
          ...

To edit the formulae in turn, run

    for x in snappy leveldb protobuf gflags glog szip boost boost-python lmdb homebrew/science/opencv; do brew edit $x; done

After this, run

    for x in snappy leveldb gflags glog szip lmdb homebrew/science/opencv; do brew uninstall $x; brew install --build-from-source -vd $x; done
    brew uninstall protobuf; brew install --build-from-source --with-python -vd protobuf
    brew install --build-from-source -vd boost boost-python

If this is not done exactly right then linking errors will trouble you.

**Homebrew versioning**: note that Homebrew maintains itself as a separate git repository, and making the above `brew edit FORMULA` changes will change files in your local copy of Homebrew's master branch. By default, this will prevent you from updating Homebrew using `brew update`, as you will get an error message like the following:

    $ brew update
    error: Your local changes to the following files would be overwritten by merge:
      Library/Formula/lmdb.rb
    Please, commit your changes or stash them before you can merge.
    Aborting
    Error: Failure while executing: git pull -q origin refs/heads/master:refs/remotes/origin/master

One solution is to commit your changes to a separate Homebrew branch, run `brew update`, and rebase your changes onto the updated master. You'll have to do this both for the main Homebrew repository in `/usr/local/` and the Homebrew science repository that contains OpenCV in `/usr/local/Library/Taps/homebrew/homebrew-science`, as follows:

    cd /usr/local
    git checkout -b caffe
    git add .
    git commit -m "Update Caffe dependencies to use libstdc++"
    cd /usr/local/Library/Taps/homebrew/homebrew-science
    git checkout -b caffe
    git add .
    git commit -m "Update Caffe dependencies"

Then, whenever you want to update homebrew, switch back to the master branches, do the update, rebase the caffe branches onto master and fix any conflicts:

    # Switch back to homebrew master branches
    cd /usr/local
    git checkout master
    cd /usr/local/Library/Taps/homebrew/homebrew-science
    git checkout master

    # Update homebrew; hopefully this works without errors!
    brew update

    # Switch back to the caffe branches with the formulae that you modified earlier
    cd /usr/local
    git rebase master caffe
    # Fix any merge conflicts and commit to caffe branch
    cd /usr/local/Library/Taps/homebrew/homebrew-science
    git rebase master caffe
    # Fix any merge conflicts and commit to caffe branch

    # Done!

At this point, you should be running the latest Homebrew packages and your Caffe-related modifications will remain in place.
@@ -0,0 +1,45 @@
---
title: "Installation: RHEL / Fedora / CentOS"
---

# RHEL / Fedora / CentOS Installation

**General dependencies**

    sudo yum install protobuf-devel leveldb-devel snappy-devel opencv-devel boost-devel hdf5-devel

**Remaining dependencies, recent OS**

    sudo yum install gflags-devel glog-devel lmdb-devel

**Remaining dependencies, if not found**

    # glog
    wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/google-glog/glog-0.3.3.tar.gz
    tar zxvf glog-0.3.3.tar.gz
    cd glog-0.3.3
    ./configure
    make && make install
    # gflags
    wget https://github.com/schuhschuh/gflags/archive/master.zip
    unzip master.zip
    cd gflags-master
    mkdir build && cd build
    export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1
    make && make install
    # lmdb
    git clone https://github.com/LMDB/lmdb
    cd lmdb/libraries/liblmdb
    make && make install

Note that glog does not compile with the most recent gflags version (2.1), so until that is resolved you will need to build glog first.

**CUDA**: Install via the NVIDIA package instead of `yum` to be certain of the library and driver versions.
Install the library and the latest driver separately; the driver bundled with the library is usually out-of-date.

**BLAS**: install ATLAS by `sudo yum install atlas-devel` or install OpenBLAS or MKL for better CPU performance. For the Makefile build, uncomment and set `BLAS_LIB` accordingly, as ATLAS is usually installed under `/usr/lib[64]/atlas`.

**Python** (optional): if you use the default Python you will need to `sudo yum install` the `python-devel` package to have the Python headers for building the pycaffe wrapper.

Continue with [compilation](installation.html#compilation).
@@ -0,0 +1,146 @@
---
title: Installation
---

# Installation

Prior to installing, have a glance through this guide and take note of the details for your platform.
We install and run Caffe on Ubuntu 16.04–12.04, OS X 10.11–10.8, and through Docker and AWS.
The official Makefile and `Makefile.config` build are complemented by a [community CMake build](#cmake-build).

**Step-by-step Instructions**:

- [Docker setup](https://github.com/BVLC/caffe/tree/master/docker) *out-of-the-box brewing*
- [Ubuntu installation](install_apt.html) *the standard platform*
- [Debian installation](install_apt_debian.html) *install caffe with a single command*
- [OS X installation](install_osx.html)
- [RHEL / CentOS / Fedora installation](install_yum.html)
- [Windows](https://github.com/BVLC/caffe/tree/windows) *see the Windows branch led by Guillaume Dumont*
- [OpenCL](https://github.com/BVLC/caffe/tree/opencl) *see the OpenCL branch led by Fabian Tschopp*
- [AWS AMI](https://github.com/bitfusionio/amis/tree/master/awsmrkt-bfboost-ubuntu14-cuda75-caffe) *pre-configured for AWS*

**Overview**:

- [Prerequisites](#prerequisites)
- [Compilation](#compilation)
- [Hardware](#hardware)

When updating Caffe, it's best to `make clean` before re-compiling.

## Prerequisites

Caffe has several dependencies:

* [CUDA](https://developer.nvidia.com/cuda-zone) is required for GPU mode.
    * library version 7+ and the latest driver version are recommended, but 6.* is fine too
    * 5.5 and 5.0 are compatible but considered legacy
* [BLAS](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) via ATLAS, MKL, or OpenBLAS.
* [Boost](http://www.boost.org/) >= 1.55
* `protobuf`, `glog`, `gflags`, `hdf5`

Optional dependencies:

* [OpenCV](http://opencv.org/) >= 2.4, including 3.0
* IO libraries: `lmdb`, `leveldb` (note: leveldb requires `snappy`)
* cuDNN for GPU acceleration (v5)

Pycaffe and Matcaffe interfaces have their own natural needs.

* For Python Caffe: `Python 2.7` or `Python 3.3+`, `numpy (>= 1.7)`, boost-provided `boost.python`
* For MATLAB Caffe: MATLAB with the `mex` compiler.

**cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. The current version is cuDNN v5; older versions are supported in older Caffe.

**CPU-only Caffe**: for cold-brewed CPU-only Caffe uncomment the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Caffe without CUDA. This is helpful for cloud or cluster deployment.
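
For reference, the two switches live in `Makefile.config` and look like this once uncommented (enable whichever applies to your build; the comment text here is paraphrased):

    # cuDNN acceleration switch (uncomment to build with cuDNN):
    USE_CUDNN := 1
    # CPU-only switch (uncomment to build without GPU support):
    # CPU_ONLY := 1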

### CUDA and BLAS

Caffe requires the CUDA `nvcc` compiler to compile its GPU code and the CUDA driver for GPU operation.
To install CUDA, go to the [NVIDIA CUDA website](https://developer.nvidia.com/cuda-downloads) and follow the installation instructions there. Install the library and the latest standalone driver separately; the driver bundled with the library is usually out-of-date. **Warning!** The 331.* CUDA driver series has a critical performance issue: do not use it.

For best performance, Caffe can be accelerated by [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). Register for free at the cuDNN site, install it, then continue with these installation instructions. To compile with cuDNN, set the `USE_CUDNN := 1` flag in your `Makefile.config`.

Caffe requires BLAS as the backend of its matrix and vector computations.
There are several implementations of this library. The choice is yours (see the sketch after this list):

* [ATLAS](http://math-atlas.sourceforge.net/): free, open source, and so the default for Caffe.
* [Intel MKL](http://software.intel.com/en-us/intel-mkl): commercial and optimized for Intel CPUs, with [free](https://registrationcenter.intel.com/en/forms/?productid=2558) licenses.
    1. Install MKL.
    2. Set up the MKL environment (details: [Linux](https://software.intel.com/en-us/node/528499), [OS X](https://software.intel.com/en-us/node/528659)). Example: `source /opt/intel/mkl/bin/mklvars.sh intel64`
    3. Set `BLAS := mkl` in `Makefile.config`
* [OpenBLAS](http://www.openblas.net/): free and open source; this optimized and parallel BLAS could require more effort to install, although it might offer a speedup.
    1. Install OpenBLAS
    2. Set `BLAS := open` in `Makefile.config`
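
Accordingly, the BLAS selection in `Makefile.config` is a single line (`atlas` is the default; the alternatives follow the steps above):

    BLAS := atlas   # or: mkl, open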

### Python and/or MATLAB Caffe (optional)

#### Python

The main requirements are `numpy` and `boost.python` (provided by boost). `pandas` is useful too and needed for some examples.

You can install the dependencies with

    for req in $(cat requirements.txt); do pip install $req; done

but we suggest first installing the [Anaconda](https://store.continuum.io/cshop/anaconda/) Python distribution, which provides most of the necessary packages, as well as the `hdf5` library dependency.

To import the `caffe` Python module after completing the installation, add the module directory to your `$PYTHONPATH` by `export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH` or the like. You should not import the module in the `caffe/python/caffe` directory!
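
For example, a minimal check (assuming Caffe was cloned to `/path/to/caffe` and `make pycaffe` has been run):

    export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH
    python -c "import caffe; print(caffe.__file__)"   # should print a path under /path/to/caffe/python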

*Caffe's Python interface works with Python 2.7. Python 3.3+ should work out of the box without protobuf support. For protobuf support please install protobuf 3.0 alpha (https://developers.google.com/protocol-buffers/). Earlier Pythons are your own adventure.*

#### MATLAB

Install MATLAB, and make sure that its `mex` is in your `$PATH`.

*Caffe's MATLAB interface works with versions 2015a, 2014a/b, 2013a/b, and 2012b.*

## Compilation

Caffe can be compiled with either Make or CMake. Make is officially supported while CMake is supported by the community.

### Compilation with Make

Configure the build by copying and modifying the example `Makefile.config` for your setup. The defaults should work, but uncomment the relevant lines if using Anaconda Python.

    cp Makefile.config.example Makefile.config
    # Adjust Makefile.config (for example, if using Anaconda Python, or if cuDNN is desired)
    make all
    make test
    make runtest

- For CPU & GPU accelerated Caffe, no changes are needed.
- For cuDNN acceleration using NVIDIA's proprietary cuDNN software, uncomment the `USE_CUDNN := 1` switch in `Makefile.config`. cuDNN is sometimes but not always faster than Caffe's GPU acceleration.
- For CPU-only Caffe, uncomment `CPU_ONLY := 1` in `Makefile.config`.

To compile the Python and MATLAB wrappers do `make pycaffe` and `make matcaffe` respectively.
Be sure to set your MATLAB and Python paths in `Makefile.config` first!

**Distribution**: run `make distribute` to create a `distribute` directory with all the Caffe headers, compiled libraries, binaries, etc. needed for distribution to other machines.

**Speed**: for a faster build, compile in parallel by doing `make all -j8` where 8 is the number of parallel threads for compilation (a good choice for the number of threads is the number of cores in your machine).

Now that you have installed Caffe, check out the [MNIST tutorial](gathered/examples/mnist.html) and the [reference ImageNet model tutorial](gathered/examples/imagenet.html).

### CMake Build

In lieu of manually editing `Makefile.config` to configure the build, Caffe offers an unofficial CMake build thanks to @Nerei, @akosiorek, and other members of the community. It requires CMake version >= 2.8.7.
The basic steps are as follows:

    mkdir build
    cd build
    cmake ..
    make all
    make install
    make runtest

See [PR #1667](https://github.com/BVLC/caffe/pull/1667) for options and details.

## Hardware

**Laboratory Tested Hardware**: Berkeley Vision runs Caffe with Titan Xs, K80s, GTX 980s, K40s, K20s, Titans, and GTX 770s including models at ImageNet/ILSVRC scale. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. All reported hardware issues thus far have been due to GPU configuration, overheating, and the like.

**CUDA compute capability**: devices with compute capability <= 2.0 may have to reduce CUDA thread numbers and batch sizes due to hardware constraints. Brew with caution; we recommend compute capability >= 3.0.

Once installed, check your times against our [reference performance numbers](performance_hardware.html) to make sure everything is configured properly.

Ask hardware questions on the [caffe-users group](https://groups.google.com/forum/#!forum/caffe-users).
@@ -0,0 +1,70 @@
---
title: Model Zoo
---
# Caffe Model Zoo

Lots of researchers and engineers have made Caffe models for different tasks with all kinds of architectures and data.
These models are learned and applied for problems ranging from simple regression, to large-scale visual classification, to Siamese networks for image similarity, to speech and robotics applications.

To help share these models, we introduce the model zoo framework:

- A standard format for packaging Caffe model info.
- Tools to upload/download model info to/from Github Gists, and to download trained `.caffemodel` binaries.
- A central wiki page for sharing model info Gists.

## Where to get trained models

First of all, we bundle BVLC-trained models for unrestricted, out-of-the-box use.
<br>
See the [BVLC model license](#bvlc-model-license) for details.
Each one of these can be downloaded by running `scripts/download_model_binary.py <dirname>` where `<dirname>` is specified below:

- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version as described in [ImageNet classification with deep convolutional neural networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Jeff Donahue @jeffdonahue)
- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in [ImageNet classification with deep convolutional neural networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Evan Shelhamer @shelhamer)
- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn) as described by Girshick et al. in CVPR 2014. (Trained by Ross Girshick @rbgirshick)
- **BVLC GoogLeNet** in `models/bvlc_googlenet`: GoogLeNet trained on ILSVRC 2012, almost exactly as described in [Going Deeper with Convolutions](http://arxiv.org/abs/1409.4842) by Szegedy et al. in ILSVRC 2014. (Trained by Sergio Guadarrama @sguada)

**Community models** made by Caffe users are posted to a publicly editable [wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).
These models are subject to conditions of their respective authors such as citation and license.
Thank you for sharing your models!

## Model info format

A Caffe model is distributed as a directory containing (see the sketch after this list):

- Solver/model prototxt(s)
- `readme.md` containing
    - YAML frontmatter
        - Caffe version used to train this model (tagged release or commit hash).
        - [optional] file URL and SHA1 of the trained `.caffemodel`.
        - [optional] github gist id.
    - Information about what data the model was trained on, modeling choices, etc.
    - License information.
- [optional] Other helpful scripts.
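
As an illustration, the front-matter of a hypothetical model `readme.md` might look like the following (the field names mirror the items above; the model name, URL, and placeholder values are made up):

    ---
    name: My Snazzy Model
    caffemodel: my_snazzy_model.caffemodel
    caffemodel_url: http://example.com/my_snazzy_model.caffemodel
    sha1: <SHA1 of the .caffemodel>
    gist_id: <Github Gist id, optional>
    caffe_commit: <tagged release or commit hash>
    license: unrestricted
    ---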

### Hosting model info

Github Gist is a good format for model info distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.

`scripts/upload_model_to_gist.sh <dirname>` uploads non-binary files in the model directory as a Github Gist and prints the Gist ID. If `gist_id` is already part of the `<dirname>/readme.md` frontmatter, then it updates the existing Gist.

Try doing `scripts/upload_model_to_gist.sh models/bvlc_alexnet` to test the uploading (don't forget to delete the uploaded gist afterward).

Downloading model info is done just as easily with `scripts/download_model_from_gist.sh <gist_id> <dirname>`.
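
Putting the two scripts together, a hypothetical round trip looks like this (`models/my_model` and the gist id are placeholders):

    # upload the non-binary model info and note the printed Gist ID
    scripts/upload_model_to_gist.sh models/my_model
    # later, fetch the same info on another machine
    scripts/download_model_from_gist.sh <gist_id> models/my_model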

### Hosting trained models

It is up to the user where to host the `.caffemodel` file.
We host our BVLC-provided models on our own server.
Dropbox also works fine (tip: make sure that `?dl=1` is appended to the end of the URL).

`scripts/download_model_binary.py <dirname>` downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms the SHA1.

## BVLC model license

The Caffe models bundled by the BVLC are released for unrestricted use.

These models are trained on data from the [ImageNet project](http://www.image-net.org/), and training data includes internet photos that may be subject to copyright.

Our present understanding as researchers is that there is no restriction placed on the open release of these learned model weights, since none of the original images are distributed in whole or in part.
To the extent that the interpretation arises that weights are derivative works of the original copyright holder and they assert such a copyright, UC Berkeley makes no representations as to what use is allowed other than to consider our present release in the spirit of fair use in the academic mission of the university to disseminate knowledge and tools as broadly as possible without restriction.
@@ -0,0 +1,26 @@
---
title: Multi-GPU Usage, Hardware Configuration Assumptions, and Performance
---

# Multi-GPU Usage

Currently multi-GPU operation is only supported via the C/C++ paths and only for training.

The GPUs to be used for training can be set with the `--gpu` flag on the command line of the `caffe` tool; e.g. `build/tools/caffe train --solver=models/bvlc_alexnet/solver.prototxt --gpu=0,1` will train on GPUs 0 and 1.
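
As a runnable block (the first command is the one from the sentence above; the `all` value for `--gpu` is, to our knowledge, also accepted to select every visible device, but treat that as an assumption and check for your version):

    # train on GPUs 0 and 1
    build/tools/caffe train --solver=models/bvlc_alexnet/solver.prototxt --gpu=0,1
    # select every visible device (assumed flag value; verify with `caffe train --help`)
    build/tools/caffe train --solver=models/bvlc_alexnet/solver.prototxt --gpu=all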

**NOTE**: each GPU runs the batch size specified in your `train_val.prototxt`, so if you go from 1 GPU to 2 GPUs your effective batch size doubles. E.g. if your `train_val.prototxt` specifies a batch size of 256 and you run on 2 GPUs, your effective batch size is now 512. You therefore need to adjust the batch size when running on multiple GPUs and/or adjust your solver parameters, specifically the learning rate.

# Hardware Configuration Assumptions

The current implementation uses a tree reduction strategy. E.g. if there are 4 GPUs in the system, 0:1 and 2:3 will exchange gradients, then 0:2 (the top of the tree) will exchange gradients; 0 will calculate the updated model, then broadcast it 0->2, and then 0->1 and 2->3.

For best performance, P2P DMA access between devices is needed. Without P2P access, for example when crossing a PCIe root complex, data is copied through the host and the effective exchange bandwidth is greatly reduced.

The current implementation has a "soft" assumption that the devices being used are homogeneous. In practice, any devices of the same general class should work together, but performance and total size are limited by the smallest device being used. E.g. if you combine a TitanX and a GTX 980, performance will be limited by the 980. Mixing vastly different levels of boards, e.g. Kepler and Fermi, is not supported.

`nvidia-smi topo -m` will show you the connectivity matrix. You can do P2P through PCIe bridges, but not across socket-level links at this time, e.g. across CPU sockets on a multi-socket motherboard.

# Scaling Performance

Performance is **heavily** dependent on the PCIe topology of the system, the configuration of the neural network you are training, and the speed of each of the layers. Systems like the DIGITS DevBox have an optimized PCIe topology (X99-E WS chipset). In general, scaling on 2 GPUs tends to be ~1.8x on average for networks like AlexNet, CaffeNet, VGG, and GoogleNet. Scaling begins to fall off with 4 GPUs. Generally with "weak scaling", where the batch size increases with the number of GPUs, you will see about 3.5x scaling. With "strong scaling", the system can become communication bound, especially with layer performance optimizations like those in [cuDNNv3](http://nvidia.com/cudnn), and you will likely see closer to mid-2.x scaling in performance. Networks that have heavy computation compared to the number of parameters tend to have the best scaling performance.
@@ -0,0 +1,73 @@
---
title: Performance and Hardware Configuration
---

# Performance and Hardware Configuration

To measure performance on different NVIDIA GPUs we use CaffeNet, the Caffe reference ImageNet model.

For training, each time point is 20 iterations/minibatches of 256 images for 5,120 images total. For testing, a 50,000 image validation set is classified.

**Acknowledgements**: BVLC members are very grateful to NVIDIA for providing several GPUs to conduct this research.

## NVIDIA K40

Performance is best with ECC off and boost clock enabled. While ECC makes a negligible difference in speed, disabling it frees ~1 GB of GPU memory.

Best settings with ECC off and maximum clock speed in standard Caffe:

* Training is 26.5 secs / 20 iterations (5,120 images)
* Testing is 100 secs / validation set (50,000 images)

Best settings with Caffe + [cuDNN acceleration](http://nvidia.com/cudnn):

* Training is 19.2 secs / 20 iterations (5,120 images)
* Testing is 60.7 secs / validation set (50,000 images)

Other settings:

* ECC on, max speed: training 26.7 secs / 20 iterations, test 101 secs / validation set
* ECC on, default speed: training 31 secs / 20 iterations, test 117 secs / validation set
* ECC off, default speed: training 31 secs / 20 iterations, test 118 secs / validation set

### K40 configuration tips

For maximum K40 performance, turn off ECC and boost the clock speed (at your own risk).

To turn off ECC, do

    sudo nvidia-smi -i 0 --ecc-config=0   # repeat with -i x for each GPU ID

then reboot.

Set the "persistence" mode of the GPU settings by

    sudo nvidia-smi -pm 1

and then set the clock speed with

    sudo nvidia-smi -i 0 -ac 3004,875   # repeat with -i x for each GPU ID

but note that this configuration resets across driver reloading / rebooting. Include these commands in a boot script to initialize these settings. For a simple fix, add these commands to `/etc/rc.local` (on Ubuntu).
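
Putting the pieces together, a boot-script version of the above might look like this (a sketch for `/etc/rc.local`, assuming two GPUs with IDs 0 and 1; ECC is omitted since it persists across reboots once configured):

    # persistence mode and max clocks for GPUs 0 and 1
    nvidia-smi -pm 1
    nvidia-smi -i 0 -ac 3004,875
    nvidia-smi -i 1 -ac 3004,875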

## NVIDIA Titan

Training: 26.26 secs / 20 iterations (5,120 images).
Testing: 100 secs / validation set (50,000 images).

cuDNN Training: 20.25 secs / 20 iterations (5,120 images).
cuDNN Testing: 66.3 secs / validation set (50,000 images).

## NVIDIA K20

Training: 36.0 secs / 20 iterations (5,120 images).
Testing: 133 secs / validation set (50,000 images).

## NVIDIA GTX 770

Training: 33.0 secs / 20 iterations (5,120 images).
Testing: 129 secs / validation set (50,000 images).

cuDNN Training: 24.3 secs / 20 iterations (5,120 images).
cuDNN Testing: 104 secs / validation set (50,000 images).
@@ -0,0 +1,69 @@
.highlight { background: #ffffff; }
.highlight .c { color: #999988; font-style: italic } /* Comment */
.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
.highlight .k { font-weight: bold } /* Keyword */
.highlight .o { font-weight: bold } /* Operator */
.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */
.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */
.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */
.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
.highlight .ge { font-style: italic } /* Generic.Emph */
.highlight .gr { color: #aa0000 } /* Generic.Error */
.highlight .gh { color: #999999 } /* Generic.Heading */
.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
.highlight .go { color: #888888 } /* Generic.Output */
.highlight .gp { color: #555555 } /* Generic.Prompt */
.highlight .gs { font-weight: bold } /* Generic.Strong */
.highlight .gu { color: #800080; font-weight: bold; } /* Generic.Subheading */
.highlight .gt { color: #aa0000 } /* Generic.Traceback */
.highlight .kc { font-weight: bold } /* Keyword.Constant */
.highlight .kd { font-weight: bold } /* Keyword.Declaration */
.highlight .kn { font-weight: bold } /* Keyword.Namespace */
.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
.highlight .kr { font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
.highlight .m { color: #009999 } /* Literal.Number */
.highlight .s { color: #d14 } /* Literal.String */
.highlight .na { color: #008080 } /* Name.Attribute */
.highlight .nb { color: #0086B3 } /* Name.Builtin */
.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
.highlight .no { color: #008080 } /* Name.Constant */
.highlight .ni { color: #800080 } /* Name.Entity */
.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */
.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */
.highlight .nn { color: #555555 } /* Name.Namespace */
.highlight .nt { color: #000080 } /* Name.Tag */
.highlight .nv { color: #008080 } /* Name.Variable */
.highlight .ow { font-weight: bold } /* Operator.Word */
.highlight .w { color: #bbbbbb } /* Text.Whitespace */
.highlight .mf { color: #009999 } /* Literal.Number.Float */
.highlight .mh { color: #009999 } /* Literal.Number.Hex */
.highlight .mi { color: #009999 } /* Literal.Number.Integer */
.highlight .mo { color: #009999 } /* Literal.Number.Oct */
.highlight .sb { color: #d14 } /* Literal.String.Backtick */
.highlight .sc { color: #d14 } /* Literal.String.Char */
.highlight .sd { color: #d14 } /* Literal.String.Doc */
.highlight .s2 { color: #d14 } /* Literal.String.Double */
.highlight .se { color: #d14 } /* Literal.String.Escape */
.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
.highlight .si { color: #d14 } /* Literal.String.Interpol */
.highlight .sx { color: #d14 } /* Literal.String.Other */
.highlight .sr { color: #009926 } /* Literal.String.Regex */
.highlight .s1 { color: #d14 } /* Literal.String.Single */
.highlight .ss { color: #990073 } /* Literal.String.Symbol */
.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
.highlight .vc { color: #008080 } /* Name.Variable.Class */
.highlight .vg { color: #008080 } /* Name.Variable.Global */
.highlight .vi { color: #008080 } /* Name.Variable.Instance */
.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */

.type-csharp .highlight .k { color: #0000FF }
.type-csharp .highlight .kt { color: #0000FF }
.type-csharp .highlight .nf { color: #000000; font-weight: normal }
.type-csharp .highlight .nc { color: #2B91AF }
.type-csharp .highlight .nn { color: #000000 }
.type-csharp .highlight .s { color: #A31515 }
.type-csharp .highlight .sc { color: #A31515 }
@@ -0,0 +1,21 @@
/* MeyerWeb Reset */

html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td,
article, aside, canvas, details, embed,
figure, figcaption, footer, header, hgroup,
menu, nav, output, ruby, section, summary,
time, mark, audio, video {
  margin: 0;
  padding: 0;
  border: 0;
  font: inherit;
  vertical-align: baseline;
}
@@ -0,0 +1,348 @@
@import url(http://fonts.googleapis.com/css?family=PT+Serif|Open+Sans:600,400);

body {
  padding:10px 50px 0 0;
  font-family: 'Open Sans', sans-serif;
  font-size: 14px;
  color: #232323;
  background-color: #FBFAF7;
  margin: 0;
  line-height: 1.5rem;
  -webkit-font-smoothing: antialiased;
}

h1, h2, h3, h4, h5, h6 {
  color:#232323;
  margin:36px 0 10px;
}

p, ul, ol, table, dl {
  margin:0 0 22px;
}

h1, h2, h3 {
  font-family: 'PT Serif', serif;
  line-height:1.3;
  font-weight: normal;
  display: block;
  border-bottom: 1px solid #ccc;
  padding-bottom: 5px;
}

h1 {
  font-size: 30px;
}

h2 {
  font-size: 24px;
}

h3 {
  font-size: 18px;
}

h4, h5, h6 {
  font-family: 'PT Serif', serif;
  font-weight: 700;
}

a {
  color:#C30000;
  text-decoration:none;
}

a:hover {
  text-decoration: underline;
}

a small {
  font-size: 12px;
}

em {
  font-style: italic;
}

strong {
  font-weight:700;
}

ul {
  padding-left: 25px;
}

ol {
  list-style: decimal;
  padding-left: 20px;
}

blockquote {
  margin: 0;
  padding: 0 0 0 20px;
  font-style: italic;
}

dl, dt, dd, dl p {
  color: #444;
}

dl dt {
  font-weight: bold;
}

dl dd {
  padding-left: 20px;
  font-style: italic;
}

dl p {
  padding-left: 20px;
  font-style: italic;
}

hr {
  border:0;
  background:#ccc;
  height:1px;
  margin:0 0 24px;
}

/* Images */

img {
  position: relative;
  margin: 0 auto;
  max-width: 650px;
  padding: 5px;
  margin: 10px 0 32px 0;
  border: 1px solid #ccc;
}

p img {
  display: inline;
  margin: 0;
  padding: 0;
  vertical-align: middle;
  text-align: center;
  border: none;
}

/* Code blocks */

code, pre {
  font-family: monospace;
  color:#000;
  font-size:12px;
  line-height: 14px;
}

pre {
  padding: 6px 12px;
  background: #FDFEFB;
  border-radius:4px;
  border:1px solid #D7D8C8;
  overflow: auto;
  white-space: pre-wrap;
  margin-bottom: 16px;
}

/* Tables */

table {
  width:100%;
}

table {
  border: 1px solid #ccc;
  margin-bottom: 32px;
  text-align: left;
}

th {
  font-family: 'Open Sans', sans-serif;
  font-size: 18px;
  font-weight: normal;
  padding: 10px;
  background: #232323;
  color: #FDFEFB;
}

td {
  padding: 10px;
  background: #ccc;
}

/* Wrapper */
.wrapper {
  width:960px;
}

/* Header */

header {
  width:170px;
  float:left;
  position:fixed;
  padding: 12px 25px 22px 50px;
  margin: 24px 25px 0 0;
}

p.header {
  font-size: 14px;
}

h1.header {
  font-size: 30px;
  font-weight: 300;
  line-height: 1.3em;
  margin-top: 0;
}

a.name {
  white-space: nowrap;
}

header ul {
  list-style:none;
  padding:0;
}

header li {
  list-style-type: none;
  width:132px;
  height:15px;
  margin-bottom: 12px;
  line-height: 1em;
  padding: 6px 6px 6px 7px;
  background: #c30000;
  border-radius:4px;
  border:1px solid #555;
}

header li:hover {
  background: #dd0000;
}

a.buttons {
  color: #fff;
  text-decoration: none;
  font-weight: normal;
  padding: 2px 2px 2px 22px;
  height: 30px;
}

a.github {
  background: url(/images/GitHub-Mark-64px.png) no-repeat center left;
  background-size: 15%;
}

/* Section - for main page content */

section {
  width:650px;
  float:right;
  padding-bottom:50px;
}

p.footnote {
  font-size: 12px;
}

/* Footer */

footer {
  width:170px;
  float:left;
  position:fixed;
  bottom:10px;
  padding-left: 50px;
}

@media print, screen and (max-width: 960px) {

  div.wrapper {
    width:auto;
    margin:0;
  }

  header, section, footer {
    float:none;
    position:static;
    width:auto;
  }

  footer {
    border-top: 1px solid #ccc;
    margin:0 84px 0 50px;
    padding:0;
  }

  header {
    padding-right:320px;
  }

  section {
    padding:20px 84px 20px 50px;
    margin:0 0 20px;
  }

  header a small {
    display:inline;
  }

  header ul {
    position:absolute;
    right:130px;
    top:84px;
  }
}

@media print, screen and (max-width: 720px) {
  body {
    word-wrap:break-word;
  }

  header {
    padding:10px 20px 0;
|
||||
margin-right: 0;
|
||||
}
|
||||
|
||||
section {
|
||||
padding:10px 0 10px 20px;
|
||||
margin:0 0 30px;
|
||||
}
|
||||
|
||||
footer {
|
||||
margin: 0 0 0 30px;
|
||||
}
|
||||
|
||||
header ul, header p.view {
|
||||
position:static;
|
||||
}
|
||||
}
|
||||
|
||||
@media print, screen and (max-width: 480px) {
|
||||
|
||||
header ul li.download {
|
||||
display:none;
|
||||
}
|
||||
|
||||
footer {
|
||||
margin: 0 0 0 20px;
|
||||
}
|
||||
|
||||
footer a{
|
||||
display:block;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@media print {
|
||||
body {
|
||||
padding:0.4in;
|
||||
font-size:12pt;
|
||||
color:#444;
|
||||
}
|
||||
}
|
|

@@ -0,0 +1,13 @@
---
title: Convolution
---
# Caffeinated Convolution

The Caffe strategy for convolution is to reduce the problem to matrix-matrix multiplication.
This linear algebra computation is highly tuned in BLAS libraries and efficiently computed on GPU devices.

For more details read Yangqing's [Convolution in Caffe: a memo](https://github.com/Yangqing/caffe/wiki/Convolution-in-Caffe:-a-memo).

As it turns out, this same reduction was independently explored in the context of convolutional networks by

> K. Chellapilla, S. Puri, P. Simard, et al. High performance convolutional neural networks for document processing. In Tenth International Workshop on Frontiers in Handwriting Recognition, 2006.
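
To make the reduction concrete, here is a minimal NumPy sketch of im2col followed by a single GEMM. It is an illustration under simplifying assumptions (one input image, stride 1, no padding), not Caffe's actual implementation.

    import numpy as np

    def im2col(x, kh, kw):
        """Unroll a (c, h, w) input into a (c*kh*kw, out_h*out_w) matrix."""
        c, h, w = x.shape
        out_h, out_w = h - kh + 1, w - kw + 1
        cols = np.empty((c * kh * kw, out_h * out_w), dtype=x.dtype)
        for i in range(out_h):
            for j in range(out_w):
                cols[:, i * out_w + j] = x[:, i:i + kh, j:j + kw].ravel()
        return cols

    x = np.random.rand(3, 8, 8).astype(np.float32)      # 3-channel 8x8 input
    f = np.random.rand(16, 3, 5, 5).astype(np.float32)  # 16 filters, 3x5x5 each

    cols = im2col(x, 5, 5)              # (75, 16) patch matrix
    out = f.reshape(16, -1).dot(cols)   # the whole convolution is one GEMM
    out = out.reshape(16, 4, 4)         # 16 feature maps of size 4x4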

@@ -0,0 +1,78 @@
---
title: Data
---
# Data: Ins and Outs

Data flows through Caffe as [Blobs](net_layer_blob.html#blob-storage-and-communication).
Data layers load input and save output by converting blobs to and from other formats.
Common transformations like mean subtraction and feature scaling are done by data layer configuration.
New input types are supported by developing a new data layer -- the rest of the Net follows from the modularity of the Caffe layer catalogue.

This data layer definition

    layer {
      name: "mnist"
      # Data layer loads leveldb or lmdb storage DBs for high-throughput.
      type: "Data"
      # the 1st top is the data itself: the name is only convention
      top: "data"
      # the 2nd top is the ground truth: the name is only convention
      top: "label"
      # the Data layer configuration
      data_param {
        # path to the DB
        source: "examples/mnist/mnist_train_lmdb"
        # type of DB: LEVELDB or LMDB (LMDB supports concurrent reads)
        backend: LMDB
        # batch processing improves efficiency.
        batch_size: 64
      }
      # common data transformations
      transform_param {
        # feature scaling coefficient: 0.00390625 = 1/256 maps the
        # [0, 255] MNIST data into [0, 1]
        scale: 0.00390625
      }
    }

loads the MNIST digits.

**Tops and Bottoms**: A data layer makes **top** blobs to output data to the model.
It does not have **bottom** blobs since it takes no input.

**Data and Label**: a data layer has at least one top canonically named **data**.
For ground truth a second top can be defined that is canonically named **label**.
Both tops simply produce blobs and there is nothing inherently special about these names.
The (data, label) pairing is a convenience for classification models.

**Transformations**: data preprocessing is parametrized by transformation messages within the data layer definition.

    layer {
      name: "data"
      type: "Data"
      [...]
      transform_param {
        scale: 0.1
        mean_file: "mean.binaryproto"
        # for images in particular horizontal mirroring and random cropping
        # can be done as simple data augmentations.
        mirror: 1  # 1 = on, 0 = off
        # crop a `crop_size` x `crop_size` patch:
        # - at random during training
        # - from the center during testing
        crop_size: 227
      }
    }
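
What the transformer does can be sketched in NumPy. This is an illustration of the behavior described by the comments above, not Caffe's DataTransformer code.

    import numpy as np

    def transform(im, mean, scale=0.1, crop_size=227, train=True):
        """im and mean are (channels, height, width) arrays, as Caffe stores them."""
        im = (im - mean) * scale                  # mean subtraction, then scaling
        c, h, w = im.shape
        if train:                                 # random crop and random mirror
            y = np.random.randint(h - crop_size + 1)
            x = np.random.randint(w - crop_size + 1)
            if np.random.rand() < 0.5:
                im = im[:, :, ::-1]
        else:                                     # center crop, no mirror
            y, x = (h - crop_size) // 2, (w - crop_size) // 2
        return im[:, y:y + crop_size, x:x + crop_size]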

**Prefetching**: for throughput, data layers fetch the next batch of data and prepare it in the background while the Net computes the current batch.

**Multiple Inputs**: a Net can have multiple inputs of any number and type. Define as many data layers as needed, giving each a unique name and top. Multiple inputs are useful for non-trivial ground truth: one data layer loads the actual data and the other loads the ground truth in lock-step. In this arrangement both data and label can be any 4D array. Further applications of multiple inputs are found in multi-modal and sequence models. In these cases you may need to implement your own data preparation routines or a special data layer.

*Improvements to data processing to add formats, generality, or helper utilities are welcome!*

## Formats

Refer to the layer catalogue of [data layers](layers.html#data-layers) for close-ups on each type of data Caffe understands.

## Deployment Input

For on-the-fly computation, deployment Nets define their inputs by `input` fields: these Nets then accept direct assignment of data for online or interactive computation.

@@ -0,0 +1,37 @@
---
title: Forward and Backward for Inference and Learning
---
# Forward and Backward

The forward and backward passes are the essential computations of a [Net](net_layer_blob.html).

<img src="fig/forward_backward.png" alt="Forward and Backward" width="480">

Let's consider a simple logistic regression classifier.

The **forward** pass computes the output given the input for inference.
In forward, Caffe composes the computation of each layer to compute the "function" represented by the model.
This pass goes from bottom to top.

<img src="fig/forward.jpg" alt="Forward pass" width="320">

The data $$x$$ is passed through an inner product layer for $$g(x)$$, then through a softmax for $$h(g(x))$$, and through the softmax loss to give $$f_W(x)$$.

The **backward** pass computes the gradient given the loss for learning.
In backward, Caffe reverse-composes the gradient of each layer to compute the gradient of the whole model by automatic differentiation.
This is back-propagation.
This pass goes from top to bottom.

<img src="fig/backward.jpg" alt="Backward pass" width="320">

The backward pass begins with the loss and computes the gradient with respect to the output $$\frac{\partial f_W}{\partial h}$$. The gradient with respect to the rest of the model is computed layer-by-layer through the chain rule. Layers with parameters, like the `INNER_PRODUCT` layer, compute the gradient with respect to their parameters $$\frac{\partial f_W}{\partial W_{\text{ip}}}$$ during the backward step.

These computations follow immediately from defining the model: Caffe plans and carries out the forward and backward passes for you.

- The `Net::Forward()` and `Net::Backward()` methods carry out the respective passes while `Layer::Forward()` and `Layer::Backward()` compute each step.
- Every layer type has `forward_{cpu,gpu}()` and `backward_{cpu,gpu}()` methods to compute its steps according to the mode of computation. A layer may only implement CPU or GPU mode due to constraints or convenience.

The [Solver](solver.html) optimizes a model by first calling forward to yield the output and loss, then calling backward to generate the gradient of the model, and then incorporating the gradient into a weight update that attempts to minimize the loss. This division of labor between the Solver, Net, and Layer keeps Caffe modular and open to development.
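
In pycaffe this cycle can be sketched as follows; it is a minimal illustration assuming the LeNet solver definition from the MNIST example, not a complete training script.

    import caffe

    caffe.set_mode_cpu()
    solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')

    solver.net.forward()   # forward: compute the output and the loss
    solver.net.backward()  # backward: compute the gradient by back-propagation
    solver.step(1)         # or let the solver run forward, backward, and update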

For the details of the forward and backward steps of Caffe's layer types, refer to the [layer catalogue](layers.html).

@@ -0,0 +1,51 @@
---
title: Caffe Tutorial
---
# Caffe Tutorial

Caffe is a deep learning framework and this tutorial explains its philosophy, architecture, and usage.
This is a practical guide and framework introduction, so the full frontier, context, and history of deep learning cannot be covered here.
While explanations will be given where possible, a background in machine learning and neural networks is helpful.

## Philosophy

In one sip, Caffe is brewed for

- Expression: models and optimizations are defined as plaintext schemas instead of code.
- Speed: for research and industry alike, speed is crucial for state-of-the-art models and massive data.
- Modularity: new tasks and settings require flexibility and extension.
- Openness: scientific and applied progress calls for common code, reference models, and reproducibility.
- Community: academic research, startup prototypes, and industrial applications all share strength by joint discussion and development in a BSD-2 project.

and these principles direct the project.

## Tour

- [Nets, Layers, and Blobs](net_layer_blob.html): the anatomy of a Caffe model.
- [Forward / Backward](forward_backward.html): the essential computations of layered compositional models.
- [Loss](loss.html): the task to be learned is defined by the loss.
- [Solver](solver.html): the solver coordinates model optimization.
- [Layer Catalogue](layers.html): the layer is the fundamental unit of modeling and computation -- Caffe's catalogue includes layers for state-of-the-art models.
- [Interfaces](interfaces.html): command line, Python, and MATLAB Caffe.
- [Data](data.html): how to caffeinate data for model input.

For a closer look at a few details:

- [Caffeinated Convolution](convolution.html): how Caffe computes convolutions.

## Deeper Learning

There are helpful references freely online for deep learning that complement our hands-on tutorial.
These cover introductory and advanced material, background and history, and the latest advances.

The [Tutorial on Deep Learning for Vision](https://sites.google.com/site/deeplearningcvpr2014/) from CVPR '14 is a good companion tutorial for researchers.
Once you have the framework and practice foundations from the Caffe tutorial, explore the fundamental ideas and advanced research directions in the CVPR '14 tutorial.

A broad introduction is given in the free online draft of [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/index.html) by Michael Nielsen. In particular the chapters on using neural nets and how backpropagation works are helpful if you are new to the subject.

These recent academic tutorials cover deep learning for researchers in machine learning and vision:

- [Deep Learning Tutorial](http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf) by Yann LeCun (NYU, Facebook) and Marc'Aurelio Ranzato (Facebook). ICML 2013 tutorial.
- [LISA Deep Learning Tutorial](http://deeplearning.net/tutorial/deeplearning.pdf) by the LISA Lab directed by Yoshua Bengio (U. Montréal).

For an exposition of neural networks in circuits and code, check out [Understanding Neural Networks from a Programmer's Perspective](http://karpathy.github.io/neuralnets/) by Andrej Karpathy (Stanford).

@@ -0,0 +1,286 @@
---
title: Interfaces
---
# Interfaces

Caffe has command line, Python, and MATLAB interfaces for day-to-day usage, interfacing with research code, and rapid prototyping. While Caffe is a C++ library at heart, exposing a modular interface for development, not every occasion calls for custom compilation. The cmdcaffe, pycaffe, and matcaffe interfaces are here for you.

## Command Line

The command line interface -- cmdcaffe -- is the `caffe` tool for model training, scoring, and diagnostics. Run `caffe` without any arguments for help. This tool and others are found in caffe/build/tools. (The following example calls require completing the LeNet / MNIST example first.)

**Training**: `caffe train` learns models from scratch, resumes learning from saved snapshots, and fine-tunes models to new data and tasks:

* All training requires a solver configuration through the `-solver solver.prototxt` argument.
* Resuming requires the `-snapshot model_iter_1000.solverstate` argument to load the solver snapshot.
* Fine-tuning requires the `-weights model.caffemodel` argument for the model initialization.

For example, you can run:

    # train LeNet
    caffe train -solver examples/mnist/lenet_solver.prototxt
    # train on GPU 2
    caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 2
    # resume training from the half-way point snapshot
    caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot examples/mnist/lenet_iter_5000.solverstate

For a full example of fine-tuning, see examples/finetuning_on_flickr_style, but the training call alone is

    # fine-tune CaffeNet model weights for style recognition
    caffe train -solver examples/finetuning_on_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel

**Testing**: `caffe test` scores models by running them in the test phase and reports the net output as its score. The net architecture must be properly defined to output an accuracy measure or loss. Each per-batch score is reported, and the grand average is reported last.

    # score the learned LeNet model on the validation set as defined in the
    # model architecture lenet_train_test.prototxt
    caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100

**Benchmarking**: `caffe time` benchmarks model execution layer-by-layer through timing and synchronization. This is useful to check system performance and measure relative execution times for models.

    # time LeNet training on CPU for 10 iterations
    caffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10
    # time LeNet training on GPU for the default 50 iterations
    caffe time -model examples/mnist/lenet_train_test.prototxt -gpu 0
    # time a model architecture with the given weights on the first GPU for 10 iterations
    caffe time -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 10

**Diagnostics**: `caffe device_query` reports GPU details for reference and for checking device ordinals for running on a given device in multi-GPU machines.

    # query the first device
    caffe device_query -gpu 0

**Parallelism**: the `-gpu` flag to the `caffe` tool can take a comma-separated list of IDs to run on multiple GPUs. A solver and net will be instantiated for each GPU so the batch size is effectively multiplied by the number of GPUs. To reproduce single-GPU training, reduce the batch size in the network definition accordingly.

    # train on GPUs 0 & 1 (doubling the batch size)
    caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 0,1
    # train on all GPUs (multiplying batch size by number of devices)
    caffe train -solver examples/mnist/lenet_solver.prototxt -gpu all

## Python

The Python interface -- pycaffe -- is the `caffe` module and its scripts in caffe/python. `import caffe` to load models, do forward and backward, handle IO, visualize networks, and even instrument model solving. All model data, derivatives, and parameters are exposed for reading and writing.

- `caffe.Net` is the central interface for loading, configuring, and running models. `caffe.Classifier` and `caffe.Detector` provide convenience interfaces for common tasks.
- `caffe.SGDSolver` exposes the solving interface.
- `caffe.io` handles input / output with preprocessing and protocol buffers.
- `caffe.draw` visualizes network architectures.
- Caffe blobs are exposed as numpy ndarrays for ease-of-use and efficiency.

Tutorial IPython notebooks are found in caffe/examples: do `ipython notebook caffe/examples` to try them. For developer reference, docstrings can be found throughout the code.

Compile pycaffe by `make pycaffe`.
Add the module directory to your `$PYTHONPATH` by `export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH` or the like for `import caffe`.
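
As a quick taste, the following sketch loads a trained net and classifies one input; it assumes pycaffe is built, and it uses the CaffeNet model paths shown in the MATLAB section below, with random data standing in for a real preprocessed image.

    import numpy as np
    import caffe

    caffe.set_mode_cpu()
    net = caffe.Net('models/bvlc_reference_caffenet/deploy.prototxt',
                    'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                    caffe.TEST)

    # blobs are numpy ndarrays in (num, channels, height, width) order
    net.blobs['data'].reshape(1, 3, 227, 227)
    net.blobs['data'].data[...] = np.random.rand(1, 3, 227, 227)

    out = net.forward()
    print(out['prob'].argmax())  # index of the highest-scoring class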

## MATLAB

The MATLAB interface -- matcaffe -- is the `caffe` package in caffe/matlab which lets you integrate Caffe into your Matlab code.

In MatCaffe, you can

* Create multiple Nets in Matlab
* Do forward and backward computation
* Access any layer within a network, and any parameter blob in a layer
* Get and set data or diff to any blob within a network, not restricted to input blobs or output blobs
* Save a network's parameters to file, and load parameters from file
* Reshape a blob and reshape a network
* Edit network parameters and do network surgery
* Create multiple Solvers in Matlab for training
* Resume training from solver snapshots
* Access train net and test nets in a solver
* Run for a certain number of iterations and give back control to Matlab
* Intermingle arbitrary Matlab code with gradient steps

An ILSVRC image classification demo is in caffe/matlab/demo/classification_demo.m (you need to download BVLC CaffeNet from the [Model Zoo](http://caffe.berkeleyvision.org/model_zoo.html) to run it).

### Build MatCaffe

Build MatCaffe with `make all matcaffe`. After that, you may test it using `make mattest`.

Common issue: if you run into error messages like `libstdc++.so.6:version 'GLIBCXX_3.4.15' not found` during `make mattest`, it usually means that your Matlab's runtime libraries do not match your compile-time libraries. You may need to do the following before you start Matlab:

    export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:/usr/local/cuda/lib64
    export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6

Or the equivalent based on where things are installed on your system, and do `make mattest` again to see if the issue is fixed. Note: this issue is sometimes more complicated since during its startup Matlab may overwrite your `LD_LIBRARY_PATH` environment variable. You can run `!ldd ./matlab/+caffe/private/caffe_.mexa64` (the mex extension may differ on your system) in Matlab to see its runtime libraries, and preload your compile-time libraries by exporting them to your `LD_PRELOAD` environment variable.

After successful building and testing, add this package to the Matlab search PATH by starting `matlab` from the caffe root folder and running the following command in the Matlab command window.

    addpath ./matlab

You can save your Matlab search PATH by running `savepath` so that you don't have to run the command above again every time you use MatCaffe.

### Use MatCaffe

MatCaffe is very similar to PyCaffe in usage.

The examples below show detailed usage and assume you have downloaded BVLC CaffeNet from the [Model Zoo](http://caffe.berkeleyvision.org/model_zoo.html) and started `matlab` from the caffe root folder.

    model = './models/bvlc_reference_caffenet/deploy.prototxt';
    weights = './models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel';

#### Set mode and device

**Mode and device should always be set BEFORE you create a net or a solver.**

Use CPU:

    caffe.set_mode_cpu();

Use GPU and specify its gpu_id:

    caffe.set_mode_gpu();
    caffe.set_device(gpu_id);

#### Create a network and access its layers and blobs

Create a network:

    net = caffe.Net(model, weights, 'test'); % create net and load weights

Or

    net = caffe.Net(model, 'test'); % create net but not load weights
    net.copy_from(weights); % load weights

which creates the `net` object as

    Net with properties:

      layer_vec: [1x23 caffe.Layer]
      blob_vec: [1x15 caffe.Blob]
      inputs: {'data'}
      outputs: {'prob'}
      name2layer_index: [23x1 containers.Map]
      name2blob_index: [15x1 containers.Map]
      layer_names: {23x1 cell}
      blob_names: {15x1 cell}

The two `containers.Map` objects are useful to find the index of a layer or a blob by its name.

You have access to every blob in this network. To fill blob 'data' with all ones:

    net.blobs('data').set_data(ones(net.blobs('data').shape));

To multiply all values in blob 'data' by 10:

    net.blobs('data').set_data(net.blobs('data').get_data() * 10);

**Be aware that since Matlab is 1-indexed and column-major, the usual 4 blob dimensions in Matlab are `[width, height, channels, num]`, and `width` is the fastest dimension. Also be aware that images are in BGR channels.** Also, Caffe uses single-precision float data. If your data is not single, `set_data` will automatically convert it to single.

You also have access to every layer, so you can do network surgery. For example, to multiply conv1 parameters by 10:

    net.params('conv1', 1).set_data(net.params('conv1', 1).get_data() * 10); % set weights
    net.params('conv1', 2).set_data(net.params('conv1', 2).get_data() * 10); % set bias

Alternatively, you can use

    net.layers('conv1').params(1).set_data(net.layers('conv1').params(1).get_data() * 10);
    net.layers('conv1').params(2).set_data(net.layers('conv1').params(2).get_data() * 10);

To save the network you just modified:

    net.save('my_net.caffemodel');

To get a layer's type (string):

    layer_type = net.layers('conv1').type;

#### Forward and backward

A forward pass can be done using `net.forward` or `net.forward_prefilled`. Function `net.forward` takes in a cell array of N-D arrays containing data of input blob(s) and outputs a cell array containing data from output blob(s). Function `net.forward_prefilled` uses existing data in input blob(s) during the forward pass, takes no input, and produces no output. After creating some data for input blobs like `data = rand(net.blobs('data').shape);` you can run

    res = net.forward({data});
    prob = res{1};

Or

    net.blobs('data').set_data(data);
    net.forward_prefilled();
    prob = net.blobs('prob').get_data();

Backward is similar using `net.backward` or `net.backward_prefilled` and replacing `get_data` and `set_data` with `get_diff` and `set_diff`. After creating some gradients for output blobs like `prob_diff = rand(net.blobs('prob').shape);` you can run

    res = net.backward({prob_diff});
    data_diff = res{1};

Or

    net.blobs('prob').set_diff(prob_diff);
    net.backward_prefilled();
    data_diff = net.blobs('data').get_diff();

**However, the backward computation above doesn't get correct results, because Caffe decides that the network does not need backward computation. To get correct backward results, you need to set `'force_backward: true'` in your network prototxt.**

After performing forward or backward passes, you can also get the data or diff in internal blobs. For example, to extract pool5 features after a forward pass:

    pool5_feat = net.blobs('pool5').get_data();

#### Reshape

Assume you want to run 1 image at a time instead of 10:

    net.blobs('data').reshape([227 227 3 1]); % reshape blob 'data'
    net.reshape();

Then the whole network is reshaped, and now `net.blobs('prob').shape` should be `[1000 1]`.

#### Training

Assuming you have created training and validation lmdbs following our [ImageNet Tutorial](http://caffe.berkeleyvision.org/gathered/examples/imagenet.html), create a solver and train on the ILSVRC 2012 classification dataset with:

    solver = caffe.Solver('./models/bvlc_reference_caffenet/solver.prototxt');

which creates the `solver` object as

    Solver with properties:

      net: [1x1 caffe.Net]
      test_nets: [1x1 caffe.Net]

To train:

    solver.solve();

Or train for only 1000 iterations (so that you can do something to its net before training more iterations)

    solver.step(1000);

To get the iteration number:

    iter = solver.iter();

To get its networks:

    train_net = solver.net;
    test_net = solver.test_nets(1);

To resume from a snapshot "your_snapshot.solverstate":

    solver.restore('your_snapshot.solverstate');

#### Input and output

The `caffe.io` class provides basic input functions `load_image` and `read_mean`. For example, to read the ILSVRC 2012 mean file (assuming you have downloaded the imagenet example auxiliary files by running `./data/ilsvrc12/get_ilsvrc_aux.sh`):

    mean_data = caffe.io.read_mean('./data/ilsvrc12/imagenet_mean.binaryproto');

To read Caffe's example image and resize it to `[width, height]`, supposing we want `width = 256; height = 256;`

    im_data = caffe.io.load_image('./examples/images/cat.jpg');
    im_data = imresize(im_data, [width, height]); % resize using Matlab's imresize

**Keep in mind that `width` is the fastest dimension and channels are BGR, which is different from the usual way that Matlab stores an image.** If you don't want to use `caffe.io.load_image` and prefer to load an image by yourself, you can do

    im_data = imread('./examples/images/cat.jpg'); % read image
    im_data = im_data(:, :, [3, 2, 1]); % convert from RGB to BGR
    im_data = permute(im_data, [2, 1, 3]); % permute width and height
    im_data = single(im_data); % convert to single precision

Also, you may take a look at caffe/matlab/demo/classification_demo.m to see how to prepare input by taking crops from an image.

We show in caffe/matlab/hdf5creation how to read and write HDF5 data with Matlab. We do not provide extra functions for data output as Matlab itself is already quite powerful in output.

#### Clear nets and solvers

Call `caffe.reset_all()` to clear all solvers and stand-alone nets you have created.

@@ -0,0 +1,135 @@
---
title: Layer Catalogue
---

# Layers

To create a Caffe model you need to define the model architecture in a protocol buffer definition file (prototxt).

Caffe layers and their parameters are defined in the protocol buffer definitions for the project in [caffe.proto](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto).

## Data Layers

Data enters Caffe through data layers: they lie at the bottom of nets. Data can come from efficient databases (LevelDB or LMDB), directly from memory, or, when efficiency is not critical, from files on disk in HDF5 or common image formats.

Common input preprocessing (mean subtraction, scaling, random cropping, and mirroring) is available by specifying `TransformationParameter`s on some of the layers.
The [bias](layers/bias.html), [scale](layers/scale.html), and [crop](layers/crop.html) layers can be helpful with transforming the inputs when `TransformationParameter` isn't available.

Layers:

* [Image Data](layers/imagedata.html) - read raw images.
* [Database](layers/data.html) - read data from LEVELDB or LMDB.
* [HDF5 Input](layers/hdf5data.html) - read HDF5 data, allows data of arbitrary dimensions.
* [HDF5 Output](layers/hdf5output.html) - write data as HDF5.
* [Input](layers/input.html) - typically used for networks that are being deployed.
* [Window Data](layers/windowdata.html) - read window data file.
* [Memory Data](layers/memorydata.html) - read data directly from memory.
* [Dummy Data](layers/dummydata.html) - for static data and debugging.

Note that the [Python](layers/python.html) Layer can be useful for creating custom data layers.

## Vision Layers

Vision layers usually take *images* as input and produce other *images* as output, although they can take data of other types and dimensions.
A typical "image" in the real world may have one color channel ($$c = 1$$), as in a grayscale image, or three color channels ($$c = 3$$) as in an RGB (red, green, blue) image.
But in this context, the distinguishing characteristic of an image is its spatial structure: usually an image has some non-trivial height $$h > 1$$ and width $$w > 1$$.
This 2D geometry naturally lends itself to certain decisions about how to process the input.
In particular, most of the vision layers work by applying a particular operation to some region of the input to produce a corresponding region of the output.
In contrast, other layers (with few exceptions) ignore the spatial structure of the input, effectively treating it as "one big vector" with dimension $$chw$$.
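
In NumPy terms (an illustration of the distinction, not Caffe code):

    import numpy as np
    blob = np.zeros((3, 32, 32))  # a structured (c, h, w) input for vision layers
    vector = blob.reshape(-1)     # the same values as "one big vector"
    print(vector.shape)           # (3072,) == 3 * 32 * 32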

Layers:

* [Convolution Layer](layers/convolution.html) - convolves the input image with a set of learnable filters, each producing one feature map in the output image.
* [Pooling Layer](layers/pooling.html) - max, average, or stochastic pooling.
* [Spatial Pyramid Pooling (SPP)](layers/spp.html)
* [Crop](layers/crop.html) - perform cropping transformation.
* [Deconvolution Layer](layers/deconvolution.html) - transposed convolution.

* [Im2Col](layers/im2col.html) - relic helper layer that is not used much anymore.

## Recurrent Layers

Layers:

* [Recurrent](layers/recurrent.html)
* [RNN](layers/rnn.html)
* [Long-Short Term Memory (LSTM)](layers/lstm.html)

## Common Layers

Layers:

* [Inner Product](layers/innerproduct.html) - fully connected layer.
* [Dropout](layers/dropout.html)
* [Embed](layers/embed.html) - for learning embeddings of one-hot encoded vectors (takes index as input).

## Normalization Layers

* [Local Response Normalization (LRN)](layers/lrn.html) - performs a kind of "lateral inhibition" by normalizing over local input regions.
* [Mean Variance Normalization (MVN)](layers/mvn.html) - performs contrast normalization / instance normalization.
* [Batch Normalization](layers/batchnorm.html) - performs normalization over mini-batches.

The [bias](layers/bias.html) and [scale](layers/scale.html) layers can be helpful in combination with normalization.

## Activation / Neuron Layers

In general, activation / neuron layers are element-wise operators, taking one bottom blob and producing one top blob of the same size. In the layers below, we will ignore the input and output sizes as they are identical:

* Input
    - n * c * h * w
* Output
    - n * c * h * w

Layers:

* [ReLU / Rectified-Linear and Leaky-ReLU](layers/relu.html) - ReLU and Leaky-ReLU rectification.
* [PReLU](layers/prelu.html) - parametric ReLU.
* [ELU](layers/elu.html) - exponential linear rectification.
* [Sigmoid](layers/sigmoid.html)
* [TanH](layers/tanh.html)
* [Absolute Value](layers/abs.html)
* [Power](layers/power.html) - f(x) = (shift + scale * x) ^ power.
* [Exp](layers/exp.html) - f(x) = base ^ (shift + scale * x).
* [Log](layers/log.html) - f(x) = log(x).
* [BNLL](layers/bnll.html) - f(x) = log(1 + exp(x)).
* [Threshold](layers/threshold.html) - performs a step function at a user-defined threshold.
* [Bias](layers/bias.html) - adds a bias to a blob that can either be learned or fixed.
* [Scale](layers/scale.html) - scales a blob by an amount that can either be learned or fixed.

## Utility Layers

Layers:

* [Flatten](layers/flatten.html)
* [Reshape](layers/reshape.html)
* [Batch Reindex](layers/batchreindex.html)

* [Split](layers/split.html)
* [Concat](layers/concat.html)
* [Slicing](layers/slice.html)
* [Eltwise](layers/eltwise.html) - element-wise operations such as product or sum between two blobs.
* [Filter / Mask](layers/filter.html) - mask or select output using last blob.
* [Parameter](layers/parameter.html) - enable parameters to be shared between layers.
* [Reduction](layers/reduction.html) - reduce input blob to scalar blob using operations such as sum or mean.
* [Silence](layers/silence.html) - prevent top-level blobs from being printed during training.

* [ArgMax](layers/argmax.html)
* [Softmax](layers/softmax.html)

* [Python](layers/python.html) - allows custom Python layers.

## Loss Layers

Loss drives learning by comparing an output to a target and assigning cost to minimize. The loss itself is computed by the forward pass and the gradient w.r.t. the loss is computed by the backward pass.

Layers:

* [Multinomial Logistic Loss](layers/multinomiallogisticloss.html)
* [Infogain Loss](layers/infogainloss.html) - a generalization of MultinomialLogisticLossLayer.
* [Softmax with Loss](layers/softmaxwithloss.html) - computes the multinomial logistic loss of the softmax of its inputs. It's conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient (see the sketch after this list).
* [Sum-of-Squares / Euclidean](layers/euclideanloss.html) - computes the sum of squares of differences of its two inputs, $$\frac 1 {2N} \sum_{i=1}^N \| x^1_i - x^2_i \|_2^2$$.
* [Hinge / Margin](layers/hingeloss.html) - the hinge loss layer computes a one-vs-all hinge (L1) or squared hinge loss (L2).
* [Sigmoid Cross-Entropy Loss](layers/sigmoidcrossentropyloss.html) - computes the cross-entropy (logistic) loss, often used for predicting targets interpreted as probabilities.
* [Accuracy / Top-k layer](layers/accuracy.html) - scores the output as an accuracy with respect to target -- it is not actually a loss and has no backward step.
* [Contrastive Loss](layers/contrastiveloss.html)
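
Why fusing the softmax with the log loss is more stable can be seen in a few lines of NumPy (an illustration, not Caffe's implementation):

    import numpy as np

    z = np.array([1000.0, 10.0, -10.0])          # large logits overflow a naive softmax
    naive = np.exp(z) / np.exp(z).sum()          # nan: exp(1000) overflows to inf

    log_p = z - z.max()                          # shift by the max first...
    log_p = log_p - np.log(np.exp(log_p).sum())  # ...then take a stable log-softmax
    loss = -log_p[0]                             # multinomial logistic loss for label 0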

@@ -0,0 +1,22 @@
---
title: Absolute Value Layer
---

# Absolute Value Layer

* Layer type: `AbsVal`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1AbsValLayer.html)
* Header: [`./include/caffe/layers/absval_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/absval_layer.hpp)
* CPU implementation: [`./src/caffe/layers/absval_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/absval_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/absval_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/absval_layer.cu)

* Sample

        layer {
          name: "layer"
          bottom: "in"
          top: "out"
          type: "AbsVal"
        }

The `AbsVal` layer computes the output as abs(x) for each input element x.

@@ -0,0 +1,20 @@
---
title: Accuracy and Top-k
---

# Accuracy and Top-k

`Accuracy` scores the output as the accuracy of output with respect to target -- it is not actually a loss and has no backward step.
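
In NumPy terms, top-k accuracy can be sketched as follows (an illustration, not Caffe's implementation): a prediction counts as correct when the true label is among the k highest-scoring classes.

    import numpy as np

    def top_k_accuracy(scores, labels, k=1):
        """scores: (num, classes) array; labels: (num,) integer class indices."""
        top_k = np.argsort(-scores, axis=1)[:, :k]     # k best classes per example
        hits = (top_k == labels[:, None]).any(axis=1)
        return hits.mean()

    scores = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
    labels = np.array([1, 1])
    print(top_k_accuracy(scores, labels, k=1))  # 0.5: only the first is right
    print(top_k_accuracy(scores, labels, k=2))  # 1.0: label 1 is in the top 2 of both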

## Parameters
* Parameters (`AccuracyParameter accuracy_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/AccuracyParameter.txt %}
{% endhighlight %}

@@ -0,0 +1,18 @@
---
title: ArgMax Layer
---

# ArgMax Layer

* Layer type: `ArgMax`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ArgMaxLayer.html)
* Header: [`./include/caffe/layers/argmax_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/argmax_layer.hpp)
* CPU implementation: [`./src/caffe/layers/argmax_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/argmax_layer.cpp)

## Parameters
* Parameters (`ArgMaxParameter argmax_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/ArgMaxParameter.txt %}
{% endhighlight %}

@@ -0,0 +1,20 @@
---
title: Batch Norm Layer
---

# Batch Norm Layer

* Layer type: `BatchNorm`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1BatchNormLayer.html)
* Header: [`./include/caffe/layers/batch_norm_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/batch_norm_layer.hpp)
* CPU implementation: [`./src/caffe/layers/batch_norm_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/batch_norm_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/batch_norm_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/batch_norm_layer.cu)

## Parameters

* Parameters (`BatchNormParameter batch_norm_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/BatchNormParameter.txt %}
{% endhighlight %}

@@ -0,0 +1,16 @@
---
title: Batch Reindex Layer
---

# Batch Reindex Layer

* Layer type: `BatchReindex`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1BatchReindexLayer.html)
* Header: [`./include/caffe/layers/batch_reindex_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/batch_reindex_layer.hpp)
* CPU implementation: [`./src/caffe/layers/batch_reindex_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/batch_reindex_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/batch_reindex_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/batch_reindex_layer.cu)

## Parameters

No parameters.

@@ -0,0 +1,19 @@
---
title: Bias Layer
---

# Bias Layer

* Layer type: `Bias`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1BiasLayer.html)
* Header: [`./include/caffe/layers/bias_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/bias_layer.hpp)
* CPU implementation: [`./src/caffe/layers/bias_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/bias_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/bias_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/bias_layer.cu)

## Parameters
* Parameters (`BiasParameter bias_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/BiasParameter.txt %}
{% endhighlight %}

@@ -0,0 +1,25 @@
---
title: BNLL Layer
---

# BNLL Layer

* Layer type: `BNLL`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1BNLLLayer.html)
* Header: [`./include/caffe/layers/bnll_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/bnll_layer.hpp)
* CPU implementation: [`./src/caffe/layers/bnll_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/bnll_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/bnll_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/bnll_layer.cu)

The `BNLL` (binomial normal log likelihood) layer computes the output as log(1 + exp(x)) for each input element x.

## Parameters
No parameters.

## Sample

    layer {
      name: "layer"
      bottom: "in"
      top: "out"
      type: "BNLL"
    }

@@ -0,0 +1,40 @@
---
title: Concat Layer
---

# Concat Layer

* Layer type: `Concat`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ConcatLayer.html)
* Header: [`./include/caffe/layers/concat_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/concat_layer.hpp)
* CPU implementation: [`./src/caffe/layers/concat_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/concat_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/concat_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/concat_layer.cu)
* Input
    - `n_i * c_i * h * w` for each input blob i from 1 to K.
* Output
    - if `axis = 0`: `(n_1 + n_2 + ... + n_K) * c_1 * h * w`, and all input `c_i` should be the same.
    - if `axis = 1`: `n_1 * (c_1 + c_2 + ... + c_K) * h * w`, and all input `n_i` should be the same.
* Sample

        layer {
          name: "concat"
          bottom: "in1"
          bottom: "in2"
          top: "out"
          type: "Concat"
          concat_param {
            axis: 1
          }
        }

The `Concat` layer is a utility layer that concatenates its multiple input blobs to one single output blob.
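
The output shapes above, in NumPy terms (an illustration):

    import numpy as np
    a = np.zeros((2, 3, 4, 5))   # n=2, c=3, h=4, w=5
    b = np.zeros((2, 6, 4, 5))   # same n, h, w; different c
    out = np.concatenate([a, b], axis=1)
    print(out.shape)             # (2, 9, 4, 5): channels add up when axis = 1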

## Parameters
* Parameters (`ConcatParameter concat_param`)
    - Optional
        - `axis` [default 1]: 0 for concatenation along num and 1 for channels.
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/ConcatParameter.txt %}
{% endhighlight %}

@@ -0,0 +1,20 @@
---
title: Contrastive Loss Layer
---

# Contrastive Loss Layer

* Layer type: `ContrastiveLoss`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ContrastiveLossLayer.html)
* Header: [`./include/caffe/layers/contrastive_loss_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/contrastive_loss_layer.hpp)
* CPU implementation: [`./src/caffe/layers/contrastive_loss_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/contrastive_loss_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/contrastive_loss_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/contrastive_loss_layer.cu)

## Parameters

* Parameters (`ContrastiveLossParameter contrastive_loss_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/ContrastiveLossParameter.txt %}
{% endhighlight %}

@@ -0,0 +1,63 @@
---
title: Convolution Layer
---

# Convolution Layer

* Layer type: `Convolution`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ConvolutionLayer.html)
* Header: [`./include/caffe/layers/conv_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/conv_layer.hpp)
* CPU implementation: [`./src/caffe/layers/conv_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/conv_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cu)
* Input
    - `n * c_i * h_i * w_i`
* Output
    - `n * c_o * h_o * w_o`, where `h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1` and `w_o` likewise.

The `Convolution` layer convolves the input image with a set of learnable filters, each producing one feature map in the output image.
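
As a worked example of the output-size formula above, take the conv1 setting from the sample below: a 227x227 input, an 11x11 kernel, stride 4, and no padding give `h_o = (227 + 0 - 11) / 4 + 1 = 55`, so conv1 outputs 96 feature maps of size 55x55.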
|
||||
|
||||
## Sample
|
||||
|
||||
Sample (as seen in [`./models/bvlc_reference_caffenet/train_val.prototxt`](https://github.com/BVLC/caffe/blob/master/models/bvlc_reference_caffenet/train_val.prototxt)):
|
||||
|
||||
layer {
|
||||
name: "conv1"
|
||||
type: "Convolution"
|
||||
bottom: "data"
|
||||
top: "conv1"
|
||||
# learning rate and decay multipliers for the filters
|
||||
param { lr_mult: 1 decay_mult: 1 }
|
||||
# learning rate and decay multipliers for the biases
|
||||
param { lr_mult: 2 decay_mult: 0 }
|
||||
convolution_param {
|
||||
num_output: 96 # learn 96 filters
|
||||
kernel_size: 11 # each filter is 11x11
|
||||
stride: 4 # step 4 pixels between each filter application
|
||||
weight_filler {
|
||||
type: "gaussian" # initialize the filters from a Gaussian
|
||||
std: 0.01 # distribution with stdev 0.01 (default mean: 0)
|
||||
}
|
||||
bias_filler {
|
||||
type: "constant" # initialize the biases to zero (0)
|
||||
value: 0
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
## Parameters
|
||||
* Parameters (`ConvolutionParameter convolution_param`)
|
||||
- Required
|
||||
- `num_output` (`c_o`): the number of filters
|
||||
- `kernel_size` (or `kernel_h` and `kernel_w`): specifies height and width of each filter
|
||||
- Strongly Recommended
|
||||
- `weight_filler` [default `type: 'constant' value: 0`]
|
||||
- Optional
|
||||
- `bias_term` [default `true`]: specifies whether to learn and apply a set of additive biases to the filter outputs
|
||||
- `pad` (or `pad_h` and `pad_w`) [default 0]: specifies the number of pixels to (implicitly) add to each side of the input
|
||||
- `stride` (or `stride_h` and `stride_w`) [default 1]: specifies the intervals at which to apply the filters to the input
|
||||
- `group` (g) [default 1]: If g > 1, we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the $$i$$th output group channels will be only connected to the $$i$$th input group channels.
|
||||
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto)):
|
||||
|
||||
{% highlight Protobuf %}
|
||||
{% include proto/ConvolutionParameter.txt %}
|
||||
{% endhighlight %}
|
|
@ -0,0 +1,20 @@
|
|||
---
|
||||
title: Crop Layer
|
||||
---
|
||||
|
||||
# Crop Layer
|
||||
|
||||
* Layer type: `Crop`
|
||||
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1CropLayer.html)
|
||||
* Header: [`./include/caffe/layers/crop_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/crop_layer.hpp)
|
||||
* CPU implementation: [`./src/caffe/layers/crop_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/crop_layer.cpp)
|
||||
* CUDA GPU implementation: [`./src/caffe/layers/crop_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/crop_layer.cu)
|
||||
|
||||
## Parameters
|
||||
|
||||
* Parameters (`CropParameter crop_param`)
|
||||
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto)):
|
||||
|
||||
{% highlight Protobuf %}
|
||||
{% include proto/CropParameter.txt %}
|
||||
{% endhighlight %}
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
title: Database Layer
|
||||
---
|
||||
|
||||
# Database Layer
|
||||
|
||||
* Layer type: `Data`
|
||||
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1DataLayer.html)
|
||||
* Header: [`./include/caffe/layers/data_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/data_layer.hpp)
|
||||
* CPU implementation: [`./src/caffe/layers/data_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/data_layer.cpp)
|
||||
|
||||
|
||||
## Parameters
|
||||
|
||||
* Parameters (`DataParameter data_param`)
|
||||
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto)):
|
||||
|
||||
{% highlight Protobuf %}
|
||||
{% include proto/DataParameter.txt %}
|
||||
{% endhighlight %}
|
||||
|
||||
* Parameters
|
||||
- Required
|
||||
- `source`: the name of the directory containing the database
|
||||
- `batch_size`: the number of inputs to process at one time
|
||||
- Optional
|
||||
- `rand_skip`: skip up to this number of inputs at the beginning; useful for asynchronous sgd
|
||||
- `backend` [default `LEVELDB`]: choose whether to use a `LEVELDB` or `LMDB`
|
||||
|
|
@ -0,0 +1,22 @@
|
|||
---
|
||||
title: Deconvolution Layer
|
||||
---
|
||||
|
||||
# Deconvolution Layer
|
||||
|
||||
* Layer type: `Deconvolution`
|
||||
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1DeconvolutionLayer.html)
|
||||
* Header: [`./include/caffe/layers/deconv_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/deconv_layer.hpp)
|
||||
* CPU implementation: [`./src/caffe/layers/deconv_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/deconv_layer.cpp)
|
||||
* CUDA GPU implementation: [`./src/caffe/layers/deconv_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/deconv_layer.cu)
|
||||
|
||||
## Parameters
|
||||
|
||||
Uses the same parameters as the Convolution layer.
|
||||
|
||||
* Parameters (`ConvolutionParameter convolution_param`)
|
||||
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto)):
|
||||
|
||||
{% highlight Protobuf %}
|
||||
{% include proto/ConvolutionParameter.txt %}
|
||||
{% endhighlight %}
|
|
@ -0,0 +1,20 @@
|
|||
---
|
||||
title: Dropout Layer
|
||||
---
|
||||
|
||||
# Dropout Layer
|
||||
|
||||
* Layer type: `Dropout`
|
||||
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1DropoutLayer.html)
|
||||
* Header: [`./include/caffe/layers/dropout_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/dropout_layer.hpp)
|
||||
* CPU implementation: [`./src/caffe/layers/dropout_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/dropout_layer.cpp)
|
||||
* CUDA GPU implementation: [`./src/caffe/layers/dropout_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/dropout_layer.cu)
|
||||
|
||||
## Parameters
|
||||
|
||||
* Parameters (`DropoutParameter dropout_param`)
|
||||
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto)):
|
||||
|
||||
{% highlight Protobuf %}
|
||||
{% include proto/DropoutParameter.txt %}
|
||||
{% endhighlight %}
|
|
@@ -0,0 +1,20 @@
---
title: Dummy Data Layer
---

# Dummy Data Layer

* Layer type: `DummyData`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1DummyDataLayer.html)
* Header: [`./include/caffe/layers/dummy_data_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/dummy_data_layer.hpp)
* CPU implementation: [`./src/caffe/layers/dummy_data_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/dummy_data_layer.cpp)

## Parameters

* Parameters (`DummyDataParameter dummy_data_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/DummyDataParameter.txt %}
{% endhighlight %}
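A hedged sketch of a `DummyData` layer producing a batch of Gaussian noise (the shape and filler are arbitrary choices for illustration):

      layer {
        name: "dummy"
        type: "DummyData"
        top: "data"
        dummy_data_param {
          shape { dim: 10 dim: 1 dim: 28 dim: 28 }
          data_filler { type: "gaussian" std: 1.0 }
        }
      }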
@@ -0,0 +1,20 @@
---
title: Eltwise Layer
---

# Eltwise Layer

* Layer type: `Eltwise`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1EltwiseLayer.html)
* Header: [`./include/caffe/layers/eltwise_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/eltwise_layer.hpp)
* CPU implementation: [`./src/caffe/layers/eltwise_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/eltwise_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/eltwise_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/eltwise_layer.cu)

## Parameters

* Parameters (`EltwiseParameter eltwise_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/EltwiseParameter.txt %}
{% endhighlight %}
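For example (bottom names assumed), an element-wise sum of two blobs of identical shape:

      layer {
        name: "sum"
        type: "Eltwise"
        bottom: "branch1"
        bottom: "branch2"
        top: "sum"
        eltwise_param {
          operation: SUM   # other options: PROD, MAX
        }
      }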
@@ -0,0 +1,25 @@
---
title: ELU Layer
---

# ELU Layer

* Layer type: `ELU`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ELULayer.html)
* Header: [`./include/caffe/layers/elu_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/elu_layer.hpp)
* CPU implementation: [`./src/caffe/layers/elu_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/elu_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/elu_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/elu_layer.cu)

## References

* Clevert, Djork-Arné, Thomas Unterthiner, and Sepp Hochreiter.
  "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)." [arXiv:1511.07289](https://arxiv.org/abs/1511.07289) (2015).

## Parameters

* Parameters (`ELUParameter elu_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/ELUParameter.txt %}
{% endhighlight %}
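ELU computes $$y = x$$ for $$x > 0$$ and $$y = \alpha (e^x - 1)$$ otherwise. A minimal sketch (blob names assumed) using the default $$\alpha = 1$$:

      layer {
        name: "elu1"
        type: "ELU"
        bottom: "conv1"
        top: "conv1"     # may be computed in place
        elu_param {
          alpha: 1.0
        }
      }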
@@ -0,0 +1,20 @@
---
title: Embed Layer
---

# Embed Layer

* Layer type: `Embed`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1EmbedLayer.html)
* Header: [`./include/caffe/layers/embed_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/embed_layer.hpp)
* CPU implementation: [`./src/caffe/layers/embed_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/embed_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/embed_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/embed_layer.cu)

## Parameters

* Parameters (`EmbedParameter embed_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/EmbedParameter.txt %}
{% endhighlight %}
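A hedged example (all dimensions and names are placeholders) mapping integer indices to dense vectors, where `input_dim` is the vocabulary size and `num_output` the embedding dimension:

      layer {
        name: "embed"
        type: "Embed"
        bottom: "word_ids"   # each entry is an index in [0, input_dim)
        top: "word_vecs"
        embed_param {
          input_dim: 10000   # vocabulary size (placeholder)
          num_output: 128    # embedding dimension (placeholder)
          bias_term: false
          weight_filler { type: "uniform" min: -0.08 max: 0.08 }
        }
      }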
@@ -0,0 +1,16 @@
---
title: Euclidean Loss Layer
---

# Sum-of-Squares / Euclidean Loss Layer

* Layer type: `EuclideanLoss`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1EuclideanLossLayer.html)
* Header: [`./include/caffe/layers/euclidean_loss_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/euclidean_loss_layer.hpp)
* CPU implementation: [`./src/caffe/layers/euclidean_loss_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/euclidean_loss_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/euclidean_loss_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/euclidean_loss_layer.cu)

The Euclidean loss layer computes the sum of squares of differences of its two inputs, $$\frac 1 {2N} \sum_{i=1}^N \| x^1_i - x^2_i \|_2^2$$.

## Parameters

Does not take any parameters.
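In use it simply takes a prediction blob and a target blob of the same shape (the names below are placeholders):

      layer {
        name: "loss"
        type: "EuclideanLoss"
        bottom: "pred"    # predictions
        bottom: "label"   # regression targets, same shape as pred
        top: "loss"
      }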
@@ -0,0 +1,24 @@
---
title: Exponential Layer
---

# Exponential Layer

* Layer type: `Exp`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ExpLayer.html)
* Header: [`./include/caffe/layers/exp_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/exp_layer.hpp)
* CPU implementation: [`./src/caffe/layers/exp_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/exp_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/exp_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/exp_layer.cu)

## Parameters

* Parameters (`ExpParameter exp_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/ExpParameter.txt %}
{% endhighlight %}

## See also

* [Power layer](power.html)
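Returning to the parameters above: the layer computes $$y = \text{base}^{\text{shift} + \text{scale} \cdot x}$$, with `base: -1` (the default) interpreted as base $$e$$. A minimal sketch (blob names assumed):

      layer {
        name: "exp"
        type: "Exp"
        bottom: "in"
        top: "out"
        exp_param {
          base: -1     # -1 (the default) means base e
          scale: 1.0
          shift: 0.0
        }
      }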
@@ -0,0 +1,15 @@
---
title: Filter Layer
---

# Filter Layer

* Layer type: `Filter`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1FilterLayer.html)
* Header: [`./include/caffe/layers/filter_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/filter_layer.hpp)
* CPU implementation: [`./src/caffe/layers/filter_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/filter_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/filter_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/filter_layer.cu)

## Parameters

Does not take any parameters.
@@ -0,0 +1,21 @@
---
title: Flatten Layer
---

# Flatten Layer

* Layer type: `Flatten`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1FlattenLayer.html)
* Header: [`./include/caffe/layers/flatten_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/flatten_layer.hpp)
* CPU implementation: [`./src/caffe/layers/flatten_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/flatten_layer.cpp)

The `Flatten` layer is a utility layer that flattens an input of shape `n * c * h * w` to a simple vector output of shape `n * (c*h*w)`.

## Parameters

* Parameters (`FlattenParameter flatten_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/FlattenParameter.txt %}
{% endhighlight %}
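A hedged example (blob names assumed); by default flattening starts at `axis` 1, which preserves the batch dimension:

      layer {
        name: "flatten"
        type: "Flatten"
        bottom: "conv5"
        top: "conv5_flat"
        flatten_param {
          axis: 1   # keep dim 0 (the batch), flatten everything after it
        }
      }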
@@ -0,0 +1,20 @@
---
title: HDF5 Data Layer
---

# HDF5 Data Layer

* Layer type: `HDF5Data`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1HDF5DataLayer.html)
* Header: [`./include/caffe/layers/hdf5_data_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/hdf5_data_layer.hpp)
* CPU implementation: [`./src/caffe/layers/hdf5_data_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/hdf5_data_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/hdf5_data_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/hdf5_data_layer.cu)

## Parameters

* Parameters (`HDF5DataParameter hdf5_data_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/HDF5DataParameter.txt %}
{% endhighlight %}
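Note that `source` is a text file listing HDF5 file paths, not an HDF5 file itself. A sketch with placeholder paths:

      layer {
        name: "data"
        type: "HDF5Data"
        top: "data"
        top: "label"
        hdf5_data_param {
          source: "examples/hdf5/train_files.txt"  # placeholder: text file listing .h5 files
          batch_size: 32
        }
      }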
@@ -0,0 +1,25 @@
---
title: HDF5 Output Layer
---

# HDF5 Output Layer

* Layer type: `HDF5Output`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1HDF5OutputLayer.html)
* Header: [`./include/caffe/layers/hdf5_output_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/hdf5_output_layer.hpp)
* CPU implementation: [`./src/caffe/layers/hdf5_output_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/hdf5_output_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/hdf5_output_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/hdf5_output_layer.cu)

The HDF5 output layer performs the opposite function of the other layers in this section: it writes its input blobs to disk.

## Parameters

* Parameters (`HDF5OutputParameter hdf5_output_param`)
    - Required
        - `file_name`: name of file to write to

* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/HDF5OutputParameter.txt %}
{% endhighlight %}
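A sketch (the output path is a placeholder); the layer takes the blobs to save as bottoms and produces no tops:

      layer {
        name: "save"
        type: "HDF5Output"
        bottom: "data"
        bottom: "label"
        hdf5_output_param {
          file_name: "output.h5"  # placeholder output path
        }
      }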
@@ -0,0 +1,19 @@
---
title: Hinge Loss Layer
---

# Hinge (L1, L2) Loss Layer

* Layer type: `HingeLoss`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1HingeLossLayer.html)
* Header: [`./include/caffe/layers/hinge_loss_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/hinge_loss_layer.hpp)
* CPU implementation: [`./src/caffe/layers/hinge_loss_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/hinge_loss_layer.cpp)

## Parameters

* Parameters (`HingeLossParameter hinge_loss_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/HingeLossParameter.txt %}
{% endhighlight %}
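A hedged example (blob names assumed) selecting the squared (L2) hinge; the default `norm` is L1:

      layer {
        name: "loss"
        type: "HingeLoss"
        bottom: "fc8"     # predicted scores
        bottom: "label"
        top: "loss"
        hinge_loss_param {
          norm: L2        # default is L1
        }
      }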
@@ -0,0 +1,16 @@
---
title: Im2col Layer
---

# im2col

* Layer type: `Im2col`
* Header: [`./include/caffe/layers/im2col_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/im2col_layer.hpp)
* CPU implementation: [`./src/caffe/layers/im2col_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/im2col_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/im2col_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/im2col_layer.cu)

`Im2col` is a helper layer that performs the image-to-column transformation, which you most likely do not need to know about. Caffe's original convolution uses it to lay out all image patches as the columns of a matrix, reducing convolution to a single matrix multiplication.
@@ -0,0 +1,27 @@
---
title: ImageData Layer
---

# ImageData Layer

* Layer type: `ImageData`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ImageDataLayer.html)
* Header: [`./include/caffe/layers/image_data_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/image_data_layer.hpp)
* CPU implementation: [`./src/caffe/layers/image_data_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/image_data_layer.cpp)

## Parameters

* Parameters (`ImageDataParameter image_data_param`)
    - Required
        - `source`: name of a text file, with each line giving an image filename and label
        - `batch_size`: number of images to batch together
    - Optional
        - `rand_skip`
        - `shuffle` [default false]: whether to randomly shuffle the order of images
        - `new_height`, `new_width`: if provided, resize all images to this size

* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/ImageDataParameter.txt %}
{% endhighlight %}
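A sketch with placeholder paths; each line of the source file has the form `path/to/image.jpg label`:

      layer {
        name: "data"
        type: "ImageData"
        top: "data"
        top: "label"
        image_data_param {
          source: "data/train_list.txt"  # placeholder: "<image path> <integer label>" per line
          batch_size: 32
          shuffle: true
          new_height: 256
          new_width: 256
        }
      }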
@@ -0,0 +1,23 @@
---
title: Infogain Loss Layer
---

# Infogain Loss Layer

* Layer type: `InfogainLoss`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InfogainLossLayer.html)
* Header: [`./include/caffe/layers/infogain_loss_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/infogain_loss_layer.hpp)
* CPU implementation: [`./src/caffe/layers/infogain_loss_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/infogain_loss_layer.cpp)

A generalization of [MultinomialLogisticLossLayer](multinomiallogisticloss.html) that takes an "information gain" (infogain) matrix specifying the "value" of all label pairs.

Equivalent to the [MultinomialLogisticLossLayer](multinomiallogisticloss.html) if the infogain matrix is the identity.

## Parameters

* Parameters (`InfogainLossParameter infogain_loss_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/InfogainLossParameter.txt %}
{% endhighlight %}
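A hedged sketch (blob names and path are placeholders) loading the infogain matrix H from a file via `source`:

      layer {
        name: "loss"
        type: "InfogainLoss"
        bottom: "prob"    # predicted probabilities
        bottom: "label"
        top: "loss"
        infogain_loss_param {
          source: "infogain_H.binaryproto"  # placeholder: file holding the matrix H
        }
      }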
@@ -0,0 +1,59 @@
---
title: Inner Product / Fully Connected Layer
---

# Inner Product / Fully Connected Layer

* Layer type: `InnerProduct`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InnerProductLayer.html)
* Header: [`./include/caffe/layers/inner_product_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/inner_product_layer.hpp)
* CPU implementation: [`./src/caffe/layers/inner_product_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/inner_product_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/inner_product_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/inner_product_layer.cu)

* Input
    - `n * c_i * h_i * w_i`
* Output
    - `n * c_o * 1 * 1`
* Sample

      layer {
        name: "fc8"
        type: "InnerProduct"
        # learning rate and decay multipliers for the weights
        param { lr_mult: 1 decay_mult: 1 }
        # learning rate and decay multipliers for the biases
        param { lr_mult: 2 decay_mult: 0 }
        inner_product_param {
          num_output: 1000
          weight_filler {
            type: "gaussian"
            std: 0.01
          }
          bias_filler {
            type: "constant"
            value: 0
          }
        }
        bottom: "fc7"
        top: "fc8"
      }

The `InnerProduct` layer (also usually referred to as the fully connected layer) treats the input as a simple vector and produces an output in the form of a single vector (with the blob's height and width set to 1).

## Parameters

* Parameters (`InnerProductParameter inner_product_param`)
    - Required
        - `num_output` (`c_o`): the number of filters
    - Strongly recommended
        - `weight_filler` [default `type: 'constant' value: 0`]
    - Optional
        - `bias_filler` [default `type: 'constant' value: 0`]
        - `bias_term` [default `true`]: specifies whether to learn and apply a set of additive biases to the filter outputs
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/InnerProductParameter.txt %}
{% endhighlight %}
@@ -0,0 +1,19 @@
---
title: Input Layer
---

# Input Layer

* Layer type: `Input`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InputLayer.html)
* Header: [`./include/caffe/layers/input_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/input_layer.hpp)
* CPU implementation: [`./src/caffe/layers/input_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/input_layer.cpp)

## Parameters

* Parameters (`InputParameter input_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/InputParameter.txt %}
{% endhighlight %}
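In deploy prototxts the `Input` layer declares the network's input blobs and their shapes, e.g. (the shape values are placeholders):

      layer {
        name: "data"
        type: "Input"
        top: "data"
        input_param {
          shape: { dim: 10 dim: 3 dim: 224 dim: 224 }  # batch x channels x height x width
        }
      }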
@@ -0,0 +1,20 @@
---
title: Log Layer
---

# Log Layer

* Layer type: `Log`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1LogLayer.html)
* Header: [`./include/caffe/layers/log_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/log_layer.hpp)
* CPU implementation: [`./src/caffe/layers/log_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/log_layer.cpp)
* CUDA GPU implementation: [`./src/caffe/layers/log_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/log_layer.cu)

## Parameters

* Parameters (`LogParameter log_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/LogParameter.txt %}
{% endhighlight %}
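Analogously to the `Exp` layer, `Log` computes $$y = \log_{\text{base}}(\text{shift} + \text{scale} \cdot x)$$, with `base: -1` (the default) meaning the natural logarithm. A minimal sketch (blob names assumed):

      layer {
        name: "log"
        type: "Log"
        bottom: "in"
        top: "out"
        log_param {
          base: -1    # -1 (the default) means natural logarithm
          scale: 1.0
          shift: 0.0
        }
      }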
@@ -0,0 +1,28 @@
---
title: Local Response Normalization (LRN)
---

# Local Response Normalization (LRN)

* Layer type: `LRN`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1LRNLayer.html)
* Header: [`./include/caffe/layers/lrn_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/lrn_layer.hpp)
* CPU Implementation: [`./src/caffe/layers/lrn_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/lrn_layer.cpp)
* CUDA GPU Implementation: [`./src/caffe/layers/lrn_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/lrn_layer.cu)
* Parameters (`LRNParameter lrn_param`)
    - Optional
        - `local_size` [default 5]: the number of channels to sum over (for cross channel LRN) or the side length of the square region to sum over (for within channel LRN)
        - `alpha` [default 1]: the scaling parameter (see below)
        - `beta` [default 0.75]: the exponent (see below)
        - `norm_region` [default `ACROSS_CHANNELS`]: whether to sum over adjacent channels (`ACROSS_CHANNELS`) or nearby spatial locations (`WITHIN_CHANNEL`)

The local response normalization layer performs a kind of "lateral inhibition" by normalizing over local input regions. In `ACROSS_CHANNELS` mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape `local_size x 1 x 1`). In `WITHIN_CHANNEL` mode, the local regions extend spatially, but are in separate channels (i.e., they have shape `1 x local_size x local_size`). Each input value is divided by $$(1 + (\alpha/n) \sum_i x_i^2)^\beta$$, where $$n$$ is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary).

## Parameters

* Parameters (`LRNParameter lrn_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/LRNParameter.txt %}
{% endhighlight %}
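An AlexNet-style configuration (blob names assumed):

      layer {
        name: "norm1"
        type: "LRN"
        bottom: "conv1"
        top: "norm1"
        lrn_param {
          local_size: 5
          alpha: 0.0001
          beta: 0.75
        }
      }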
@@ -0,0 +1,21 @@
---
title: LSTM Layer
---

# LSTM Layer

* Layer type: `LSTM`
* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1LSTMLayer.html)
* Header: [`./include/caffe/layers/lstm_layer.hpp`](https://github.com/BVLC/caffe/blob/master/include/caffe/layers/lstm_layer.hpp)
* CPU implementation: [`./src/caffe/layers/lstm_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/lstm_layer.cpp)
* CPU implementation (helper): [`./src/caffe/layers/lstm_unit_layer.cpp`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/lstm_unit_layer.cpp)
* CUDA GPU implementation (helper): [`./src/caffe/layers/lstm_unit_layer.cu`](https://github.com/BVLC/caffe/blob/master/src/caffe/layers/lstm_unit_layer.cu)

## Parameters

* Parameters (`RecurrentParameter recurrent_param`)
* From [`./src/caffe/proto/caffe.proto`](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto):

{% highlight Protobuf %}
{% include proto/RecurrentParameter.txt %}
{% endhighlight %}
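A hedged sketch (blob names and hidden size are assumptions): the layer consumes a time-major input blob of shape `T x N x ...` plus a `T x N` sequence-continuation indicator blob, and `num_output` sets the hidden state size:

      layer {
        name: "lstm1"
        type: "LSTM"
        bottom: "x"      # T x N x ... time-major input
        bottom: "cont"   # T x N continuation indicators (0 at the start of each sequence)
        top: "h"
        recurrent_param {
          num_output: 256   # hidden state dimension (placeholder)
          weight_filler { type: "uniform" min: -0.08 max: 0.08 }
          bias_filler { type: "constant" value: 0 }
        }
      }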