Updated docs #314

Merged: 52 commits from shreyas-docs into main on Nov 3, 2020

Conversation

dtrawins (Collaborator)

No description provided.

shreyas-chaudhari and others added 30 commits October 12, 2020 10:36
…cumentation

Added new documentation files
…cumentation

OpenVINO model server documentation
| -n NETWORK_NAME, --network_name NETWORK_NAME | Network name |
| -l INPUT_LAYER, --input_layer INPUT_LAYER | Input layer name |
| -o OUTPUT_LAYER, --output_layer OUTPUT_LAYER | Output layer name |
| -d INPUT_DIMENSION, --input_dimension INPUT_DIMENSION | Input image dimension |

Review comment (Collaborator):
--frame_size instead of --input_dimension


### Step 4: Start the Model Server Container

Start the Model Server container:

```bash
docker run -d -v <folder_with_downloaded_model>:/models/face-detection/1 -p 9000:9000 openvino/model_server:latest \
docker run -d -v <folder_with_downloaded_model>:/models/face-detection -p 9000:9000 openvino/model_server:latest \
```

Review comment (Collaborator):
This will not work, no models/face-detection directory
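
A possible fix, sketched under the assumptions implied by the two variants above: keep the numbered version subdirectory from the first variant and pass the model location explicitly with the flags used elsewhere in these docs. The paths and the `<folder_with_downloaded_model>` placeholder are illustrative.

```bash
# Sketch only: create the expected repository layout (model directory with a
# numbered version subfolder) before starting the container.
mkdir -p models/face-detection/1
cp <folder_with_downloaded_model>/* models/face-detection/1/

docker run -d -v $(pwd)/models/face-detection:/models/face-detection -p 9000:9000 \
  openvino/model_server:latest \
  --model_path /models/face-detection --model_name face-detection --port 9000
```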

gRPC code skeleton is created based on TensorFlow Serving core framework with tunned implementation of requests handling.
Services are designed via set of C++ classes managing AI models in Intermediate Representation
format. OpenVINO Inference Engine component executes the graphs operations.
- OpenVINO&trade; Model Server uses [Inference Engine](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_inference_engine_intro.html) libraries from OpenVINO&trade; toolkit in the backend, which speeds up the execution on CPU and enables it on AI accelerators like [Neural Compute Stick 2](https://software.intel.com/content/www/us/en/develop/hardware/neural-compute-stick.html), iGPU(Integrated Graphics Processing Unit), [HDDL](https://docs.openvinotoolkit.org/2018_R5/_docs_IE_DG_supported_plugins_HDDL.html) and FPGAs.

Review comment (Collaborator):
FPGA?


- OpenVINO&trade; Model Server requires the models to be present in the local file system or they could be hosted remotely on object storage services. Both Google Cloud Storage and S3 compatible storage are supported. Refer to [Preparing the Models Repository](./models_repository.md) for more details.

- OpenVINO&trade; Model Server is suitable for landing in Kubernetes environment. It can be also hosted on a bare metal server, virtual machine or inside a docker container. It is also suitable for landing in Kubernetes environment.

Review comment (Collaborator):
suitable for landing in Kubernetes environment - mentioned twice
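
For the remote-storage option quoted above, a minimal sketch of pointing the server at S3 compatible storage; the bucket location is hypothetical and the AWS_* credential variables are an assumption about how the storage backend is configured.

```bash
# Sketch: model hosted on object storage instead of the local file system.
# s3://my-bucket/face-detection is a hypothetical location; credentials are placeholders.
docker run -d -p 9000:9000 \
  -e AWS_ACCESS_KEY_ID="<key>" -e AWS_SECRET_ACCESS_KEY="<secret>" \
  openvino/model_server:latest \
  --model_path s3://my-bucket/face-detection --model_name face-detection --port 9000
```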

* *PredictResponse* includes a map of outputs serialized by
[TensorProto](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor.proto) and information about the used model spec.

There are two ways in which gRPC request can be submitted for Predict API:

Review comment (Collaborator):
This is not true

@@ -1,165 +1,115 @@
# Using the OpenVINO&trade; Model Server in a Docker Container
# Installing OpenVINO&trade; Model Server for Linux using Docker Container

Review comment (Collaborator):
Installing using docker?

README.md Outdated
@@ -17,23 +17,23 @@ Review the [Architecture concept](docs/architecture.md) document for more detail

A few key features:
- Support for multiple frameworks. Serve models trained in popular formats such as Caffe*, TensorFlow*, MXNet* and ONNX*.
- Deploy new [model versions](https://github.com/openvinotoolkit/model_server/blob/main/docs/docker_container.md#model-version-policy) without changing client code.
- Deploy new [model versions](docs/ModelVersionPolicy.md) without changing client code.

Review comment (Collaborator):
without OVMS restart

* Pre-defined Node Types
* Other Node Types
* <a href="#example">Example Use Case</a>
1. Prepare the models

Review comment (Collaborator):
i, ii, iii, iv, v?

### Other node types
Internal pipeline nodes are created by user. Currently there is only one node type that a user can create:
* DL model
- This node contains underlying OpenVINO&trade; model and performs inference on selected target device. This can be defined in configuration file. Each model input needs to be mapped to some node's `data_item` - be it input from gRPC/REST request or another `DL model` output. Results of this node's inference may be mapped to another node's input or `response` node meaning it will be exposed in gRPC/REST response.

Review comment (Collaborator):
"be it input from" ?

python3 tests/models/argmax_sum.py --input_size 1001 --export_dir ~/models/tf_argmax
3. Prepare argmax model with `(1, 1001)` input shapes to match output of googlenet and resnet output shapes. Generated model will sum inputs and calculate the index with the highest value. The model output will indicate the most likely predicted class from the ImageNet* dataset. <a name="point-3"></a>
```
~$ python3 tests/models/argmax_sum.py --input_size 1001 --export_dir models/public/argmax/saved_model
```

Review comment (@pgierasi, Oct 22, 2020):
incorrect export directory, should be under ~/models/
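
Presumably the corrected command matches the variant already shown earlier in this diff, exporting under ~/models/:

```bash
# Export directory placed under ~/models/, as requested in the review (sketch).
python3 tests/models/argmax_sum.py --input_size 1001 --export_dir ~/models/tf_argmax
```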

{"specific": { "versions": List }}
Examples:

{"latest": { "num_versions":2 }} # server will serve only 2 latest versions of model

Review comment (Collaborator):
new lines not visible in browser
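
As a readability aid for the policy examples quoted above, a sketch of placing such a policy in a config file and starting the server with it; the model name, paths and mount points are assumptions for illustration.

```bash
# Sketch: write a config.json with a "latest: 2 versions" policy and start the server with it.
cat > config.json <<'EOF'
{
  "model_config_list": [
    {
      "config": {
        "name": "face-detection",
        "base_path": "/models/face-detection",
        "model_version_policy": {"latest": {"num_versions": 2}}
      }
    }
  ]
}
EOF

docker run -d -v $(pwd)/config.json:/config.json -v $(pwd)/models:/models -p 9000:9000 \
  openvino/model_server:latest --config_path /config.json --port 9000
```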

## Updating model versions
- Served versions are updated online by monitoring file system changes in the model storage. OpenVINO Model Server will add new version to the serving list when new numerical subfolder with the model files is added. The default served version will be switched to the one with the highest number.

- When the model version is deleted from the file system, it will become unavailable on the server and it will release RAM allocation. Updates in the deployed model version files will not be detected and they will not trigger changes in serving.

Review comment (Collaborator):
Updates in the deployed deleted model?
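
A short sketch of the update flow described above, assuming a model already served from models/face-detection with version 1 on disk:

```bash
# Publish a new version: the server detects the new numbered subfolder and serves it
# as the default (highest) version.
mkdir -p models/face-detection/2
cp /path/to/new/model.xml /path/to/new/model.bin models/face-detection/2/

# Retire a version: removing its directory makes it unavailable and releases its RAM.
rm -rf models/face-detection/1
```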


3. Run the above code snippet to send POST API request to predict results by providing formatted json as request body using the command :
```Bash
python3 sample.py
```

Review comment (Collaborator):
example not working
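
Since the sample.py example is reported as not working, a hedged alternative is a plain curl call against the TensorFlow Serving compatible REST API; the model name, the REST port (8000, i.e. a server started with --rest_port 8000) and the input payload are assumptions for illustration.

```bash
# Sketch: REST Predict request. Assumes a model named "face-detection" and a REST
# endpoint on port 8000; the input tensor contents below are placeholders.
curl -X POST http://localhost:8000/v1/models/face-detection:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [{"data": [0.0, 0.0, 0.0]}]}'
```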


```bash
curl --create-dirs https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/1/face-detection-retail-0004.xml -o model/1/face-detection-retail-0004.bin
docker run -d -v $(pwd)/model:/models -p 9000:9000 openvino/model_server:latest --model_path /models --model_name face-detection --port 9000 --shape auto
```

Review comment (Collaborator):
[2020-10-22 21:47:38.396] [serving] [error] Couldn't list directories in path: /models
[2020-10-22 21:47:38.396] [serving] [error] Couldn't start model manager
[2020-10-22 21:47:38.396] [serving] [error] ovms::ModelManager::Start() Error: The provided base path is invalid or doesn't exists
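
The root cause is not identified in the thread, but one hedged workaround is to create the local model repository explicitly before mounting it, so that the mounted /models path exists and already contains a numbered version directory:

```bash
# Sketch only: same URLs and flags as the quoted snippet, with the directory created up front.
BASE=https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32
mkdir -p model/1
curl $BASE/face-detection-retail-0004.xml -o model/1/face-detection-retail-0004.xml
curl $BASE/face-detection-retail-0004.bin -o model/1/face-detection-retail-0004.bin

docker run -d -v $(pwd)/model:/models -p 9000:9000 openvino/model_server:latest \
  --model_path /models --model_name face-detection --port 9000 --shape auto
```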

docs/host.md Outdated
> * An internet connection is required to follow the steps in this guide.

## Introduction
OpenVINO&trade; Model Server is a Python* implementation of gRPC and RESTful API interfaces defined by Tensorflow serving.

Review comment (Collaborator):
is a Python implementation?

dtrawins requested a review from @mzegla on October 28, 2020.

Review from @mzegla (Collaborator):
From my side it's okay, but I see you didn't make all the changes requested by @pgierasi in example_client/README.md

Reply from @dtrawins (Collaborator, Author):

@mzegla @pgierasi all requested changes in example_client/README.md are already added

dtrawins merged commit 52d06ef into main on Nov 3, 2020.
mgumowsk deleted the shreyas-docs branch on December 15, 2020.