Commit d8bf90e

Update PA docs links (#14)
Commit: d8bf90e (parent: 82e8262)

1 file changed

Lines changed: 1 addition & 1 deletion


Conceptual_Guide/Part_2-improving_resource_utilization/README.md

````diff
@@ -150,7 +150,7 @@ tritonserver --model-repository=/models
 
 ### Measuring Performance
 
-Having made some improvements to the model's serving capabilities by enabling `dynamic batching` and the use of `multiple model instances`, the next step is to measure the impact of these features. To that end, the Triton Inference Server comes packaged with the [Performance Analyzer](https://github.com/triton-inference-server/server/blob/main/docs/perf_analyzer.md) which is a tool specifically designed to measure performance for Triton Inference Servers. For ease of use, it is recommended that users run this inside the same container used to run client code in Part 1 of this series.
+Having made some improvements to the model's serving capabilities by enabling `dynamic batching` and the use of `multiple model instances`, the next step is to measure the impact of these features. To that end, the Triton Inference Server comes packaged with the [Performance Analyzer](https://github.com/triton-inference-server/client/blob/main/src/c++/perf_analyzer/README.md) which is a tool specifically designed to measure performance for Triton Inference Servers. For ease of use, it is recommended that users run this inside the same container used to run client code in Part 1 of this series.
 ```
 docker run -it --net=host -v ${PWD}:/workspace/ nvcr.io/nvidia/tritonserver:yy.mm-py3-sdk bash
 ```
````
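Once inside the SDK container started by the `docker run` command in the diff above, Performance Analyzer can be pointed at a model deployed on the server. The sketch below is illustrative, not part of this commit: the model name `text_recognition`, the batch size, and the concurrency sweep are assumptions to show the shape of an invocation.

```shell
# Hypothetical invocation: sweep client concurrency from 2 to 16 in steps of 2
# against a model named "text_recognition" (substitute your own model name),
# sending batches of 2 and reporting 95th-percentile latency.
perf_analyzer -m text_recognition -b 2 --concurrency-range 2:16:2 --percentile=95
```

This depends on a Triton server already running with the model loaded (as set up earlier in Part 2), so it is a usage fragment rather than a standalone script.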
