Implementation: TensorFlow Serving Multi-Inference Test
| Knowledge Sources | |
|---|---|
| Domains | Testing, Inference |
| Last Updated | 2026-02-13 00:00 GMT |
Overview
Test suite validating the TensorFlowMultiInferenceRunner class and the RunMultiInference function, both of which execute multiple inference tasks against a single model.
Description
This test file validates two code paths for multi-inference: the TensorFlowMultiInferenceRunner class and the direct RunMultiInference function. The typed test suite is parameterized on TF1 vs TF2 model types and sets up a ServerCore with the half_plus_two SavedModel. Each test obtains a ServableHandle<SavedModelBundle> and creates a TensorFlowMultiInferenceRunner to execute multi-inference requests.
The tests validate:
- Both the runner and the direct-function code paths (each test exercises both)
- Input validation and signature resolution
- Model spec consistency across inference tasks
- Successful single and multiple regression/classification operations
- Thread pool options propagation
Usage
Run these tests to validate changes to the multi-inference execution path, including the runner class and the standalone RunMultiInference function.
Code Reference
Source Location
- Repository: Tensorflow_Serving
- File: tensorflow_serving/servables/tensorflow/multi_inference_test.cc
- Lines: 1-450
Test Fixture
template <typename T>
class MultiInferenceTest : public ::testing::Test {
 public:
  static void SetUpTestSuite() {
    SetSignatureMethodNameCheckFeature(UseTf1Model());
    TF_ASSERT_OK(CreateServerCore(&server_core_));
  }

 protected:
  absl::Status GetInferenceRunner(
      std::unique_ptr<TensorFlowMultiInferenceRunner>* inference_runner) {
    ServableHandle<SavedModelBundle> bundle;
    ModelSpec model_spec;
    model_spec.set_name(kTestModelName);
    TF_RETURN_IF_ERROR(GetServerCore()->GetServableHandle(model_spec, &bundle));
    inference_runner->reset(new TensorFlowMultiInferenceRunner(
        bundle->session.get(), &bundle->meta_graph_def,
        {this->servable_version_}));
    return absl::OkStatus();
  }

  absl::Status GetServableHandle(ServableHandle<SavedModelBundle>* bundle);
};
Build Target
bazel test //tensorflow_serving/servables/tensorflow:multi_inference_test
Test Coverage
Key Test Cases
| Test Name | Category | Description |
|---|---|---|
| MissingInputTest | Validation | Tests error on empty input for both runner and direct paths |
| UndefinedSignatureTest | Validation | Tests error on non-existent signature |
| InconsistentModelSpecsInRequestTest | Validation | Tests error when tasks reference different models |
| EvaluateDuplicateSignaturesTest | Validation | Tests error on duplicate signatures in request |
| UsupportedSignatureTypeTest | Validation | Tests error on unsupported method types |
| ValidSingleSignatureTest | Integration | Tests successful single regression via runner and direct path |
| MultipleValidRegressSignaturesTest | Integration | Tests multiple regression signatures in one request |
| RegressAndClassifySignaturesTest | Integration | Tests mixed regression and classification signatures |
| ThreadPoolOptions | Integration | Tests thread pool options propagation |
Usage Examples
Test Pattern
TYPED_TEST_P(MultiInferenceTest, MissingInputTest) {
  std::unique_ptr<TensorFlowMultiInferenceRunner> inference_runner;
  TF_ASSERT_OK(this->GetInferenceRunner(&inference_runner));

  MultiInferenceRequest request;
  PopulateTask("regress_x_to_y", kRegressMethodName, request.add_tasks());

  MultiInferenceResponse response;
  // Test via runner
  ExpectStatusError(
      inference_runner->Infer(RunOptions(), request, &response),
      absl::StatusCode::kInvalidArgument, "Input is empty");

  // Test via direct function
  ServableHandle<SavedModelBundle> bundle;
  TF_ASSERT_OK(this->GetServableHandle(&bundle));
  ExpectStatusError(
      RunMultiInference(RunOptions(), bundle->meta_graph_def,
                        this->servable_version_, bundle->session.get(),
                        request, &response),
      absl::StatusCode::kInvalidArgument, "Input is empty");
}