
Implementation:Tensorflow Serving Tfrt Regressor

From Leeroopedia
Knowledge Sources
Domains: Model Serving, Regression
Last Updated: 2026-02-13 00:00 GMT

Overview

Provides a TFRT-based regression inference pipeline that validates, executes, and post-processes regression requests against a TFRT SavedModel.

Description

The TFRT Regressor module implements regression inference using the TFRT SavedModel backend with a three-phase pipeline:

  • PreProcessRegression validates that the function metadata has exactly one input tensor named "inputs" (kRegressInputs) and exactly one output tensor named "outputs" (kRegressOutputs).
  • PostProcessRegressionResult validates that the output tensor has the correct shape (either [batch_size] or [batch_size, 1]), is of type DT_FLOAT, and that batch sizes match, then converts the output into a RegressionResult protobuf containing per-example regression values.
  • RunRegress orchestrates the pipeline: it resolves the function name from the request signature, runs pre-processing, serializes the input examples, invokes the TFRT SavedModel, records TFRT runtime latency, and runs post-processing.
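As a rough illustration of the pre-processing phase, the sketch below mirrors the name-and-arity check described above. FunctionMetadata, its field names, and the status enum here are simplified stand-ins for illustration, not the real TFRT or TensorFlow Serving types.

```cpp
#include <string>
#include <vector>

// Simplified stand-ins for the real TFRT types (illustration only).
struct FunctionMetadata {
  std::vector<std::string> input_names;
  std::vector<std::string> output_names;
};

enum class Status { kOk, kInvalidArgument };

// Mirrors the documented contract: exactly one input named "inputs"
// (kRegressInputs) and exactly one output named "outputs" (kRegressOutputs).
Status PreProcessRegressionSketch(const FunctionMetadata& meta) {
  if (meta.input_names.size() != 1 || meta.input_names[0] != "inputs")
    return Status::kInvalidArgument;
  if (meta.output_names.size() != 1 || meta.output_names[0] != "outputs")
    return Status::kInvalidArgument;
  return Status::kOk;
}
```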

Usage

Use this module when serving regression requests through the TFRT runtime. It is called by TfrtSavedModelServable::Regress and during TFRT model warmup. This is the TFRT counterpart to the standard TensorFlow regressor module.

Code Reference

Source Location

  • Repository: Tensorflow_Serving
  • Files:
    • tensorflow_serving/servables/tensorflow/tfrt_regressor.h (lines 1-50)
    • tensorflow_serving/servables/tensorflow/tfrt_regressor.cc (lines 1-164)

Signature

// Validate function's input and output.
Status PreProcessRegression(const tfrt::FunctionMetadata& function_metadata);

// Validate all results and populate a RegressionResult.
Status PostProcessRegressionResult(
    int num_examples, const std::vector<string>& output_tensor_names,
    const std::vector<Tensor>& output_tensors, RegressionResult* result);

// Run Regression.
Status RunRegress(const tfrt::SavedModel::RunOptions& run_options,
                  const absl::optional<int64_t>& servable_version,
                  tfrt::SavedModel* saved_model,
                  const RegressionRequest& request,
                  RegressionResponse* response);

Import

#include "tensorflow_serving/servables/tensorflow/tfrt_regressor.h"

I/O Contract

Inputs

  • run_options (tfrt::SavedModel::RunOptions, required): runtime options for TFRT execution
  • servable_version (absl::optional<int64_t>, optional): model version to set in the response ModelSpec
  • saved_model (tfrt::SavedModel*, required): loaded TFRT SavedModel
  • request (RegressionRequest, required): regression request containing model_spec and input examples

Outputs

  • response (RegressionResponse*): populated with model_spec and a RegressionResult containing per-example float values
  • return (Status): OK on success; InvalidArgument for wrong tensor counts, shapes, or types; FailedPrecondition for missing inputs/outputs
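To make the status semantics concrete, here is a simplified sketch of how the post-processing checks map onto the error codes above. The status enum, function name, and flattened parameters are stand-ins for illustration, not the real implementation.

```cpp
#include <cstdint>
#include <vector>

// Stand-in status codes (illustration only; not the real TF Status type).
enum class Code { kOk, kInvalidArgument, kFailedPrecondition };

// Mirrors the documented checks: the output must exist (else
// FailedPrecondition), be DT_FLOAT, and have shape [batch_size] or
// [batch_size, 1] with a batch size matching the number of input
// examples (else InvalidArgument).
Code CheckRegressionOutput(bool output_present, bool is_float,
                           const std::vector<int64_t>& shape,
                           int64_t num_examples) {
  if (!output_present) return Code::kFailedPrecondition;
  const bool shape_ok =
      (shape.size() == 1 && shape[0] == num_examples) ||
      (shape.size() == 2 && shape[0] == num_examples && shape[1] == 1);
  if (!is_float || !shape_ok) return Code::kInvalidArgument;
  return Code::kOk;
}
```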

Usage Examples

Running Regression

tfrt::SavedModel::RunOptions run_options;
RegressionRequest request;
request.mutable_model_spec()->set_name("my_model");
request.mutable_model_spec()->set_signature_name("regress_x_to_y");
// Populate request.mutable_input()

RegressionResponse response;
Status status = RunRegress(run_options, /*servable_version=*/1,
                           saved_model, request, &response);
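On success, the per-example values can be read from the response. This fragment assumes the standard tensorflow_serving/apis regression protos (RegressionResult holding repeated Regression messages with a float value field); it is a sketch, not a complete program.

```cpp
// Sketch: consume per-example regression values after RunRegress succeeds.
if (status.ok()) {
  for (const auto& regression : response.result().regressions()) {
    float value = regression.value();  // one float per input example
    // ... consume value ...
  }
}
```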
