Principle: TensorFlow Serving Kubernetes Service Query
| Knowledge Sources | |
|---|---|
| Domains | Inference, Kubernetes |
| Last Updated | 2026-02-13 17:00 GMT |
Overview
An inference query technique that sends gRPC predict requests to a TensorFlow Serving deployment exposed via a Kubernetes LoadBalancer service.
Description
Querying a Kubernetes-deployed TensorFlow Serving instance uses the same gRPC client pattern as querying a local server, but targets the LoadBalancer's external IP. The client:
- Creates a gRPC channel to the LoadBalancer external IP on port 8500
- Constructs a PredictRequest with model name, signature, and input tensors
- Sends the request via PredictionServiceStub.Predict()
- Parses the response output tensors
The LoadBalancer distributes requests across all healthy pods, providing horizontal scaling. Clients are unaware of the number of replicas.
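The final step above, parsing the response output tensor, reduces to an argmax over the returned scores. A minimal standalone sketch with NumPy (the logits are made up so the step can run without a live server; in a real client they would come from the response's output tensor, e.g. via `tf.make_ndarray(response.outputs[...])`):

```python
import numpy as np

# Hypothetical logits for 5 classes, standing in for the decoded
# contents of a response output tensor.
scores = np.array([[0.01, 0.02, 0.90, 0.05, 0.02]], dtype=np.float32)

# Reduce the (batch, classes) tensor to a class index for the first item.
predicted_class = int(np.argmax(scores, axis=-1)[0])
print(predicted_class)  # → 2
```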
Usage
Use after deploying the Kubernetes resources and obtaining the LoadBalancer external IP (shown in the EXTERNAL-IP column of `kubectl get service`). The client code is identical whether targeting a local server, a Docker container, or a Kubernetes deployment; only the server address changes.
Theoretical Basis
```
# Abstract Kubernetes query pattern (NOT real implementation)
external_ip = kubectl_get_service_ip("resnet-service")
channel = create_grpc_channel(f"{external_ip}:8500")
stub = PredictionServiceStub(channel)
request = build_predict_request(
    model="resnet",
    signature="serving_default",
    input_tensor=preprocess_image(image),
)
response = stub.Predict(request, timeout=10.0)
predicted_class = argmax(response.outputs["activation_49"])
```