Implementation: Shiyu_coder Kronos WebUI App
| Knowledge Sources | |
|---|---|
| Domains | Web_Application, Financial_Prediction, Visualization |
| Last Updated | 2026-02-09 14:00 GMT |
Overview
Flask web application backend for the Kronos Web UI that provides REST API endpoints for loading financial data, managing the Kronos model lifecycle, and executing candlestick predictions with interactive Plotly chart visualization.
Description
The WebUI App is the central backend component of the Kronos web interface. It creates a Flask application with CORS support and exposes REST API routes for:
- Scanning and loading financial data files (CSV/Feather formats)
- Loading one of three Kronos model variants (mini at 4.1M params, small at 24.7M, base at 102.3M)
- Executing predictions using KronosPredictor.predict() with configurable parameters (lookback, prediction length, temperature, top_p, sample_count)
- Generating interactive Plotly candlestick charts comparing predicted vs actual data
- Persisting prediction results as timestamped JSON files
The application stores tokenizer, model, and predictor as global state and serves on port 7070.
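The data-file scanning step described above can be sketched as follows. This is a minimal illustration, not the module's actual implementation; the function name `scan_data_files` and the extension set are hypothetical, and the real `load_data_files()` may filter or describe files differently.

```python
from pathlib import Path

# Formats the WebUI accepts, per the description above (assumption: matched by suffix).
SUPPORTED_EXTS = {".csv", ".feather"}

def scan_data_files(data_dir: str) -> list:
    """Return sorted relative paths of supported data files under data_dir."""
    root = Path(data_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.suffix.lower() in SUPPORTED_EXTS
    )
```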
Usage
Use this module to run a web-based interface for Kronos time-series prediction. It bridges the Kronos ML model and a browser-based frontend, handling data loading, model orchestration, prediction execution, chart generation, and result persistence.
Code Reference
Source Location
- Repository: Shiyu_coder_Kronos
- File: webui/app.py
- Lines: 1-708
Signature
# Flask application module: key functions and routes

def load_data_files() -> list:
    """Scan data directory and return available data files."""

def load_data_file(file_path: str) -> tuple:
    """Load CSV/Feather file into DataFrame with OHLC validation."""

def save_prediction_results(
    file_path: str,
    prediction_type: str,
    prediction_results: list,
    actual_data: list,
    input_data: pd.DataFrame,
    prediction_params: dict,
) -> str:
    """Save prediction results to timestamped JSON file."""

def create_prediction_chart(
    df: pd.DataFrame,
    pred_df: pd.DataFrame,
    lookback: int,
    pred_len: int,
    actual_df: pd.DataFrame = None,
    historical_start_idx: int = 0,
) -> str:
    """Create Plotly candlestick chart as JSON string."""
# Routes:
# GET / → index page
# GET /api/data-files → list available data files
# POST /api/load-data → load and validate data file
# POST /api/predict → run prediction workflow
# POST /api/load-model → load Kronos model variant
# GET /api/available-models → list model configurations
# GET /api/model-status → check model loaded status
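The OHLC validation that `/api/load-data` performs could look roughly like the helper below. This is a hedged sketch: `validate_ohlc` and `REQUIRED_OHLC` are hypothetical names, and the real route may normalize column names or handle volume differently.

```python
import pandas as pd

# Columns the prediction workflow needs (assumption: checked case-insensitively).
REQUIRED_OHLC = ("open", "high", "low", "close")

def validate_ohlc(df: pd.DataFrame) -> pd.DataFrame:
    """Raise ValueError if any required OHLC column is absent."""
    present = {c.lower() for c in df.columns}
    missing = [c for c in REQUIRED_OHLC if c not in present]
    if missing:
        raise ValueError(f"data file is missing OHLC columns: {missing}")
    return df
```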
Import
# This is a standalone Flask application, not typically imported.
# Run directly:
# python webui/app.py
# Or import for testing:
from webui.app import app, load_data_file, create_prediction_chart
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| file_path | str | Yes | Path to CSV or Feather data file with OHLC columns |
| lookback | int | No | Number of historical data points (default: 400) |
| pred_len | int | No | Number of future points to predict (default: 120) |
| temperature | float | No | Sampling temperature for generation (default: 1.0) |
| top_p | float | No | Top-p nucleus sampling threshold (default: 0.9) |
| sample_count | int | No | Number of prediction samples (default: 1) |
| model_key | str | No | Model variant: kronos-mini, kronos-small, kronos-base (default: kronos-small) |
| device | str | No | Compute device: cpu or cuda (default: cpu) |
| start_date | str | No | Optional start date for custom time window |
Outputs
| Name | Type | Description |
|---|---|---|
| chart | str (JSON) | Plotly candlestick chart encoded as JSON string |
| prediction_results | list[dict] | List of predicted OHLCV candle dicts with timestamps |
| actual_data | list[dict] | Ground truth OHLCV data for comparison (if available) |
| prediction JSON file | File | Timestamped JSON file saved to webui/prediction_results/ |
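The timestamped-JSON persistence in the last row could be implemented along these lines. A minimal sketch only: the filename pattern, helper name, and payload layout are assumptions, not the actual `save_prediction_results()` behavior.

```python
import json
import time
from pathlib import Path

def save_results(out_dir: str, payload: dict) -> str:
    """Write a prediction payload to a timestamped JSON file; return its path."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")  # assumed filename convention
    path = out / f"prediction_{stamp}.json"
    path.write_text(json.dumps(payload, indent=2))
    return str(path)
```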
Usage Examples
Starting the Web Server
cd /path/to/Kronos
python webui/app.py
# Server starts on http://0.0.0.0:7070
Loading a Model via API
import requests
# Load the Kronos-small model on CPU
response = requests.post("http://localhost:7070/api/load-model", json={
    "model_key": "kronos-small",
    "device": "cpu",
})
print(response.json())
# {'success': True, 'message': 'Model loaded successfully: Kronos-small (24.7M) on cpu', ...}
Running a Prediction
import requests
# Run prediction on a data file
response = requests.post("http://localhost:7070/api/predict", json={
    "file_path": "/path/to/BTC_USDT-5m.csv",
    "lookback": 400,
    "pred_len": 120,
    "temperature": 1.0,
    "top_p": 0.9,
    "sample_count": 1,
})
result = response.json()
print(f"Generated {len(result['prediction_results'])} prediction points")
print(f"Has comparison data: {result['has_comparison']}")
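For downstream analysis, the `prediction_results` list of candle dicts can be turned into a time-indexed DataFrame. This is a sketch under one assumption: each dict carries a "timestamps" key alongside its OHLCV fields; adjust the key name to whatever the actual response payload uses.

```python
import pandas as pd

def results_to_frame(prediction_results: list) -> pd.DataFrame:
    """Convert the API's list of candle dicts into a time-indexed DataFrame.

    Assumes each dict has a "timestamps" key plus OHLCV fields (assumption).
    """
    df = pd.DataFrame(prediction_results)
    df["timestamps"] = pd.to_datetime(df["timestamps"])
    return df.set_index("timestamps").sort_index()
```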