
Implementation:Liu00222 Open Prompt Injection eval helper

From Leeroopedia
Domains Evaluation, NLP
Last Updated 2026-02-14 15:00 GMT

Overview

A concrete evaluation dispatcher that scores a model response against a ground-truth label or another response, provided by the OpenPromptInjection evaluator utils module.

Description

The eval_helper function is the core evaluation building block. It dispatches to a task-specific evaluation function (eval_sst2, eval_spam, eval_hsol, eval_mrpc, eval_rte, eval_gigaword) based on the dataset name. For classification tasks, it parses the model output into a label and performs an exact match against the reference. For the generation task (gigaword), it computes a ROUGE-1 F-score. It supports two modes: label comparison (`dp2_is_label=True`, used for PNA-T, PNA-I, and ASV) and response comparison (`dp2_is_label=False`, used for MR calculation).
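The dispatch pattern described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the library's source: `parse_sst2_sketch` is a hypothetical stand-in for the label parsing inside the real `eval_sst2`, and only the sst2 branch is shown.

```python
# Simplified sketch of the dispatch pattern (illustration only, not the
# OpenPromptInjection source). parse_sst2_sketch stands in for the real
# label-parsing logic inside eval_sst2.

def parse_sst2_sketch(response):
    """Hypothetical parser: map a free-text sentiment response to a binary label."""
    text = response.lower()
    if "positive" in text:
        return 1
    if "negative" in text:
        return 0
    return -1  # unparseable response; never matches a valid label

def eval_helper_sketch(dataset_name, dp1, dp2, dp2_is_label=True):
    if dataset_name == "sst2":
        label1 = parse_sst2_sketch(dp1)
        # Label mode: dp2 is already an int label.
        # Response mode (MR): parse dp2 the same way as dp1.
        label2 = dp2 if dp2_is_label else parse_sst2_sketch(dp2)
        return label1 == label2
    # The real function also handles sms_spam, hsol, mrpc, rte, gigaword, ...
    raise ValueError(f"unsupported dataset: {dataset_name}")
```

The same shape repeats for the other classification tasks, with only the parsing step changing per dataset.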

Usage

This function is called internally by the Evaluator's metric calculation methods (`__calc_PNA_T`, `__calc_PNA_I`, `__calc_ASV`, `__calc_MR`). It can also be used directly for custom evaluation needs.
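A metric loop in the style of `__calc_MR` can be sketched as below; this is a hedged illustration, not the Evaluator's actual code. The scoring function is passed in as a parameter so the sketch stays self-contained; with the real library it would be `eval_helper` called with `dp2_is_label=False`.

```python
# Hedged sketch of an MR-style metric loop: average pairwise
# response-comparison scores over attacked/clean response pairs.
# score_fn is injected so the sketch is self-contained; in real use it
# would be eval_helper with dp2_is_label=False.

def matching_rate(score_fn, dataset_name, attacked_responses, clean_responses):
    scores = [
        float(score_fn(dataset_name, a, c, dp2_is_label=False))
        for a, c in zip(attacked_responses, clean_responses)
    ]
    return sum(scores) / len(scores)
```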

Code Reference

Source Location

Signature

def eval_helper(dataset_name, dp1, dp2, dp2_is_label=True):
    """
    Task-specific evaluation of a response against a label or another response.

    Args:
        dataset_name (str): One of 'sst2', 'sms_spam', 'hsol', 'mrpc', 'rte',
                            'gigaword', 'compromise'.
        dp1 (str): Model response to evaluate.
        dp2: Ground truth label (int/str) or another response (str).
        dp2_is_label (bool): True for label comparison (PNA-T/PNA-I/ASV),
                             False for response comparison (MR).
    Returns:
        float or bool: 1/0 for classification, ROUGE-1 F-score for gigaword.
    """

Import

from OpenPromptInjection.evaluator.utils import eval_helper

I/O Contract

Inputs

| Name | Type | Required | Description |
|---|---|---|---|
| `dataset_name` | str | Yes | Dataset identifier: `'sst2'`, `'sms_spam'`, `'hsol'`, `'mrpc'`, `'rte'`, `'gigaword'`, `'compromise'` |
| `dp1` | str | Yes | Model response string |
| `dp2` | str, int, or float | Yes | Ground truth label or reference response |
| `dp2_is_label` | bool | No | `True` for label comparison, `False` for response comparison (default `True`) |

Outputs

| Name | Type | Description |
|---|---|---|
| score | float or bool | 1.0/0.0 (True/False) for classification tasks; ROUGE-1 F-score (0.0-1.0) for gigaword |

Usage Examples

Evaluating a Classification Response

from OpenPromptInjection.evaluator.utils import eval_helper

# Compare response to ground truth label (PNA-T/ASV mode)
score = eval_helper("sst2", "The sentiment is positive.", 1, dp2_is_label=True)
# Returns: True (eval_sst2 extracts "positive" -> 1, matches label 1)

# Compare two responses (MR mode)
score = eval_helper("sst2", "positive", "The answer is positive", dp2_is_label=False)
# Returns: True (both parse to label 1)

Evaluating a Generation Response

score = eval_helper("gigaword", "us stocks rise on earnings", "us stocks climb on strong earnings", dp2_is_label=True)
# Returns: ROUGE-1 F-score (e.g., 0.72)
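The gigaword score can be understood with a simplified worked computation. The sketch below assumes plain whitespace tokenization and unigram multiset overlap; the library's actual ROUGE implementation may differ (stemming, tokenization details).

```python
from collections import Counter

def rouge1_f_sketch(candidate, reference):
    # Simplified ROUGE-1 F1: unigram multiset overlap, no stemming or
    # stopword handling (the library's ROUGE implementation may differ).
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# For the example above: overlapping unigrams are {us, stocks, on, earnings},
# so precision = 4/5, recall = 4/6, F1 = 2*(4/5)*(4/6) / (4/5 + 4/6) ≈ 0.727
```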

Related Pages

Implements Principle

Page Connections
