EvaluationWindow
dataclass
A single evaluation window on which the forecast accuracy is measured.
Corresponds to a single train/test split of the time series data at the provided cutoff.
Never create EvaluationWindow objects manually. Instead, use Task.iter_windows()
or Task.get_window() to obtain the evaluation windows for a task.
Source code in src/fev/task.py (lines 32-166)
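A minimal sketch of the intended workflow, assuming a Task constructed from a Hugging Face dataset; the dataset path, config, and horizon below are illustrative:

```python
import fev

# Define a forecasting task; the dataset path/config here are illustrative.
task = fev.Task(
    dataset_path="autogluon/chronos_datasets",
    dataset_config="monash_m3_monthly",
    horizon=12,
)

# Obtain EvaluationWindow objects via the task, never by constructing them directly.
for window in task.iter_windows():
    past_data, future_data = window.get_input_data()
    print(window.cutoff, window.horizon)
```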
Attributes

All of the following are instance attributes:

- cutoff: int | str
- horizon: int
- min_context_length: int
- max_context_length: int | None
- id_column: str
- timestamp_column: str
- target_columns: list[str]
- known_dynamic_columns: list[str]
- past_dynamic_columns: list[str]
- static_columns: list[str]
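The cutoff together with the horizon defines the train/test split. Below is a minimal pandas sketch of how a single series would be split, assuming an integer cutoff indexes positions from the end of the series (e.g. -12) and a string cutoff is a timestamp; this is illustrative only, not fev library code:

```python
import pandas as pd

def split_at_cutoff(series: pd.Series, cutoff, horizon: int):
    """Split one series into (past, future) at a cutoff.

    Illustrative only, not fev library code. Assumes an integer cutoff
    counts positions from the end of the series (e.g. -12), a string
    cutoff is a timestamp, and the past includes the cutoff point itself.
    """
    if isinstance(cutoff, int):
        past = series.iloc[:cutoff]
        future = series.iloc[cutoff:]
    else:
        ts = pd.Timestamp(cutoff)
        past = series.loc[series.index <= ts]
        future = series.loc[series.index > ts]
    return past, future.iloc[:horizon]

# Example: monthly series with 36 observations, hold out the last 12.
idx = pd.date_range("2020-01-01", periods=36, freq="MS")
y = pd.Series(range(36), index=idx)
past, future = split_at_cutoff(y, cutoff=-12, horizon=12)
assert len(future) == 12
```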
Functions
get_input_data() -> tuple[datasets.Dataset, datasets.Dataset]
Get data available to the model at prediction time for this evaluation window.
To convert the input data to a different format, use fev.convert_input_data.
Returns:

| Name | Type | Description |
|---|---|---|
| past_data | Dataset | Historical observations up to the cutoff point. Contains: id, timestamps, target values, static covariates, and all dynamic covariates. Columns corresponding to […] |
| future_data | Dataset | Known future information for the forecast horizon. Columns corresponding to […] |
Source code in src/fev/task.py (lines 82-113)
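Continuing the sketch above, a hedged example of consuming the two datasets to produce a naive forecast. The get_window(0) call and the "predictions" key in the output records are assumptions about the surrounding API; only get_input_data() and the attributes listed earlier are documented here:

```python
window = task.get_window(0)  # assumed to take a window index; see Task docs
past_data, future_data = window.get_input_data()

target = window.target_columns[0]
predictions = []
for ts in past_data:
    # Naive forecast: repeat the last observed target value over the horizon.
    last_value = ts[target][-1]
    predictions.append({"predictions": [last_value] * window.horizon})
```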
get_ground_truth() -> datasets.Dataset
Get ground truth future test data.
This data should never be provided to the model!
This is a convenience method that exists for debugging and additional evaluation.
Source code in src/fev/task.py (lines 115-123)
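For a quick sanity check (debugging only), the ground truth can be compared against the naive forecast from the previous sketch; the row alignment between past_data and get_ground_truth() is an assumption worth verifying:

```python
import numpy as np

ground_truth = window.get_ground_truth()  # never feed this to the model
abs_errors = []
for ts, pred in zip(ground_truth, predictions):
    actual = np.asarray(ts[window.target_columns[0]], dtype=float)
    forecast = np.asarray(pred["predictions"], dtype=float)
    abs_errors.append(np.abs(actual - forecast).mean())
print("naive MAE:", float(np.mean(abs_errors)))
```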
compute_metrics(predictions: datasets.DatasetDict, metrics: list[Metric], seasonality: int, quantile_levels: list[float]) -> dict[str, float]
Compute accuracy metrics on the predictions made for this window.
To compute metrics on your predictions, use Task.evaluation_summary instead.
This is a convenience method that exists for debugging and additional evaluation.
Source code in src/fev/task.py (lines 125-166)
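A hedged sketch of calling compute_metrics directly with the signature above. The DatasetDict layout, the fev.metrics.MASE class, and the predictions record format are assumptions; for real evaluations use Task.evaluation_summary as recommended:

```python
import datasets
import fev.metrics

# Assumption: predictions are keyed by target column name in a DatasetDict.
predictions_dict = datasets.DatasetDict(
    {window.target_columns[0]: datasets.Dataset.from_list(predictions)}
)
scores = window.compute_metrics(
    predictions=predictions_dict,
    metrics=[fev.metrics.MASE()],  # assumed metric class; check fev.metrics
    seasonality=1,  # e.g. 12 for monthly data with yearly seasonality
    quantile_levels=[0.1, 0.5, 0.9],
)
print(scores)  # dict mapping metric name to float
```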