

Metrics are shown for every inference run of a forecast (we call these Forecast Batches) and are used to evaluate the reliability and accuracy of our model versions. They can be filtered by start and end date, as well as by the values in each column.


Metrics are shown for all of your default forecasts. To include a new forecast in your defaults, search for it, click Save, and then refresh the Metrics page.


Each row of the Metrics table is a unique forecast batch.


Coming Soon: A way for users to filter for a vintage (e.g. Bidclose) and download every forecast from that vintage over a period of time.

Target: The name of the target.
Model #: The model (or version) number that was used to generate the forecasts.
Batch: The primary key for a given forecast run.
Runtime: The time at which the forecast was run; mouse over to see the ISO 8601 timestamp.
Start: The first timestamp that was forecast as part of the run.
End: The last timestamp that was forecast as part of the run.
Expected: The number of forecasts expected to be generated.
Delivered: The number of forecasts actually generated.
Scored: The number of forecasts for which an actual value has since been reported. The system checks for new actuals every 4 hours.
Scored %: The number of scored forecasts as a percentage of the number delivered.
MAE: The Mean Absolute Error of the forecast batch.
RMSE: The Root Mean Squared Error of the forecast batch.
MAPE: The Mean Absolute Percentage Error of the forecast batch.
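For reference, the three error metrics above can be computed from the scored points of a batch as follows. This is a minimal sketch, not the product's internal implementation; the `forecasts` and `actuals` lists and the `batch_metrics` function are hypothetical names for illustration.

```python
import math

def batch_metrics(forecasts, actuals):
    """Compute MAE, RMSE, and MAPE for one forecast batch.

    `forecasts` and `actuals` are equal-length lists of floats covering
    the scored timestamps of the batch (hypothetical inputs).
    """
    errors = [f - a for f, a in zip(forecasts, actuals)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    # MAPE is undefined where the actual is zero; skip those points here.
    pct = [abs(e / a) for e, a in zip(errors, actuals) if a != 0]
    mape = 100 * sum(pct) / len(pct)
    return mae, rmse, mape

mae, rmse, mape = batch_metrics([10.0, 12.0], [11.0, 10.0])
```

Note that because MAPE divides by the actual value, it is sensitive to near-zero actuals; MAE and RMSE are the safer comparison when the target can cross zero.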