Dashboard Forecasts

Use this tutorial if you're planning to extract, transform, and load our data into a data warehouse. This endpoint returns more comprehensive data than our /latest API, but the response requires some extra parsing before loading it into a typical data table.

The dashboard forecasts API returns all of the forecasts for your saved price nodes. It can provide forecasts from multiple machine learning models and many vintages.

note

We highly recommend accessing our APIs from a service or developers account if you're planning to use the responses in production. If you don't have an account, please reach out!

Confirm Access

  1. Log into the application at https://app.enertel.ai/app/grid
  2. Search for and select a price node and series in which you're interested
  3. Save the selection

You have just added a favourite forecast to your saved dashboard. Saved forecasts are the default response of the dashboard forecasts API, which the rest of this tutorial covers.

Create a Token

  1. Click the user icon in the top right, create a token, and save it somewhere for safekeeping!

Accessing the Endpoint


import pandas as pd
import requests

token = '<api-token>'

start = '2024-05-17'  # isoformat date
end = '2024-05-20'  # isoformat date
models = None  # defaults to our currently promoted 'best' model, if None is given
features = None  # defaults to your saved defaults, if None is given

url = "https://app.enertel.ai/api/dashboard/forecasts"
params = {
    "start": start,
    "end": end,
}

if models:
    params["models"] = ",".join([str(m) for m in models])

if features:
    params["features"] = ",".join([str(f) for f in features])

resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params=params,
)

print(resp.json())


Interpreting the Response

Instead of nested JSON, you may want a flat dataframe that includes all the relevant information. Here it is!
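For orientation, the response is a list of price-node objects nesting targets → vintages → batches → forecasts. A minimal sketch of that shape, with field names taken from the parsing loop below; all values, and the innermost forecast fields (`timestamp`, `value`), are illustrative assumptions:

```python
# Hypothetical example of the nested response shape. The innermost
# forecast fields ("timestamp", "value") are assumptions; the real
# payload may carry different or additional fields per forecast.
example_response = [
    {
        "object_name": "HB_NORTH",  # a saved price node
        "targets": [
            {
                "target_id": 1,
                "description": "ERCOT DALMP 72 hours ahead",
                "vintages": [
                    {
                        "scheduled_at": "2024-05-17T06:00:00Z",
                        "batches": [
                            {
                                "batch_id": 10,
                                "model_id": 3,
                                "feature_id": 7,
                                "forecasts": [
                                    {"timestamp": "2024-05-18T00:00:00Z", "value": 25.4},
                                ],
                            },
                        ],
                    },
                ],
            },
        ],
    },
]
```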

forecasts = []

for obj in resp.json():  # each object is a price node
    for target in obj["targets"]:  # each target is a unique iso/data series/horizon (for example, ERCOT DALMP 72 hours ahead)
        for vintage in target["vintages"]:  # each vintage is a unique time the models were scheduled to generate forecasts
            for batch in vintage["batches"]:  # each batch is a unique model_id/feature_id combination (many models can run for the same feature and vintage)
                for forecast in batch["forecasts"]:  # each forecast is a unique timestamp in the horizon
                    forecasts.append({
                        "object_name": obj["object_name"],
                        "target_id": target["target_id"],
                        "target_description": target["description"],
                        "scheduled_at": vintage["scheduled_at"],
                        "batch_id": batch["batch_id"],
                        "model_id": batch["model_id"],
                        "feature_id": batch["feature_id"],
                        **forecast,
                    })

df = pd.DataFrame(forecasts)

df.head()
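Once flattened, a common next step is to parse the timestamps and keep only the most recent vintage per target. A sketch using a tiny hand-made dataframe with the same columns as above (the `value` column stands in for an assumed forecast field):

```python
import pandas as pd

# Toy stand-in for the flattened forecasts dataframe built above.
df = pd.DataFrame([
    {"target_id": 1, "scheduled_at": "2024-05-17T06:00:00Z", "value": 25.4},
    {"target_id": 1, "scheduled_at": "2024-05-17T12:00:00Z", "value": 26.1},
])

# Parse the ISO-format strings into proper timestamps.
df["scheduled_at"] = pd.to_datetime(df["scheduled_at"])

# Keep only rows from the latest vintage of each target.
latest = df[df["scheduled_at"] == df.groupby("target_id")["scheduled_at"].transform("max")]
```

`transform("max")` broadcasts each target's newest `scheduled_at` back onto every row, so the filter keeps all forecasts from that vintage.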

info

We use the VS Code extension "Data Wrangler" to visualize the dataframe. Try it out!