Explainable artificial intelligence (XAI) is a branch of AI that lets users peek inside the black box of AI models to understand how their outputs are generated and whether those outputs can be trusted. Recently, XAI has gained prominence in computer vision tasks such as image recognition, where understanding model decisions is critical. Building on its success in this field, it is now gradually being extended to other fields where trust and transparency are particularly important, including healthcare, transportation, and finance.
Researchers at EPFL’s Wind Engineering and Renewable Energy Laboratory (WiRE) have tailored XAI to the black-box AI models used in their field. In a study appearing in Applied Energy, they found that XAI can improve the interpretability of wind power forecasting by providing insight into the string of decisions made by a black-box model and can help identify which variables should be used in a model’s input.
“Before grid operators can effectively integrate wind power into their smart grids, they need reliable daily forecasts of wind energy generation with a low margin of error,” says Prof. Fernando Porté-Agel, head of WiRE. “Inaccurate forecasts mean grid operators have to compensate at the last minute, often using more expensive fossil fuel-based energy.”
More credible and reliable predictions
The models currently used to forecast wind power output are based on fluid dynamics, weather modeling, and statistical methods – yet they still have a non-negligible margin of error. AI has enabled engineers to improve wind power predictions by using large datasets to identify relationships between weather-model variables and wind-turbine power output. Most AI models, however, function as “black boxes,” making it hard to understand how they arrive at a specific prediction. XAI addresses this issue by making transparent the modeling processes that lead to a forecast, resulting in more credible and reliable predictions.
Most important variables
To carry out their study, the research team trained a neural network by selecting input variables from a weather model that have a significant influence on wind power generation – such as wind direction, wind speed, air pressure, and temperature – alongside data collected from wind farms in Switzerland and worldwide. “We tailored four XAI techniques and developed metrics for determining whether a technique’s interpretation of the data is reliable,” says Wenlong Liao, the study’s lead author and a postdoc at WiRE.
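The article does not name the four tailored techniques, so the following is only a minimal sketch of the general idea: train a neural-network forecaster on weather-style inputs, then apply one common model-agnostic XAI method (permutation feature importance) to see which variables the model actually relies on. All data, variable names, and model settings here are hypothetical.

```python
# Illustrative only: a tiny neural-network forecaster trained on synthetic
# stand-ins for the weather variables named above, followed by permutation
# feature importance -- one common model-agnostic XAI technique. The study's
# four tailored techniques are not specified in this article.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["wind_speed", "wind_direction", "air_pressure", "temperature"]
X = rng.normal(size=(1000, 4))                   # synthetic weather-model inputs
y = X[:, 0] ** 3 + 0.1 * rng.normal(size=1000)   # toy "power", driven mainly by wind speed

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

# Shuffle each input column in turn and measure how much forecast skill
# degrades: the larger the drop, the more the model relies on that variable.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:15s} importance: {imp:.3f}")
```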
In machine learning, metrics are what engineers use to evaluate model performance. For example, a metric can indicate whether the relationship between two variables reflects causation or mere correlation. Metrics are developed for specific applications – diagnosing a medical condition, measuring the number of hours lost to traffic congestion, or calculating a company’s stock-market valuation. “In our study, we defined various metrics to evaluate the trustworthiness of XAI techniques. Moreover, trustworthy XAI techniques can pinpoint which variables we should factor into our models to generate reliable forecasts,” says Liao. “We even saw that we could leave certain variables out of our models without making them any less accurate.”
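The paper’s actual metric definitions are not given in this article, but two checks in their spirit can be sketched: a stability metric (does an XAI technique rank the variables consistently when the model is retrained?) and a leave-out test (does dropping the least important variable leave accuracy essentially unchanged, echoing Liao’s remark above?). Everything below – data, seeds, model settings – is a hypothetical illustration.

```python
# A hedged sketch, not the paper's actual metrics: (1) ranking stability of
# an XAI technique across retrainings, and (2) a leave-variables-out test.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["wind_speed", "wind_direction", "air_pressure", "temperature"]
X = rng.normal(size=(2000, 4))                              # synthetic inputs
y = X[:, 0] ** 3 + 0.3 * X[:, 2] + 0.1 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def fit_and_explain(seed):
    """Train a forecaster and return it with its permutation importances."""
    m = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=seed).fit(X_tr, y_tr)
    r = permutation_importance(m, X_te, y_te, n_repeats=10, random_state=seed)
    return m, r.importances_mean

# (1) Stability: a trustworthy explanation should rank the variables the
# same way when the model is retrained with a different random seed.
m1, imp1 = fit_and_explain(0)
m2, imp2 = fit_and_explain(1)
rho, _ = spearmanr(imp1, imp2)
print(f"rank stability (Spearman rho): {rho:.2f}")

# (2) Leave-out test: drop the least important input and retrain; accuracy
# should be essentially unchanged if the explanation was right.
keep = np.argsort(imp1)[1:]  # indices of all but the weakest input
m_red = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_tr[:, keep], y_tr)
print(f"full model R^2:    {m1.score(X_te, y_te):.3f}")
print(f"reduced model R^2: {m_red.score(X_te[:, keep], y_te):.3f}")
```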
More competitive
According to Jiannong Fang – an EPFL scientist and co-author of the study – these findings could help make wind power more competitive. “Power system operators won’t feel very comfortable relying on wind power if they don’t understand the internal mechanisms that their forecasting models are based on,” he says. “But with an XAI-based approach, models can be diagnosed and upgraded, and can thus generate more reliable forecasts of daily wind power fluctuations.”