Consensus Economics – Asia Pacific 2020 Forecast Accuracy Award Winners

London, May 4, 2021: Consensus Economics, the world’s leading economic survey organization, has announced the recipients of its 2020 Forecast Accuracy Award (FAA). The FAA program recognises the achievements of a select group of expert country economic forecasters who have most accurately predicted the final outturns of GDP growth and Consumer Price Inflation in their targeted economies for the year 2020.

 

Chart: China Forecast Accuracy 2020

 

FAA winners vary from year to year and across variables, owing to shocks, surprises and cyclical and structural adjustments. However, the winners of the 2020 FAA program have been recognised for their high-quality research, their commitment to regular forecasts and their ability to most accurately identify the trends and levels of key indicators over the 24-month forecasting cycle.

2020 Forecast Accuracy Award Winners

Country | Company | Economists
Australia | Capital Economics | Ben Udy, Marcel Thieliant
China | DBS Bank | Chris Leung, Nathan Chow, Samuel Tse
Hong Kong | Bank of America | Miao Ouyang
Indonesia | JP Morgan Chase | Sin Beng Ong
Japan | Morgan Stanley | Takeshi Yamaguchi
Malaysia | Nomura | Euben Paracuelles, Rangga Cipta
New Zealand | Oxford Economics | Ruchira Ray
Philippines | Capital Economics | Alex Holmes
Singapore | Nomura | Euben Paracuelles, Charnon Boonnuch
South Korea | FERI | The FERI Economics Team
Taiwan | Credit Suisse | Irene Feng
Thailand | Economist Intelligence Unit | Bryan Tse
Vietnam | HSBC | Yun Liu

2020 Accuracy Award Winners in Our Other Regions

 

Consensus Economics collects forecasts from over 1,000 economists around the world each month. It was founded in 1989 to measure consensus expectations, which are seen as macroeconomic forecast benchmarks by investment and planning managers, as well as government and public sector institutions, who find the Consensus data useful, timely and accurate.

Each monthly Consensus Forecasts publication shows the average (mean) forecasts, as well as individual forecaster predictions. It is distributed in hard-copy, PDF, and Excel formats, and is also available across major statistical data platforms.

Forecast Accuracy Awards – Methodology Note

The Forecast Accuracy Awards were determined using Mean Absolute Error analysis. The errors measured were the differences between the forecasts and the actual data outturns. The forecaster with the lowest mean absolute error was deemed to have been the most accurate over the testing period.

The Forecasts Used
The forecasts we examined were our panellists’ monthly survey contributions for 2020 Real GDP growth and 2020 Consumer Price Inflation (CPI) expectations (annual average % change). We began collecting these forecasts in January 2019, and the 24-month forecasting cycle ended in December 2020, providing 24 monthly data points to test for each variable.

The Outturns Used
The outturns we used as a comparison to the forecasts were the official estimates of 2020 GDP and CPI, which were released between January and April 2021, depending on the country and variable concerned.

The Calculation
When calculating the error (the difference between the forecast and the outturn) we looked at the absolute errors, ignoring sign, as errors resulting from over-estimation and under-estimation are treated as equivalent. We then calculated the mean absolute error of the forecasts of each panellist for both GDP and CPI over the 24-month forecasting period.
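
As an illustration only, the short Python sketch below shows how this absolute-error step might be computed for a single panellist and a single variable; the forecast and outturn figures are entirely hypothetical and stand in for a panellist’s monthly submissions and the official estimate.

```python
# Minimal sketch of the absolute-error calculation (hypothetical numbers).
# "forecasts" stands in for one panellist's monthly 2020 GDP growth
# submissions (annual average % change); "outturn" stands in for the
# official estimate released after the year ended.
forecasts = [2.1, 1.9, 1.6, 0.8, -3.5, -5.0, -5.8, -6.0]  # hypothetical values
outturn = -6.1                                            # hypothetical outturn

abs_errors = [abs(f - outturn) for f in forecasts]  # ignore the sign of each error
mae = sum(abs_errors) / len(abs_errors)             # mean absolute error
print(f"Mean absolute error: {mae:.2f} percentage points")
```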

Establishing the Most Accurate Forecaster
To determine the most accurate forecaster for a given year, we added the mean absolute errors for GDP and Inflation to identify the panellist with the lowest overall error. Smaller errors indicate greater accuracy, and winning the award requires a panellist to have exhibited a strong forecasting performance across both the GDP and Inflation variables over the 24-month forecasting horizon.
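
Again purely as an illustration, the sketch below shows how the combined error and the winner could then be determined; the panellist names and error values are hypothetical.

```python
# Minimal sketch of the winner-selection step (hypothetical names and values).
# Each entry is a panellist's GDP MAE plus their CPI MAE.
combined_mae = {
    "Panellist A": 0.85 + 0.40,   # 1.25
    "Panellist B": 0.70 + 0.65,   # 1.35
    "Panellist C": 0.95 + 0.35,   # 1.30
}
winner = min(combined_mae, key=combined_mae.get)  # lowest combined error wins
print(f"Most accurate forecaster: {winner} "
      f"(combined mean absolute error {combined_mae[winner]:.2f})")
```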

Qualification
To be considered for the award, a panellist needed to have participated consistently in our monthly surveys over the 24-month forecasting period. This ensured that no panellist gained an advantage through non-participation, or by only providing forecasts nearer the end of the 24-month cycle, when the steady release of official data can assist forecasters in making revisions.