What did the different teams at the Red Cross Hackathon do?


On the 20th of May, DataMission organised a hackathon for the Red Cross and 510.global. The day itself is described here; this blog post goes into more technical detail about what the teams did.

Machine Learning model teams

The first team, team015, tried to tweak the inputs and combine them into better inputs for the prediction model, a technique known as feature engineering. They used random forest and negative binomial logistic regression algorithms. The data only contained the number of partially damaged and the number of completely damaged houses, so they decided to also look at the total number of damaged houses. Although their model did very well on the test data, it only reached an R-squared score of 0.21 on the real data set.
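As a rough illustration, here is a minimal sketch of that approach in Python with scikit-learn. The file name and all column names are hypothetical, since the post doesn't give the exact schema of the Red Cross data:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; the actual schema may differ.
df = pd.read_csv("typhoon_damage.csv")

# Feature engineering: combine the two damage counts into a single target.
df["houses_damaged_total"] = (
    df["houses_partially_damaged"] + df["houses_completely_damaged"]
)

features = ["wind_speed_max", "distance_to_typhoon_path", "num_households"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["houses_damaged_total"], test_size=0.2, random_state=0
)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("R-squared on held-out data:", r2_score(y_test, model.predict(X_test)))
```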

The second team (most teams had no names) used boosting and random forests; boosting performed better. They gave Rammasun (one of the typhoons) a different weight than the others because of its unusual characteristics. Their final result was an R-squared of 0.32.
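A hedged sketch of what such per-typhoon weighting can look like with scikit-learn boosting. The variables X, y and typhoon are assumed, and the 0.5 weight is just a placeholder, since the post doesn't say which weight the team actually chose:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Assumed setup: X holds the input features, y the damage counts, and
# `typhoon` is an array naming the typhoon each row belongs to.
# Down-weighting Rammasun is only one possible reading of
# "a different weight"; 0.5 is a placeholder value.
sample_weight = np.where(typhoon == "Rammasun", 0.5, 1.0)

booster = GradientBoostingRegressor(n_estimators=300, random_state=0)
booster.fit(X, y, sample_weight=sample_weight)
```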

The third team also tweaked the inputs. Because the damage caused by wind doesn't scale linearly with wind speed, they used not just the maximum wind speed as a parameter, but also the wind speed squared. They further used the inverse of the distance to the path of the typhoon, and, like the second team, noticed that typhoon Rammasun behaved differently from the other typhoons.
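These two transformations are easy to show concretely. A small sketch, reusing the hypothetical df and column names from the first sketch:

```python
# Damage grows faster than linearly with wind, so add a squared term.
df["wind_speed_sq"] = df["wind_speed_max"] ** 2

# Damage falls off with distance to the typhoon path; a small epsilon
# avoids division by zero for municipalities directly on the path.
df["inv_distance_to_path"] = 1.0 / (df["distance_to_typhoon_path"] + 1e-6)
```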

The next team (Team Sweet & Simple) used two approaches. The first, their "statistically sound approach", was a Poisson mixed-model Lasso with a random effect per typhoon. The second was a simple ensemble of 10 different algorithms. Their model probably overfitted: it worked very well on the training data, but not so well on the actual data set. They also looked for other data sets to enrich the Red Cross data, but didn't find any. They were the only 'Data Science' team that also made a nice visualisation of their predictions on a map.
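For the ensemble half, here is a cut-down sketch using scikit-learn's VotingRegressor, which averages the predictions of its members. The team combined 10 algorithms; five placeholders stand in for them here, training data X_train and y_train are assumed, and the Poisson mixed-model Lasso itself has no direct scikit-learn equivalent:

```python
from sklearn.ensemble import (
    GradientBoostingRegressor,
    RandomForestRegressor,
    VotingRegressor,
)
from sklearn.linear_model import Lasso, PoissonRegressor
from sklearn.neighbors import KNeighborsRegressor

# Average the predictions of several different regressors.
# Five placeholder models stand in for the team's ten.
ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
    ("poisson", PoissonRegressor()),
    ("lasso", Lasso(alpha=0.1)),
    ("knn", KNeighborsRegressor()),
])
ensemble.fit(X_train, y_train)  # X_train, y_train assumed to exist
predictions = ensemble.predict(X_test)
```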

The final team (team negative R) noticed, like team015, that in the data the number of damaged households was sometimes higher than the number of actual households! This surprising fact arises because the number of households was measured in 2010, while the damage was recorded in 2015. They started with simple linear models to find out which parameters were important (comparing AIC scores), then filtered out correlated inputs, and finally used random forests. Although other teams took a similar approach, this team's predictions were the best (R-squared of 0.68), and so they won!
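A sketch of that three-step pipeline, assuming a feature DataFrame X and target y, with statsmodels for the AIC comparison; the 0.9 correlation cut-off is an assumed threshold, not one reported by the team:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

# Step 1: one simple linear model per input; lower AIC = more informative.
for col in X.columns:  # X and y are assumed to exist
    fit = sm.OLS(y, sm.add_constant(X[[col]])).fit()
    print(col, "AIC:", fit.aic)

# Step 2: drop one of each pair of highly correlated inputs
# (0.9 is an assumed cut-off).
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
to_drop = [c for c in upper.columns if (upper[c] > 0.9).any()]

# Step 3: fit a random forest on the reduced feature set.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X.drop(columns=to_drop), y)
```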

Visualisation teams

The first team started by putting themselves "in the shoes" of those who would use the visualisations. They realised that a typhoon damages much more than just houses: roads and medical centres are hit as well. They combined these damages per municipality to divide the country into areas of high, medium and low priority, based on the severity of the damage.

They not only plotted this on a map, but also ranked the municipalities on four scales (a code sketch of such a ranking follows the list):

  • health
  • housing
  • infrastructure
  • and an overall ranking based on all three.
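A hypothetical sketch of such a ranking in Python with pandas; the damage columns and the equal weighting of the three categories are assumptions, since the post doesn't specify them:

```python
import pandas as pd

# Hypothetical per-municipality damage columns; the post names the
# categories but not the exact fields or weights.
df["health_rank"] = df["medical_centres_damaged"].rank(ascending=False)
df["housing_rank"] = df["houses_damaged"].rank(ascending=False)
df["infra_rank"] = df["roads_damaged"].rank(ascending=False)

# The overall ranking averages the three category ranks equally
# (an assumption; the team may have weighted them differently).
rank_cols = ["health_rank", "housing_rank", "infra_rank"]
df["overall_rank"] = df[rank_cols].mean(axis=1).rank()

# Split municipalities into high / medium / low priority terciles.
df["priority"] = pd.qcut(df["overall_rank"], q=3,
                         labels=["high", "medium", "low"])
```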

To show the accuracy of the model, they prototyped a dashboard where one can select a historical typhoon and see both the predicted top 7 most affected municipalities and the actual top 7, with colours highlighting the differences. The areas where the model did and didn't work should also be plotted on a map: is there perhaps something in a certain part of the country that the model missed? They also showed whether predictions for older typhoons, for which there was less data, were less accurate.

The second team also characterised the people who would use their visualisation, placing them along two axes: do they have high or low data literacy, and do they need information on the whole country or on just a few provinces?

They used the map to tell a story:

  • What happened? (e.g. what was the wind speed)
  • What is the damage? (e.g. how many houses were destroyed)
  • Who was affected? (using a methodology based on the INFORM index)
  • Which agencies are on the ground and can help?

To give a good overview of the data, they showed it not only on a map but also as bar charts of the number of affected houses per municipality; after all, two municipalities that look the same size on the map can contain completely different numbers of houses. A timeline at the bottom shows not just what is happening on the ground, but also when. Like Ushahidi, they want to involve citizens in measuring the damage, to get timely results.

After the audience's votes were counted, team two won.