During the conference, I noticed a few errors people made when presenting data. So I wanted to post about them here, in hopes that it will prevent others from making them in the future.
- There was a presenter who showed a simple bar graph; I’ve made a fake version of it below. The graph had error bars that obviously overlapped, and yet the presenter stated that Data B was significantly better than Data A. No. When the error bars clearly overlap like that, you can’t claim that one value is better (or worse) than the other. That’s what the error bars are for: they show the variance of the data, and if they overlap, there’s a real chance the two values are actually equal.
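To make the overlap idea concrete, here is a minimal sketch in plain Python. The sample values are made up (the post only shows a fake graph), and the error bars are taken to be one standard error of the mean, which is a common convention but an assumption here. Note that overlapping error bars don’t *prove* the values are equal; they just mean you can’t claim a significant difference from the graph alone.

```python
import math

def mean_and_sem(samples):
    """Return the mean and the standard error of the mean."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction, then standard error
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical measurements behind the two bars
data_a = [72, 78, 85, 80, 76]
data_b = [74, 81, 88, 79, 83]

ma, sa = mean_and_sem(data_a)
mb, sb = mean_and_sem(data_b)
bar_a = (ma - sa, ma + sa)  # the error bar drawn on Data A
bar_b = (mb - sb, mb + sb)  # the error bar drawn on Data B

print(f"Data A: mean={ma:.1f}, error bar +/-{sa:.1f}")
print(f"Data B: mean={mb:.1f}, error bar +/-{sb:.1f}")
print("Error bars overlap:", intervals_overlap(bar_a, bar_b))
```

With these made-up samples the bars overlap, so a claim that B is "significantly better" than A would be exactly the mistake described above.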
- The second thing I noticed was a presenter who also used a bar graph, this time with no error bars (like the one below). This time, the presenter said that Data A is 20% better than Data B (if it’s not clear, Data A is 80 and Data B is 60 on the graph below). First of all, 80 is not 20% bigger than 60; it’s actually 33% bigger. And second, the raw difference between two numbers is not the same thing as how much “better” or worse a value is. Again, we have a problem involving error bars, this time because they’re missing. Without error bars (or at least some information about the variance of the data), it’s impossible to know whether one value is significantly better or worse than the other.
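The arithmetic mistake is easy to check: a percent difference has to be measured relative to a baseline. Using the values from the fake graph (80 and 60), a one-line helper shows where the 33% comes from:

```python
def percent_bigger(a, b):
    """How much bigger a is than b, as a percent of the baseline b."""
    return (a - b) / b * 100

# The values from the fake graph: Data A = 80, Data B = 60
print(percent_bigger(80, 60))  # -> 33.33...: A is ~33% bigger than B, not 20%

# The presenter's 20% presumably came from the 20-point gap,
# but a 20-point gap is only "20%" relative to a baseline of 100:
print(percent_bigger(120, 100))  # -> 20.0
```

The choice of baseline matters, too: relative to A, B is 25% *smaller*, which is why "X% better" claims need the baseline stated explicitly.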
- The third issue I noticed was comparing data to other people’s results. It can be really useful to compare your results to previous ones, especially if your work builds on them. In that case, you definitely want to be able to say how your data/algorithm/solution/whatever compares to the previous one. In this case, though, a direct comparison between the new results and the old wasn’t possible, because too many things didn’t match up: the experiments couldn’t be run on the same hardware, one system could do things the other couldn’t, and so on. The presenter still plotted both on the same graph to show that their solution was “better,” which really doesn’t make any sense. If you can’t make a direct comparison, it’s better to a) admit you can’t (which, to be fair, this person did), b) not compare the two on the same graph, and c) focus on how good your solution is on its own, and on what needs to be done to actually compare the two in the future.
The above three examples are the ones that jumped out at me while listening to the presentations. I’m sure there were others, but, honestly, by the end of the week my ability to pay careful attention through a whole presentation was almost completely gone (remember, conferences are draining). Errors like these also make the presenter look less than fully competent. People can’t help judging each other, so the last thing you want to do is give them any ammo.