I know when we run an experiment and get results, we all want to run around and share the news. And I’m not at all trying to tell you not to share the news. Definitely, write a paper, give a talk, make a poster, and pass the great news along.
However, and this is important, be careful how you word your results. Before you go out there and state that “a is better than b,” take a moment to think about your experiment. Did you produce a new “fact,” or are your results more of a suggestion?
Let me start by presenting two examples of experiments with results. In the first experiment, let’s pretend you’ve discovered a new algorithm. To make it easier to compare, say the algorithm is a new path planning method. All tests (and the theory) show that it performs 5% faster than the current state-of-the-art algorithm. In the second experiment, you ran a user study where participants compared two different path planning algorithms and answered questions about which one they thought produced more natural paths.
In the first case, the result of the algorithm being 5% faster is a fact, assuming the comparison was done under the same conditions. A condition that would falsify this would be if the new algorithm was run on a state-of-the-art computer and the older one on a five-year-old machine. That’s not a fair comparison. But if you were comparing the number of nodes searched (common in search algorithms) and the new one, on average, expands 5% fewer nodes, then having one run five years ago and one run today wouldn’t matter.
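To make that idea concrete, here’s a minimal sketch (the graph and function names are hypothetical, purely for illustration) of counting nodes expanded during a breadth-first search. The count depends only on the graph and the algorithm, not on the machine, which is why it’s a fair metric across runs years apart:

```python
from collections import deque

def bfs_nodes_expanded(graph, start, goal):
    """Breadth-first search that reports how many nodes it expanded.

    The returned count is a property of the graph and the algorithm
    alone; it is identical on any machine, fast or slow.
    """
    frontier = deque([start])
    visited = {start}
    expanded = 0
    while frontier:
        node = frontier.popleft()
        expanded += 1
        if node == goal:
            return expanded
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(neighbor)
    return expanded

# A tiny made-up map: A branches to B and C, both of which lead to D.
grid = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_nodes_expanded(grid, "A", "D"))  # prints 4
```

Wall-clock time would vary with the hardware, but this count will not, so a “5% fewer nodes” claim holds up regardless of when each algorithm was run.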
In the second case, the results are pretty much never a fact. Instead, they should be described with language such as “the results suggest (or show) that people prefer a to b” or “from the results, the evidence points to a as being more natural compared to b.”
So, how can you easily tell if your result is a fact or not? Well, a quick clue that what you discovered is not a fact is that the result came from an experiment involving people. As people are unique, the best you can hope for from your results is that they provide some good clues as to what is likely to be the case. They can say that, generally speaking, 10% of people taking drug x will have a side effect; however, it is possible that everyone does, or that no one does. They can’t state with guaranteed certainty that 10% of people taking the drug will have a side effect. On the other hand, if your result does not change without outside interference (changing the algorithm, the hardware it is run on, etc.), then it is likely a fact.
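One way to see why that measured 10% is a clue rather than a guarantee is to put an interval around it. Here’s a quick sketch (the study numbers are made up for illustration) using the standard normal approximation for a proportion’s 95% confidence interval:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Hypothetical study: 10 of 100 participants report a side effect.
low, high = proportion_ci(10, 100)
print(f"{low:.3f} to {high:.3f}")  # prints 0.041 to 0.159
```

Even with 100 participants, the true rate could plausibly be anywhere from about 4% to 16%; that spread is exactly the gap between a suggestion and a fact.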
In my current line of research, it is unlikely that I will ever produce “facts,” as pretty much all of my testing will involve running user studies. So, instead, I will be adding to the knowledge base of interesting things that are generally true about people, but can’t be guaranteed to be true about all people.
Do you work in facts? I have to say, I kind of envy you if you do, because people always make things much trickier. But, on the other hand, working with people is bound to produce lots of interesting, bizarre stories. And sometimes those are worth just as much as the actual outcomes.