# A Remark on Methodology: p-hacking

**Ivan**#1

It's interesting how p-hacking exploits the researcher's creative freedom in designing experiments and, in particular, in selecting the sample size. The basic idea in this chapter is that when researchers perform a hundred experiments, about 5 of them are likely to produce results that are statistically significant at the 5% level just by chance. But what would happen if we instead took as a benchmark all the experiments that come out statistically significant at a p-value of .10? Would that be a better measure? Thanks.
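The "about 5 in 100" claim can be checked with a quick simulation. It relies on the fact that, under a true null hypothesis, the p-value of a well-calibrated test is uniformly distributed on [0, 1]; the function name and numbers below are illustrative, a minimal sketch rather than anything from the chapter.

```python
import random

random.seed(42)

# Under a true null hypothesis, a calibrated test's p-value is uniform on
# [0, 1], so we can model 100 null experiments by drawing 100 uniform
# p-values and counting how many fall below the significance threshold.

def count_significant(n_experiments, alpha, rng=random):
    """Count how many null experiments come out below the threshold alpha."""
    p_values = [rng.random() for _ in range(n_experiments)]
    return sum(p < alpha for p in p_values)

# Average over many repetitions of the "100 experiments" scenario.
n_reps = 10_000
avg_at_05 = sum(count_significant(100, 0.05) for _ in range(n_reps)) / n_reps
avg_at_10 = sum(count_significant(100, 0.10) for _ in range(n_reps)) / n_reps

print(f"Average false positives per 100 null experiments at p < .05: {avg_at_05:.2f}")
print(f"Average false positives per 100 null experiments at p < .10: {avg_at_10:.2f}")
```

The averages come out near 5 and near 10: moving the benchmark to .10 does not give a better measure, it roughly doubles the number of spurious "findings" available to report.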

**Bernardo**#2

Hi,

I think that the real problem won't disappear. As I understand it, the big problem is how they control the experiment to get the result they want and not something else. So, if the benchmark now became a p-value of .10, it would be even easier to perform p-hacking. What do you think?
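That point about controlling the experiment can be illustrated with a small "optional stopping" simulation: a researcher who peeks at the data repeatedly and stops as soon as p < .05 inflates the false-positive rate well above 5%, even though every individual test is valid. Everything below is an illustrative sketch (known variance, normal data, made-up peeking schedule), not a model of any real study.

```python
import math
import random

random.seed(0)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def run_experiment(max_n, peek_every, alpha=0.05, rng=random):
    """Simulate one null experiment (true mean 0, sigma 1) with optional
    stopping: test the accumulated data every `peek_every` observations
    and stop as soon as p < alpha. Returns True if a 'significant'
    result was ever declared — i.e., a false positive."""
    total = 0.0
    for i in range(1, max_n + 1):
        total += rng.gauss(0.0, 1.0)
        if i % peek_every == 0:
            z = (total / i) * math.sqrt(i)  # z-statistic with known sigma = 1
            if two_sided_p(z) < alpha:
                return True
    return False

n_sims = 2000
# One pre-planned analysis at n = 200 vs. peeking every 10 observations.
honest = sum(run_experiment(200, 200) for _ in range(n_sims)) / n_sims
hacked = sum(run_experiment(200, 10) for _ in range(n_sims)) / n_sims

print(f"False-positive rate with a single, pre-planned analysis: {honest:.3f}")
print(f"False-positive rate when peeking every 10 observations:  {hacked:.3f}")
```

The single pre-planned test stays near the nominal 5%, while the peeking strategy produces false positives several times more often — which is exactly why a looser .10 benchmark would make this even easier.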

**Cutberto**#3

Hi Ivan and Bernardo,

I also believe that using a benchmark of 0.10 would make it easier to perform p-hacking. In a course on experimental design, Professor Alberto Simpser told us that the review committees of academic journals now demand a detailed explanation of why the authors chose their samples, alternative ways of proving the same result (using other variables or indexes), and a statement of whether they omitted any variables and why. He also told us that some journals are publishing papers whose hypotheses are not confirmed (as opposed to committees that only accepted papers with statistically significant results). These mechanisms are meant to curb the practice of p-hacking.