Hi Ivan and Bernardo,
I also believe that using a benchmark of 0.10 would make it easier to perform p-hacking. In a course on experiment design, Professor Alberto Simpser told us that the review committees of academic journals are demanding a detailed explanation of why authors decided to use their samples, alternative ways of proving the same result (using other variables or indexes), and a statement of whether they omitted some variables and why. He also told us that some journals are now publishing papers whose hypotheses are not confirmed (as opposed to committees that only accepted papers with statistically significant results). These mechanisms are used to try to stop the practice of p-hacking.
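To make the 0.10 point concrete, here is a small illustrative simulation (my own sketch, not from the course): under a true null hypothesis p-values are uniformly distributed, so a researcher who tries 20 different specifications and reports whichever one "works" will stumble on at least one significant result far more often with a 0.10 threshold than with 0.05. The number of 20 tests is an arbitrary assumption for illustration.

```python
import random

random.seed(42)

def false_positive_rate(alpha, n_tests=20, n_experiments=10_000):
    """Under the null hypothesis, every p-value is uniform on [0, 1].
    Simulate a researcher who runs n_tests independent tests and
    declares success if any single p-value falls below alpha."""
    hits = 0
    for _ in range(n_experiments):
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_experiments

for alpha in (0.05, 0.10):
    rate = false_positive_rate(alpha)
    theory = 1 - (1 - alpha) ** 20
    print(f"alpha={alpha:.2f}: spurious 'finding' in {rate:.0%} "
          f"of experiments (theory: {theory:.0%})")
```

The closed-form rate is 1 - (1 - alpha)^20: roughly 64% of null experiments yield a spurious "finding" at 0.05, versus roughly 88% at 0.10, which is why the looser benchmark makes p-hacking easier.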