I think of Fisher's approach like this:

We choose a test statistic whose distribution is derived under H0, where H0 is preferably a simple hypothesis.

We partition this distribution according to a significance threshold.

The significance threshold is defined by what we consider an extreme result, that is, a result that would occur only rarely under H0 and would therefore cast doubt on the truth of H0.

We compute the p-value, the probability under H0 of obtaining a result at least as extreme as the observed value of the test statistic. If the p-value is below the significance threshold, we reject H0; otherwise we do not reject H0.
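The steps above can be sketched numerically. Here is a minimal illustration (my own example, not from Fisher) using an exact binomial test of H0: p = 0.5, where the p-value is the probability under H0 of a count at least as extreme as the one observed:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k successes in n trials under H0.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_value(k_obs, n, p0=0.5):
    # One-sided p-value: probability under H0 of a result
    # at least as extreme as the observed count k_obs.
    return sum(binom_pmf(k, n, p0) for k in range(k_obs, n + 1))

# Observing 14 heads in 20 flips of a coin assumed fair under H0:
p = p_value(14, 20)  # ≈ 0.058, just above a 0.05 threshold
```

With a 0.05 threshold we would not reject H0 here; no alternative hypothesis was needed to carry out the computation, which is the crux of the question below.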

It is often said that Neyman and Pearson rigorously mathematized Fisher's significance tests, thereby adding the concepts of an alternative hypothesis and of the Type II error. However, Fisher long insisted on the opposite. I would therefore like to know whether this is true. Can Fisher's significance tests exist mathematically without an alternative hypothesis (other than by simple equivalence)? If not, what is the mathematical use of the alternative hypothesis (other than defining the Type II error)? It apparently plays an asymmetric role relative to H0 in Neyman-Pearson tests. If so, how did Fisher intend to deal with the concept of effect size and with the impact of sample size on the p-value?
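To make the sample-size concern concrete, here is a quick numerical illustration (my own sketch, using the standard normal approximation for a proportion, not anything specific to Fisher): for a fixed observed effect, the p-value shrinks as n grows, so a significance threshold alone says nothing about the size of the effect.

```python
from math import sqrt, erfc

def z_test_pvalue(phat, p0, n):
    # One-sided normal-approximation p-value for an observed
    # proportion phat against H0: p = p0 with sample size n.
    z = (phat - p0) / sqrt(p0 * (1 - p0) / n)
    return 0.5 * erfc(z / sqrt(2))

# Same observed proportion 0.55 against H0: p0 = 0.5, growing n:
for n in (50, 200, 1000):
    print(n, z_test_pvalue(0.55, 0.5, n))
```

The observed effect (0.55 versus 0.5) never changes, yet the p-value crosses any fixed threshold once n is large enough.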

Sorry if my English is not very good; I am a French student.

Thank you