p-values vs beta coefficients vs Pearson correlation for feature importance

#1
Hello, I am still confused about the different concepts for feature importance.
For example: I have 6 variables. 5 variables are predictors, and 1 variable is the target (to be predicted).
Suppose: X1, X2, X3, X4, X5 and Y

1. Find the most important features by computing the Pearson correlation manually (X1 and Y, X2 and Y, X3 and Y, and so on). The highest Pearson value = the most important variable.
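
To make it concrete, this is roughly what I mean (just a sketch; the file name and column names are placeholders):

```python
# A minimal sketch of method 1: Pearson correlation of each predictor with Y.
# "data.csv" and the column names X1..X5, Y are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("data.csv")

for col in ["X1", "X2", "X3", "X4", "X5"]:
    r, p = pearsonr(df[col], df["Y"])
    print(f"{col}: r = {r:.3f}")
# then rank the predictors by the absolute value of r
```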

2. Fit a regression. For example, I get the equation:
Y = b1*X1 + b2*X2 + b3*X3 + b4*X4 + b5*X5 + intercept.
The bigger the beta coefficient, the more important the variable.
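
Something like this (again just a sketch; I standardize the predictors first because comparing raw coefficients only makes sense when the variables are on the same scale, which is my assumption here):

```python
# A minimal sketch of method 2: compare regression (beta) coefficients.
# Predictors are standardized so the coefficients are on a comparable scale.
# "data.csv" and the column names are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

df = pd.read_csv("data.csv")
X = df[["X1", "X2", "X3", "X4", "X5"]]
y = df["Y"]

X_std = StandardScaler().fit_transform(X)   # mean 0, std 1 for each predictor
model = LinearRegression().fit(X_std, y)

for name, b in zip(X.columns, model.coef_):
    print(f"{name}: beta = {b:.3f}")
# then rank the predictors by the absolute value of beta
```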

3. Using OLS (Ordinary Least Squares), I get a p-value for each variable.
p-value < 0.05 => significant effect on the target variable (the null hypothesis is rejected).
The smaller the p-value, the more important the variable.
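
For this one I do something like the following (a sketch with statsmodels; the file name and column names are placeholders):

```python
# A minimal sketch of method 3: OLS p-values with statsmodels.
# "data.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("data.csv")
X = sm.add_constant(df[["X1", "X2", "X3", "X4", "X5"]])  # adds the intercept term
y = df["Y"]

results = sm.OLS(y, X).fit()
print(results.summary())   # table with coefficients and p-values per variable
print(results.pvalues)     # rank predictors by p-value (smaller = stronger evidence)
```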

The question is: which method is better? And what is the concept behind each method?
Thank you