Help: working on a dataset on a logarithmic scale

Hi, for a university project I currently have to analyze a dataset concerning a cluster of stars, in which the two main variables are the luminosity and temperature of each star.
The main goal is to validate the normality hypothesis for both variables (I usually go with qqnorm() + qqline() + shapiro.test(), and based on the p-values I decide whether to continue the analysis with parametric or non-parametric tests).
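A minimal sketch of that workflow in R, using a simulated vector luminosity as a stand-in for one of your columns (the variable name and the data are placeholders, not from your dataset):

```r
# Hypothetical stand-in for one variable; replace with your own column.
set.seed(42)
luminosity <- rlnorm(80, meanlog = 0, sdlog = 0.5)

# Visual check: Q-Q plot against the normal, with a reference line
qqnorm(luminosity)
qqline(luminosity)

# Formal check: Shapiro-Wilk test (H0: the sample comes from a normal)
shapiro.test(luminosity)
```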
Do I have to perform a transformation on each variable before testing them for normality? I was thinking of applying an exponential transformation, but I'm not sure that's a good move.


I think your approach is fine. I might also throw in a basic histogram, and run everything both with and without a transformation (e.g., log). Many would frown on a formal test here: the null hypothesis is normality, and the test is easily overpowered when you have a lot of data. That means a high risk of rejecting the null of normality even for trivial deviations, which in your case could blur the results. Visualization and subject-matter knowledge are usually sufficient.
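A sketch of the "with and without a log transform" comparison, assuming a strictly positive, right-skewed variable (the data here are simulated, not from the poster's cluster):

```r
set.seed(1)
x <- rlnorm(200)  # simulated positive, right-skewed data

# Side-by-side diagnostics: histogram + Q-Q plot, raw vs. log scale
op <- par(mfrow = c(2, 2))
hist(x, main = "raw")
qqnorm(x); qqline(x)
hist(log(x), main = "log")
qqnorm(log(x)); qqline(log(x))
par(op)

# Formal tests for comparison
shapiro.test(x)       # typically rejects normality for skewed raw data
shapiro.test(log(x))  # log scale is normal by construction here
```

With a sample this size the raw-scale test will usually reject decisively, which illustrates the point above: the plots, not the p-value alone, tell you whether the deviation actually matters.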