I was thinking there was a manual for the base code, as there is with most code. When I was learning SQL I read books like this: they tell you what the code is, show you how to use it, and give examples. Or do you simply study the packages?
I am sure there are many links here somewhere already.
There are a lot of ways you can analyze this. Do you know what specific measure he is using to determine whether it is out of specification? My understanding is that the specification limits, not the UCL/LCL, determine whether you are in or out of specification. The UCL and LCL are used for being in or out of process control...
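To make that distinction concrete, here is a small sketch with made-up numbers. It uses the simple mean ± 3·SD version of control limits (a real individuals chart would usually estimate sigma from the moving range), and the spec limits are assumed to be set externally by the customer or engineer:

```r
## Hypothetical measurements; USL/LSL are assumed customer spec limits
x   <- c(10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2)
usl <- 10.5   # upper specification limit (set externally)
lsl <- 9.5    # lower specification limit (set externally)

## Control limits, by contrast, are estimated from the process itself
ucl <- mean(x) + 3 * sd(x)
lcl <- mean(x) - 3 * sd(x)

out_of_spec    <- x > usl | x < lsl   # judged against the specification
out_of_control <- x > ucl | x < lcl   # judged against the process behavior
any(out_of_spec)
any(out_of_control)
```

A point can be inside the spec limits yet outside the control limits (an unstable but still conforming process), or the reverse, which is why the two questions have to be kept separate.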
Although this could be for me, it is really for a true programmer who wants to learn R.
I know we have sources for beginners; I was unsure where these have been moved to. Unlike me, he will pick it up quickly. :p
You are welcome. I read it all the time; thankfully I rarely have to do it :p At least in economics you have theory to build on. In most disciplines there is not, from my reading anyhow, anything like that level of theory to suggest whether, for example, logging is useful.
Doing something that demonstrates you actually have the skill they want is likely helpful. Just taking a course in college does not mean that one can do non-academic statistics (which tends to be a lot simpler than anything you do in academia, although getting clean data is generally...
Thanks. I guessed the data was in the file. I just copied and pasted the existing code in the link. I did not think I could link to the data in the denmark file.
I did not know, hlsmith, that ~ does that. I was trying to figure out how they defined the model. But I don't think that there are...
How does R know where the data is in this code?
First, we find the best ARDL specification. We search up to order 5.
models <- auto_ardl(LRM ~ LRY + IBO + IDE, data = denmark, max_order = 5)
Does it look in your WD for a file called denmark?
I don't understand...
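The short answer is that `data = denmark` does not point at a file at all: R looks for an object named denmark in the current session, and the formula's variables (LRM, LRY, IBO, IDE) are then looked up inside that data frame. In this case the denmark data frame is bundled with the ARDL package itself, so loading the package makes it available. A minimal sketch:

```r
## Sketch: `data = denmark` resolves to an in-memory object, not a file.
## The denmark data frame ships with the ARDL package.
library(ARDL)

data("denmark")   # copy the bundled data frame into the workspace
str(denmark)      # columns include LRM, LRY, IBO and IDE

## The formula's variables are now looked up inside that data frame:
models <- auto_ardl(LRM ~ LRY + IBO + IDE, data = denmark, max_order = 5)
models$best_order # lag orders of the best specification found
```

So nothing is read from the working directory. If denmark were your own data, you would first have to load it yourself (e.g. with read.csv()) into an object of that name before the call could find it.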
I think the correct answer to this is to consider the theory behind it. Many economic models do this, but they have a theoretical reason to do so; the transformation changes the reality being modeled. For example, in time series where the variance changes over time and stationarity is required, logging is used...
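To illustrate that point, here is a small simulated sketch (the series and its parameters are invented for illustration): a series whose variance grows with its level becomes much better behaved after logging, and differencing the logs gives roughly percentage changes, which are often stationary.

```r
## Simulated series with multiplicative growth: variance rises with the level
set.seed(1)
level <- exp(cumsum(rnorm(200, mean = 0.01, sd = 0.05)))

plot.ts(level)        # swings get wider as the level rises
plot.ts(log(level))   # roughly constant variance after logging

diff_log <- diff(log(level))   # approximate percentage changes
plot.ts(diff_log)              # often close to stationary
```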
I use MAPE to choose among ESM models, and I am going to add ARIMA to the mix of models considered. So I would say yes, if I understand your question. But you have to be careful in how you build these: unlike ARIMA, ESM does not assume stationarity in many of its forms.
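For reference, a minimal sketch of that comparison. The numbers are invented, and in practice the forecasts would come from fitted ESM/ARIMA models evaluated on a holdout sample; the mape() helper below just implements the usual mean absolute percentage error definition:

```r
## Mean absolute percentage error, in percent
mape <- function(actual, forecast) {
  mean(abs((actual - forecast) / actual)) * 100
}

## Hypothetical holdout actuals and two candidate forecasts
actual   <- c(100, 105,  98, 110)
ets_fc   <- c(102, 103, 100, 108)   # e.g. from an ESM/ETS model
arima_fc <- c( 97, 108,  95, 114)   # e.g. from an ARIMA model

mape(actual, ets_fc)    # choose whichever model has the lower MAPE
mape(actual, arima_fc)
```

One caveat with MAPE: it is undefined when an actual value is zero and it penalizes over- and under-forecasts asymmetrically, so it works best on strictly positive series.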
I don't know how you could, or why you would want to, do this. They operate from totally different logic :p I prefer exponential smoothing (ESM) because it is easier to do and has been shown to generate good results while being robust to violations of assumptions. In its original form it had...