Method to choose the best predictor combinations?

#1
Hello,

I have a model that trades forex, and the model has a lot of parameters. I run the model with many parameter combinations (test combinations) and try to choose the best ones, where the output is ranked by profit, percentage of winning trades, or any other qualifier.

An example of the data:
(P is a parameter)

P1    P2    P3      Profit
5      9    true     596.5
6      8    true     590.2
5      8    true     583.6
2     67    false    550
3    445    false    520.1
7      7    false    487.7
9      8    false    465.4
2     98    false    398

This example shows only 8 combinations and 3 parameters, but I have hundreds or thousands of combinations and 10-14 parameters.
These are from test runs, and I want to choose the best group of combinations to use in real runs.

So I need an ensemble of 10-30 combinations out of the hundreds. These combinations should be among the best, but they also have to be far enough from each other. For example, (5, 9, true) and (5, 8, true) may be too close, while (4.8, 9.5, true) and (5.3, 7.6, true) would be fine. I need this spread to be able to deal with different situations during the real run.
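To make the "best but far enough apart" idea concrete, here is a minimal sketch of the kind of selection I have in mind (the 0-1 normalization, the min_dist threshold and the true/false-as-1/0 encoding are just assumptions for illustration, not something I have already implemented):

```python
import numpy as np

def select_diverse_best(params, scores, k=20, min_dist=0.3):
    """Greedy pick: walk the combinations from best to worst score and keep
    one only if it is at least min_dist away (Euclidean distance on the
    0-1 normalized parameters) from every combination already kept."""
    params = np.asarray(params, dtype=float)
    span = params.max(axis=0) - params.min(axis=0)
    span[span == 0] = 1.0                        # avoid division by zero for constant columns
    norm = (params - params.min(axis=0)) / span  # normalize so no parameter dominates

    chosen = []
    for i in np.argsort(scores)[::-1]:           # best profit first
        if all(np.linalg.norm(norm[i] - norm[j]) >= min_dist for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return chosen

# The 8 example rows above, with true/false encoded as 1/0:
combos  = [[5, 9, 1], [6, 8, 1], [5, 8, 1], [2, 67, 0],
           [3, 445, 0], [7, 7, 0], [9, 8, 0], [2, 98, 0]]
profits = [596.5, 590.2, 583.6, 550.0, 520.1, 487.7, 465.4, 398.0]
print(select_diverse_best(combos, profits, k=4))
```

On the toy data this keeps the top row but skips (6, 8, true) and (5, 8, true), because they sit too close to it.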

Question:

So, any idea what kind of math I could use to programmatically choose this ensemble of combinations from the test combinations?

Thanks
 

hlsmith

Omega Contributor
#2
I would look to the genomic literature for ideas, since researchers there examine countless genes and have to control for false discovery.

By combinations, are you assuming interactions or do you just mean the best subset? Feature selection on a large scale can also be examined with adaptive LASSO and elastic net models when looking for the best combination, but it may get a little trickier (e.g., grouping options) if you think there are interactions.
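For instance, a rough elastic net screen on the toy data from post #1 might look something like this with scikit-learn (the l1_ratio grid and 3-fold CV are arbitrary choices here, and interactions would need extra product columns added to X):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

# X: one row per test run, one column per parameter (true/false as 1/0)
# y: the qualifier for that run (profit here)
X = np.array([[5, 9, 1], [6, 8, 1], [5, 8, 1], [2, 67, 0],
              [3, 445, 0], [7, 7, 0], [9, 8, 0], [2, 98, 0]], dtype=float)
y = np.array([596.5, 590.2, 583.6, 550.0, 520.1, 487.7, 465.4, 398.0])

X_std = StandardScaler().fit_transform(X)   # put parameters on a common scale
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 1.0], cv=3).fit(X_std, y)

# Coefficients shrunk to (or near) zero point at parameters you could drop.
for name, coef in zip(["P1", "P2", "P3"], model.coef_):
    print(name, round(coef, 3))
```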
 
#3
There is some interaction between parameters. I think I have to find an easier way to get a subset. What you mentioned is over my level of knowledge in statistics, but thank you :)

My idea is really elementary, but simple to calculate.

1. I will choose the best parameter combinations, ordered by the qualifiers (like total profit or profit percentage).
2. Find the less important parameters and throw them out.
3. Because of step 2 I will have duplicated combinations, so throw out the duplicates.
4. For numeric parameters: order the sample by the first numeric parameter and fit a polynomial to this ordered parameter.
5. Replace the values with the polynomial values; this way my values will have more variability (e.g., instead of 5, 5, 6, 6, 7, 8 I get 5.2, 4.8, 5.6, 6.6, 7.2, 8.3). See the sketch after this list.
6. Repeat steps 4 and 5 for all the numeric parameters.
(Because of the polynomial, during this process I can produce more or fewer combinations in the given range than the original "best parameter combinations".)
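A minimal sketch of steps 4-5 for a single numeric parameter (the degree-2 polynomial and fitting against the rank of each value are just my assumptions for illustration):

```python
import numpy as np

def smooth_parameter(values, degree=2):
    """Steps 4-5: sort one numeric parameter, fit a polynomial to the sorted
    values against their rank, and return the fitted values, which spread out
    runs of identical numbers."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    ranks = np.arange(len(values))
    coeffs = np.polyfit(ranks, values[order], degree)
    fitted = np.polyval(coeffs, ranks)
    result = np.empty_like(fitted)
    result[order] = fitted          # put fitted values back in the original row order
    return result

print(smooth_parameter([5, 5, 6, 6, 7, 8]))
# roughly [4.96, 5.21, 5.63, 6.23, 7.01, 7.96] instead of the repeated 5, 5, 6, 6, 7, 8
```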

OK, if you are a statistician you may find this method silly (I see some weaknesses as well), but I hope it will work well. At least better than trading with a single parameter combination.

But if somebody finds it really silly please tell :)
 

hlsmith

Omega Contributor
#4
Just for clarification of your above post, since it was unclear to me: it is standard practice that if you add a higher-order polynomial term to your model, you also keep all of its predecessors (e.g., X, X^2, X^3, ..., X^k) in the model unless you have a good reason to exclude them.
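For example, if the fitted polynomial goes up to X^3, the design matrix would normally contain all the lower-order columns as well; a tiny numpy illustration (nothing here is specific to the trading model):

```python
import numpy as np

x = np.array([5.0, 6.0, 5.0, 2.0, 3.0, 7.0, 9.0, 2.0])   # e.g. the P1 values

# Keep every lower-order term alongside x^3: columns are [1, x, x^2, x^3]
design = np.vander(x, N=4, increasing=True)
print(design[:2])
```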