Adjusting for repeat-test-exposure bias in diagnostic accuracy studies

I'm conducting a meta-analysis of the diagnostic accuracy of an index test compared with a reference standard. For my question on this thread, I've replaced all the medical jargon/details with statistical terms.

The majority of studies that I have found (and the ones included in my analysis) perform the index test first, followed by the reference standard, for each patient. They then state something like: X cases were diagnosed using the index test, and Y cases were diagnosed using the reference standard. Both the index test and the reference standard are similar invasive physiological tests; the only difference is the patient's position during testing. Previous studies have shown that each time the test (index test or reference standard) is repeated, there is a slight increase in the incidence of positive diagnoses. Since this "repeat-test-exposure" effect has been quantified (i.e., I know approximately how many additional patients are likely to have a positive result simply because the test was repeated), I would like to adjust my sensitivity (or detection rate) and odds ratio estimates in the meta-analysis to account for the fact that the reference standard was conducted after the index test.
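To make the intended adjustment concrete, here is a minimal sketch of the naive approach I have in mind, assuming the quantified repeat-exposure effect can be expressed as an additive excess probability `delta` of a positive result on a second administration (all names and numbers below are illustrative, not from my actual data):

```python
# Illustrative sketch: subtract the expected excess positives due to
# repeat-test-exposure from the reference-standard arm (which was always
# administered second), then recompute detection rates and the odds ratio.
# `delta` is the assumed additive excess positive probability per repetition.

def adjusted_ref_positives(n, y_pos, delta):
    """n: patients tested; y_pos: observed reference-standard positives;
    delta: known excess positive probability attributable to repetition.
    Returns the reference-standard positive count with the excess removed."""
    excess = delta * n              # expected extra positives from repetition
    return max(y_pos - excess, 0.0)

def odds(p):
    return p / (1.0 - p)

# Hypothetical single-study counts for illustration
n, x_pos, y_pos, delta = 100, 30, 45, 0.05

y_adj = adjusted_ref_positives(n, y_pos, delta)   # 45 - 0.05*100 = 40 positives
p_index = x_pos / n                                # index-test detection rate
p_ref_adj = y_adj / n                              # adjusted reference detection rate
or_adj = odds(p_index) / odds(p_ref_adj)           # adjusted odds ratio
```

My worry is that this marginal-count correction ignores within-patient correlation between the two tests (the studies report only the marginal totals X and Y, not the paired 2x2 cross-tabulation), which is part of why I'm unsure it is defensible.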

I've searched extensively but have been unable to find a clear method for making this adjustment. Does anyone here know the most appropriate approach?