Total probability theorem in a Bayes' theorem question

Let it be assumed that you are working on a project in R. The project consists of two parts: the first part is based on descriptive statistics, and the second one models the data. The probability that there is an error in the first part is 0.13. As the second part is more mathematical, the probability of an error in the second part is 0.38, independent of the first one. In addition:
The probability that the program will crash when there is an error in only the first part is 0.6.
The probability that the program will crash when there is an error in only the second part is 0.92.
The probability that the program will crash when there is an error in both parts is 0.79.

So here the most important thing to find is P(crash).
According to the total probability theorem: P(x) = P(x | y) * P(y) + P(x | ~y) * P(~y)
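As a quick illustration of the two-event form of the law, here is a minimal sketch with made-up numbers (0.3, 0.7, and 0.2 below are hypothetical, not taken from the problem):

```python
# Law of total probability for a binary condition y:
#   P(x) = P(x | y) * P(y) + P(x | ~y) * P(~y)
p_y = 0.3              # hypothetical P(y)
p_x_given_y = 0.7      # hypothetical P(x | y)
p_x_given_not_y = 0.2  # hypothetical P(x | ~y)

# The events y and ~y partition the sample space, so P(~y) = 1 - P(y).
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)
print(round(p_x, 2))  # 0.35
```

The key requirement is that the conditioning events are mutually exclusive and together cover the whole sample space.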

Now here is the confusing thing: how do I calculate P(x | ~y) * P(~y), which is P(crash | no error in first part) * P(no error in first part)?

Now, "no error in the first part" could mean that there is an error in the second part, or an error in both the first and second parts? Isn't it so?
So according to that:

P(crash) = P(crash | error in first part) * P(error in first part) + P(crash | error in second part) * P(error in second part) + P(crash | error in both parts) * P(error in both)
= (0.6 * 0.13) + (0.92 * 0.38) + 0.79 * (0.13 + 0.38)
= 0.8305

Someone told me that total probability means summing over all the possible outcomes. So should I include P(crash | error in both parts) * P(error in both) in that?
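One way to sanity-check a partition like this numerically is to enumerate all four mutually exclusive cases (no error, error only in the first part, error only in the second part, error in both) and verify that their probabilities sum to 1. This sketch assumes the two errors are independent, as stated, and additionally assumes the program never crashes when neither part has an error (the problem does not give P(crash | no error)):

```python
p1, p2 = 0.13, 0.38  # P(error in part 1), P(error in part 2), independent

# Each case maps to (P(case), P(crash | case)).
# "neither" -> 0.0 is an assumption; the problem does not state it.
cases = {
    "only first":  (p1 * (1 - p2),       0.60),
    "only second": ((1 - p1) * p2,       0.92),
    "both":        (p1 * p2,             0.79),
    "neither":     ((1 - p1) * (1 - p2), 0.0),
}

# The four cases partition the sample space, so their probabilities sum to 1.
assert abs(sum(p for p, _ in cases.values()) - 1.0) < 1e-12

p_crash = sum(p * c for p, c in cases.values())
print(round(p_crash, 6))  # 0.391538
```

Note that under this partition the event "error in the first part" is split between "only first" and "both", so each scenario is counted exactly once.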
 