I’m trying to determine the efficiency of the charging process and see how much energy is lost along the way.
Two charges that I recently recorded:
13 ÷ 13.59 = 0.96 -> 4% loss
25 ÷ 27.46 = 0.91 -> 9% loss
Why the difference? It’s enough for the 13 to actually be 12.5 and the loss jumps to 8%; and if it were 12 kWh (a ±1 kWh deviation), the loss would be 12%. So small readings amplify the relative error. I have to prepare better tests for more reliable data and measurements.
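The sensitivity above can be checked with a short Python sketch (the helper name `loss_pct` is my own, and the numbers are the readings quoted above). It shows how the same ±1 kWh car-meter deviation moves the result far more on a small charge than on a larger one:

```python
def loss_pct(car_kwh, wall_kwh):
    # Loss = 1 - (energy reported by the car / energy drawn from the wall)
    return (1 - car_kwh / wall_kwh) * 100

# Nominal readings from the two charges
print(loss_pct(13, 13.59))  # ~4.3% loss
print(loss_pct(25, 27.46))  # ~9.0% loss

# Same -1 kWh deviation on the car's reading, small vs. larger charge:
print(loss_pct(12, 13.59))  # ~11.7% loss -- a 1 kWh swing moves the result ~7 points
print(loss_pct(24, 27.46))  # ~12.6% loss -- the same swing moves it only ~3.6 points
```

In other words, the fixed ±1 kWh uncertainty is a much bigger fraction of a 13 kWh charge than of a 27 kWh one, which is why the small charges give such scattered loss figures.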
Edit: The first full charge, done a month ago from a starting point of 1% up to 100%, gives the following numbers:
78 ÷ 86.26 = 0.90 -> 10% loss
A simple error analysis shows that the kWh display of the Tesla Model S is not very accurate: it can be off by as much as ±1 kWh, while the wall meter has a possible deviation of only ±0.01 kWh. The worst-case lower and upper bounds of the interval then work out to:
77 ÷ 86.27 = 0.89 -> 11% loss
79 ÷ 86.25 = 0.92 -> 8% loss
So the real loss lies somewhere between 8% and 11%.
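The worst-case bounds can be sketched the same way, pairing the deviations so they work against each other (car reading low while the wall reads high, and vice versa); the variable names are my own:

```python
# Full 1% -> 100% charge, with assumed meter deviations:
# +-1 kWh on the car's kWh display, +-0.01 kWh on the wall meter.
car, wall = 78.0, 86.26
d_car, d_wall = 1.0, 0.01

# Lowest efficiency: car reads low AND wall reads high; highest: the opposite.
eff_low = (car - d_car) / (wall + d_wall)    # ~0.893 -> ~11% loss
eff_high = (car + d_car) / (wall - d_wall)   # ~0.916 -> ~8% loss
print(f"loss between {(1 - eff_high) * 100:.1f}% and {(1 - eff_low) * 100:.1f}%")
# prints: loss between 8.4% and 10.7%
```

Note that the wall meter’s ±0.01 kWh barely matters here; the interval is dominated almost entirely by the car’s ±1 kWh display resolution.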