I'm testing writing 10 MB of data to a server. The expected result echoed back is 10485760 bytes (160 writes of 65536 bytes). In approximately 1 out of 5 test runs I get back 10420224 bytes instead, which is exactly 65536 bytes (one write) less than the expected 10485760. Is there standard software engineering terminology for a case where the same algorithm produces different results across test runs?
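
For context, here is a minimal sketch of the kind of echo test I'm describing (the host, port, and payload contents are placeholders, not my actual setup):

```python
import socket

CHUNK_SIZE = 65536            # size of one write
NUM_WRITES = 160              # 160 * 65536 = 10485760 bytes (10 MB)
EXPECTED = CHUNK_SIZE * NUM_WRITES

# Hypothetical server address; the real test target isn't shown here.
HOST, PORT = "127.0.0.1", 9000

with socket.create_connection((HOST, PORT)) as sock:
    payload = b"x" * CHUNK_SIZE
    for _ in range(NUM_WRITES):
        sock.sendall(payload)
    sock.shutdown(socket.SHUT_WR)   # signal end of data to the echo server

    # Count every byte the server echoes back.
    received = 0
    while True:
        data = sock.recv(CHUNK_SIZE)
        if not data:
            break
        received += len(data)

print(f"expected {EXPECTED} bytes, received {received} bytes")
```

Most runs print 10485760 received bytes, but roughly one in five prints 10420224, as if one 65536-byte chunk never came back.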