The paper describes an experiment in which two groups of translators annotate Spanish and Simplified Chinese MT output of the same English source texts (ST) using an MQM-derived annotation schema. Annotators first fragment the ST and MT output (i.e. the target text, TT) into alignment groups (AGs) and then label each AG with an error code. We investigate the inter-annotator agreement of the AGs and their error annotations. We then correlate the average error agreement (i.e. the MT error evidence) with translation process data collected during the translation production of the same English texts in previous studies. We find that MT accuracy errors with higher error-evidence scores affect production and reading durations during post-editing. We also find that from-scratch translation is more difficult for ST words with more evident MT accuracy errors. Surprisingly, Spanish MT accuracy errors also correlate with total ST reading time for translations (post-editing and from-scratch translation) into very different languages. We conclude that expressions that trigger MT accuracy errors in one language pair (English-to-Spanish) are likely to be difficult to translate into other languages as well, for both humans and computers, whereas this does not hold for MT fluency errors.
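To make the central quantity concrete, the following is a minimal sketch (not the authors' code) of how an error-evidence score could be derived as the average error agreement per AG and then correlated with a process measure. All column names ("ag_id", "annotator", "error_code", "reading_ms") and the toy values are hypothetical; only the computation pattern, fraction of annotators marking an AG with a given error category, Spearman correlation with a duration measure, mirrors the method described above.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-annotator error labels for each alignment group (AG).
annotations = pd.DataFrame({
    "ag_id":      [1, 1, 2, 2, 3, 3],
    "annotator":  ["A", "B", "A", "B", "A", "B"],
    "error_code": ["accuracy", "accuracy", "fluency", "none", "none", "none"],
})

# Error evidence: the fraction of annotators labelling an AG with an
# accuracy error, i.e. the average error agreement per AG.
evidence = (
    annotations.assign(is_accuracy=annotations.error_code.eq("accuracy"))
               .groupby("ag_id").is_accuracy.mean()
               .rename("accuracy_evidence")
)

# Hypothetical process data: total ST reading time per AG (e.g. from the
# keylogging/eye-tracking data of the earlier production studies).
process = pd.Series({1: 5400, 2: 1200, 3: 900}, name="reading_ms")

# Correlate error evidence with reading duration (Spearman: rank-based,
# so robust to the typically skewed duration distributions).
rho, p = spearmanr(evidence, process)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```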