Is explaining a wrong model lying?

With so much focus on model explainability, especially impressive efforts like github.com/ModelOriented/DALEX, what happens when we ask for an explanation of a wrong model? If the equation the NN has learned is wrong (the coefficients, the form of the polynomial, or anything else), then is blindly/algorithmically explaining it a kind of machine lying?
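To make the scenario concrete, here is a minimal sketch of what I mean, using scikit-learn's permutation importance as a stand-in for a DALEX-style explainer (the data, feature names, and model choice are just illustrative assumptions): a deliberately misspecified model is fit and then "explained", and the explanation faithfully describes the wrong model rather than the true data-generating process.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

# Synthetic data: y depends on x1 quadratically and on x2 not at all.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=500)

# Deliberately misspecified model: a plain linear fit cannot capture x1**2.
wrong_model = LinearRegression().fit(X, y)
print("R^2 of the wrong model:", wrong_model.score(X, y))  # close to zero

# "Explain" the wrong model anyway (stand-in for a DALEX-style explainer).
# The importances describe what the fitted model uses, not what reality uses.
result = permutation_importance(wrong_model, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["x1", "x2"], result.importances_mean):
    print(f"{name}: importance = {imp:.4f}")
```

The permutation importances should come out near zero for both features, which accurately describes the misspecified linear model even though x1 fully determines y. That is the situation the question is about: the explanation is algorithmically faithful to a model that is itself wrong.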

#doubt #machine-learning #ann