Hi Mario,
maybe I am misunderstanding you, and possibly I'm being a bit provocative: are you saying that predictions without understanding (e.g. machine learning) are useless?
Because what you wrote sounds very much like when I told you that there is no understanding outside the world of pen and paper...
;-)
Leo
I wouldn’t say useless: even opaque ML methods are remarkably useful if they automate a task that is part of the scientific workflow but not critically related to understanding. Say you want a sample of tidally interacting galaxies: you do not need to sift through thousands of images if you have a neural net that does it for you. On the other hand, if you give up your attempts at achieving direct human understanding of a phenomenon because you choose to automate that understanding away… you are like that medieval lord who didn’t have time to go to the toilet, so he ordered his servant to go pee for him.
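For concreteness, the "sifting" workflow Mario describes can be sketched as a toy example. Everything here is hypothetical: synthetic 4-dimensional feature vectors stand in for real galaxy images, and a plain logistic-regression classifier stands in for the neural net. The point is only the shape of the pipeline: train an opaque model, then let it flag candidates for human follow-up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for image features: "interacting" galaxies (label 1) have
# systematically higher asymmetry-like features than "isolated" ones (label 0).
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 4)),    # isolated
               rng.normal(1.5, 1.0, (n, 4))])   # interacting
y = np.concatenate([np.zeros(n), np.ones(n)])

# Minimal logistic-regression classifier trained by gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# "Sift" a fresh batch: keep only candidates the model flags as interacting,
# so a human only inspects the shortlist, not thousands of images.
X_new = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
                   rng.normal(1.5, 1.0, (50, 4))])
scores = 1.0 / (1.0 + np.exp(-(X_new @ w + b)))
candidates = np.where(scores > 0.5)[0]
print(f"{len(candidates)} of {len(X_new)} images flagged for follow-up")
```

Note that nothing in this sketch explains *why* a galaxy is interacting; the model only reproduces the labelling, which is exactly the distinction the thread is about.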
Perhaps relevant: Von Wright, Georg Henrik. Explanation and understanding. Cornell University Press, 2004. Cited by Pearl here: https://www.degruyter.com/document/doi/10.1515/jci-2018-2001/html