As a computational modeller I am part of a group of people doing science in a way that was impossible only a few decades ago. Much of computational modelling combines features of theoretical work (identifying the essential elements of reality that need to be captured and creating a computational representation of them) with features of experimental work (using the computer model as a surrogate for the reality being studied, so that experiments can be run on it quickly and exhaustively). Here is an example where Ziv Frankenstein (working with +Alexander Anderson , +Simon Hayward , +Gus Ay and myself) captured what we thought were some of the essential cellular and microenvironmental players in prostate cancer progression, and used that model to study how the tumour microenvironment can help explain how the tumour evolves.
Can (computational) models be trusted?
This week Aeon magazine published this piece by Jon Turney, in which he asks whether we should trust computational models at all. Computational models allow us to explore very complicated scenarios that would be impossible to study otherwise. The issue the article raises is whether we are beginning to rely too much on these models. This is a genuine concern: in some fields (I think +Artem Kaznatcheev might agree with me that the social sciences could be one of them) experiments are really hard and the data are scarce. This means that computational models have to be either too abstract (limiting their detailed predictive power) or risk making too many assumptions about aspects that are not well understood (and thus producing wrong predictions).
Bottom line: computational models are extraordinarily useful, but they depend on good data and a good understanding of what is being studied. We are much better off for having the ability to use them, but be wary of detailed predictions when little is known, for they are likely to be little more than guesses.