In this post, I want to tell a little story about how a study can be negligent and, through that negligence, assert conclusions that should not be drawn. That study is titled “It’s OK if ‘my brain made me do it’: People’s intuitions about free will and neuroscientific prediction” by Eddy Nahmias, Jason Shepard, and Shane Reuter (2013).
This is not to say the study has no merit at all, or that none of its conclusions are sound; only that one particular conclusion rests on a serious negligence.
In this study, participants were given various scenarios in which 100% neuro-prediction, or 100% prediction of what a person would do, is granted. They were then asked whether the person had free will, whether they were responsible, and so on. The results were as follows:
Across three experiments we found that perfect prediction was not sufficient to undermine people’s attributions of free will or responsibility. These results held whether the scenarios described perfect prediction based on neural activity in the brain (Experiments 1 and 2) or on the basis of mental activity in the mind or soul (Experiment 3). In all three experiments, mediation analyses indicated that the differences in free will and responsibility attributions between scenarios were mediated by bypassing judgments, suggesting that on the ordinary understanding of free will, agents act freely only if their mental states have an effect on their actions.
This is all fine and dandy, and it is surely the case that people will assign free will and responsibility regardless of 100% prediction (even if they should not). What this tells us is that common laypersons do not make the proper inference from prediction to “no free will”. It does not, however, mean that the free will belief they hold is actually compatible with the 100% prediction scenario. Yet that is exactly the faulty conclusion the study draws. The authors raise the criticism here:
The most significant response to our experiments is that many participants may be failing to understand or internalize relevant information from the scenarios. Perhaps people are so emotionally attached to having free will that they have a ‘‘free will no matter what’’ view and will refuse to say that some seemingly coherent scenario would take it away. Perhaps participants did not attend to the fact that every decision could be predicted with 100% accuracy.
Perhaps participants did not attend to the fact that every decision could be predicted before the agent was even aware of making their decision. If people failed to understand those features of the scenarios, then they may have failed to grasp the potential threat that neuro-prediction is supposed to pose to free will (e.g., that the technology rules out the existence or the causal role of a non-physical mind). In response, we first point out that our physicalist scenarios were based on a scenario offered by a prominent willusionist as just the sort of case that would lead people to clearly and appropriately envision what it would mean if people were fully governed by the laws of nature (Harris, 2012; also see Greene & Cohen, 2004).
And then address it here:
To make sure our participants fully understood the ramifications of the scenario, we stated three times that the neuroscientists could predict decisions with 100% accuracy, we stated three times that these predictions occur before people are aware of making their decisions, and we included a closing statement that highlighted that the experiments confirmed physicalism. Second, if most people have a conception of free will that conflicts with the physicalist scenarios (e.g., one that requires a non-physical mind or a form of uncaused agency), we should expect many more people to claim that the technology is impossible. Instead, we found that over 80% of participants thought that the technology was possible, and of the few who responded that the technology was not possible, most of them offered pragmatic or ethical reasons with no reference to free will, or minds or souls distinct from the brain. Finally, most people do respond that free will can be undermined in the case of manipulation or unknown potential manipulation, so they do not take a ‘‘free will no matter what’’ view. The most parsimonious explanation of the results is that most people accepted the possibility of the scenarios and simply do not understand free will in such a way that it conflicts with the possibility of prediction based on neural activity (see also Mele, 2012).
Instead, people might typically decide whether an agent acted freely by using something like the following principle:
An agent performed her behavior freely only if it was caused by factors that included her own reasons. As such, most people might have specific commitments regarding the basic capacities required to act freely while having no specific commitments regarding what underlies or explains those capacities. Willusionists, on the other hand, may not be so theory-lite. They tend to have specific views on what capacities are needed for free will and what underlies those capacities (e.g., Cashmore, 2010; Montague, 2008).
Here is where the negligence comes in. To suggest that people here are using a “theory-lite” version of free will that is actually logically compatible with the 100% prediction in the scenario neglects a previous study done in 2006. Rather, the metaphysical baggage people attach to free will “abilities” aligns far more with the “willusionists’” understanding (see the study for who is being referred to as a willusionist) than this analysis lets on.
That other study was “Surveying freedom: Folk intuitions about free will and moral responsibility” (Nahmias, E., S. Morris, T. Nadelhoffer, and J. Turner, 2006).
Note that Nahmias is a key author on both studies, so he knew the first study all too well. In that study, people were given a similar 100% prediction scenario, and once again the majority said that the person still had free will and assigned responsibility (blameworthiness, etc.). That study, however, asked another key question: whether the person who was 100% predicted could have done other than what was predicted. For the perceived “blameworthy” case, the majority of participants said yes, and this aligned with those who assigned “free will”.
Here is a past post about this:
Common Intuitions about Free Will (and how it needs to be defined)
The study I want to focus on is what I call the “Jeremy” study:
Scenario: Imagine that in the next century we discover all the laws of nature, and we build a supercomputer which can deduce from these laws of nature and from the current state of everything in the world exactly what will be happening in the world at any future time. It can look at everything about the way the world is and predict everything about how it will be with 100% accuracy. Suppose that such a supercomputer existed, and it looks at the state of the universe at a certain time on March 25, 2150 AD, 20 years before Jeremy Hall is born. The computer then deduces from this information and the laws of nature that Jeremy will definitely rob Fidelity Bank at 6:00 pm on January 26, 2195. As always, the supercomputer’s prediction is correct; Jeremy robs Fidelity Bank at 6:00 pm on January 26, 2195.
They were then asked to set aside whether or not such a scenario could actually take place, and were asked:
Regardless of how you answered question 1, imagine such a supercomputer actually did exist and actually could predict the future, including Jeremy’s robbing the bank (and assume Jeremy does not know about the prediction):
Do you think that, when Jeremy robs the bank, he acts of his own free will?
The majority said that Jeremy did rob the bank of his own free will. They were then asked the “otherwise” question:
In these cases, participants were asked—again, imagining the scenario were actual—whether or not Jeremy could have chosen not to rob the bank (case 6), whether he could have chosen not to save the child (case 7), or whether he could have chosen not to go jogging (case 8).
In the blameworthy variation, participants’ judgments of Jeremy’s ability to choose otherwise (ACO) did in fact track the judgments of free will and responsibility we collected, with 67% responding that Jeremy could have chosen not to rob the bank. However, in the praiseworthy case, judgments of ACO were significantly different from judgments of his free will and responsibility: Whereas a large majority of participants had judged that Jeremy is free and responsible for saving the child, a majority (62%) answered ‘‘no’’ to the question: ‘‘Do you think he could have chosen not to save the child?’’ Finally, in the morally neutral case, judgments of ACO were also significantly different from judgments of free will—again, whereas a large majority had judged that Jeremy goes jogging of his own free will, a majority (57%) answered ‘‘no’’ to the question: ‘‘Do you think he could have chosen not to go jogging?’’
The majority thought that Jeremy could have chosen not to rob the bank, even though the scenario stipulated a 100% prediction that he would. It’s important to note that the question was asked in a way that cannot be conflated with a different kind of “otherwise” response, such as the colloquial, counterfactual, or epistemic usages discussed here:
The Important Context of “Could Have Done Otherwise” (for the Free Will Debate)
The context of the study is clear:
- “It can look at everything about the way the world is and predict everything about how it will be with 100% accuracy.”
- “Regardless of how you answered question 1, imagine such a supercomputer actually did exist and actually could predict the future, including Jeremy’s robbing the bank (and assume Jeremy does not know about the prediction)”
- “participants were asked—again, imagining the scenario were actual—whether or not Jeremy could have chosen not to rob the bank“
Of course, this idea that he could have chosen not to rob the bank, given the 100% prediction that he would, is incoherent. It is not a “theory-lite” version of free will; it is the very problematic ability that “willusionists” and free will skeptics say is incoherent given this scenario. The fact of the matter is, given the 100% prediction, Jeremy could not have decided not to rob the bank. It was not in his ability. It was not in his capability, even if there was no coercion, manipulation, brainwashing, or any other person or technology forcing him to rob the bank. Per the predicted outcome, there is no real capability, any more than a chess program is capable of making a move outside of its 100% predicted results. Consciousness cannot help here; nothing can. That is what it means to have a 100% accurate prediction.
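To make the incoherence explicit, the inference can be sketched formally. This is my own rough formalization, not anything from the study; P and R are labels I am introducing:

```latex
% P: the supercomputer predicted, with 100% accuracy, that Jeremy robs the bank at time t
% R: Jeremy robs the bank at time t
\begin{align*}
&\text{(1)}\quad P                 &&\text{(stipulated by the scenario)}\\
&\text{(2)}\quad P \rightarrow R   &&\text{(a 100\% accurate prediction cannot fail)}\\
&\text{(3)}\quad R                 &&\text{(from (1) and (2), modus ponens)}
\end{align*}
```

Holding (1) and (2) fixed, as the scenario asks participants to do, “Jeremy chose not to rob the bank” (¬R) contradicts (3). Answering “yes” to the otherwise question while accepting the scenario is accepting a contradiction.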
It should also be noted that there was an inconsistency between variations of this study, in which the wording was identical but the act Jeremy takes changes. For acts where people assign blameworthiness over a wrongdoing, the majority (67%) said he “could have done otherwise”, whereas majorities said the person could not have done otherwise in the praiseworthy and neutral cases, where there is no need to attach blame (62% and 57% respectively). This shows an inconsistency and a willingness to assign extraordinary abilities in order to blame someone for wrongdoing. This type of inconsistency crystallizes the biases people have and how flexible the “free will” abilities they grant are, depending on the scenario. This is anything but a “theory-lite” version of free will.
In a separate case called the “Fred and Barney” case, people’s “otherwise” assessments tracked their free will judgments regardless of whether it was a wrongdoing or a good deed (keeping or returning a wallet), at an even higher percentage (76%). This one didn’t entail prediction, but it did entail an entirely deterministic scenario. You can see the scenario here:
The conclusion is that people have a problem making a logical connection or inference, not that they lack incoherent notions attached to the free will abilities they think exist.
This is where the negligence of the 100% neuro-prediction study comes in. Knowing this, Nahmias needed to ask the participants whether they thought the person “could have done other than what the 100% prediction said”. Given the prior study, the answer for any perceived-blameworthy case (and potentially others) would almost certainly have been a resounding “yes”, and this whole “theory-lite” conclusion would be shown for the fiction it is.
I suspect that Nahmias knows this, and that is why he opted to leave that question out, and also to leave out wrongdoing. After all, some of his conclusions don’t seem to be impartial, as I explained in my post about the 2006 study. Even if it was just an oversight, however, it is a huge oversight that ties into the conclusion made, and in turn allows compatibilists to cite this source as evidence that what people “really” mean by free will is some “theory-lite” version that lacks the problems willusionists and other free will skeptics such as myself point to. Of course, as shown by the 2006 study, this is anything but the case. Regardless of whether this was purposeful or accidental negligence, it was negligence, and it should give people doubts about the reliability of the people conducting the study.
This study should be replicated (hopefully by a separate, unbiased party), this time asking whether the person “could do other than what the 100% prediction predicted, at the time of decision”, especially for cases of perceived wrongdoing. If participants say “no” to this and still denote “free will”, then we can talk about a “theory-lite” version that common laypeople might hold. Of course, this wouldn’t take away from the implications of not being able, of one’s own accord, to have done otherwise, and what that means for strong responsibility / blameworthiness:
We already know that people assign the strong sense here, and that is a big part of the problem. This brings me to another problem: the word “responsibility” is too ambiguous. Questions about whether the person “deserves punishment” (regardless of utility) should be (indeed NEED to be) asked.
Where the study is correct: This study does seem to be correct about one thing. The idea, which some willusionists put forward, that neuro-prediction will undermine people’s beliefs about free will and responsibility could be a mistake. I think, however, that what most are really saying is that it “should” undermine people’s belief in free will and responsibility, if they are to stay coherent; not necessarily that it will for laypersons who are not adept at critical thinking. Coherence and critical thinking are simply not strong suits of the common layperson. It should be no surprise that their beliefs go unrevised, but this does not imply that what they do believe in is some coherent “lite” version. That would be a mistake: it’s a very convoluted, sticky version that can mean different abilities at different times, as studies like “Folk Intuitions on Free Will” (Shaun Nichols) explain:
“In different conditions, people give conflicting responses about agency and responsibility. In some contexts, people treat agency as indeterminist; in other contexts, they treat agency as determinist. Furthermore, in some contexts people treat responsibility as incompatible with determinism, and in other contexts people treat responsibility as compatible with determinism.”
Also, per 5 different studies, the belief in free will is also in ways linked to the desire to punish wrongdoers: Free to Punish: A Motivated Account of Free Will Belief
“Across 5 studies using experimental, survey, and archival data and multiple measures of free will belief, we tested the hypothesis that a key factor promoting belief in free will is a fundamental desire to hold others morally responsible for their wrongful behaviors. In Study 1, participants reported greater belief in free will after considering an immoral action than a morally neutral one. Study 2 provided evidence that this effect was due to heightened punitive motivations. In a field experiment (Study 3), an ostensibly real classroom cheating incident led to increased free will beliefs, again due to heightened punitive motivations. In Study 4, reading about others’ immoral behaviors reduced the perceived merit of anti-free-will research, thus demonstrating the effect with an indirect measure of free will belief. Finally, Study 5 examined this relationship outside the laboratory and found that the real-world prevalence of immoral behavior (as measured by crime and homicide rates) predicted free will belief on a country level.”
Another problem for the 2013 study is that it offered no moral-wrongdoing scenarios, when it is these very scenarios that highlight the metaphysical baggage people inject in order to be able to blame. It matters not that these would be emotionally charged; they show the “ability” people require in order to fulfill the “deserving/blame” type of responsibility they desire.
The fact of the matter is, the “free will” abilities people hold to range from consistent to inconsistent, from coherent compatibilist notions to incoherent compatibilist and libertarian notions, and so on, usually depending on context, a psychological need to blame and punish, a desire to justify one being more or less deserving than others (inequality), and so on. And the stronger the “free will” belief, the more retributive the ideas about punishment: Free will and punishment: a mechanistic view of human nature reduces retribution.
“Study 1 found that people with weaker free-will beliefs endorsed less retributive, but not consequentialist, attitudes regarding punishment of criminals. Subsequent studies showed that learning about the neural bases of human behavior, through either lab-based manipulations or attendance at an undergraduate neuroscience course, reduced people’s support for retributive punishment (Studies 2–4). These results illustrate that exposure to debates about free will and to scientific research on the neural basis of behavior may have consequences for attributions of moral responsibility.”
Note, again, that the “moral responsibility” being referred to is the stronger sense denoted here:
This is important because as long as the inconsistent, incoherent, and dangerous notions about free will are mixed in, we shouldn’t be pretending people are adhering to some consistent “theory-lite” version only.
Studies aside (they should all be taken with a grain of salt, given the ease of mistakes and of unintentional or biased negligence), I also think it obvious that people hold incoherent ideas about free will that lead to poor notions of “strong responsibility” that are harmful, as can be seen in the state of the world, people’s capacities to blame, ideas about free will and sin (on the religious side), and so on. This study, however, is too negligent and needs to be repeated with appropriate parameters that take these other studies into account.
I have more to say about this study; for example, the emphasis it places on the fact that if a person is manipulated, their free will intuitions get trumped. This is all too obvious, but what takes away free will intuitions says nothing about what the conception of free will is. That is because “free will” is an umbrella term, and disqualifiers are not the whole story. For more info on that, read here:
Or how about the flaky term “responsibility”, which is extremely ambiguous? The problem with not using scenarios where “blameworthiness” and being “deserving” are assessed is that the notion of responsibility (or even moral responsibility) could be conflated. See here for why this word is ambiguous:
I have a feeling I will be revisiting this study in a future article to point out further flaws and problematic ideas or conclusions in it. For now, just understand that the fact that people’s ideas about “free will” are unfazed by 100% prediction / neuro-prediction does not mean they understand that alternate possibilities are ruled out given the 100% prediction, especially in perceived “blameworthy” cases. I know that sounds crazy, but it just shows the lack of inference or critical thinking involved, and points to a free will ability that is anything but the “theory-lite” version this study suggests.
~
UPDATE: A reader has brought to my attention that Eddy Nahmias’s work on free will is funded by the Templeton Foundation. This foundation has various biases. Here are some posts about this by evolutionary biologist Jerry Coyne:
To quote Coyne:
“Templeton cannot, of course, mandate the research findings of its scholars (one of the studies it funded, for example, showed no effect of intercessory prayer in curing heart disease), but it clearly steers money towards projects it likes, and rewards those who produce the desired results with additional grant money. And everyone knows that: to stay on the Templeton gravy train, you have to get the results that it likes.”
The Templeton Foundation is a good example of how money can corrupt, and how assessments can “lean” in the direction the money flows from. One should be at the very least skeptical of assessments made by those funded by this foundation.
Sources
Folk Intuitions on Free Will – Shaun Nichols, 2006
Surveying freedom: Folk intuitions about free will and moral responsibility – Eddy Nahmias, Stephen Morris, Thomas Nadelhoffer, Jason Turner, 2006
It’s OK if ‘my brain made me do it’: People’s intuitions about free will and neuroscientific prediction – Eddy Nahmias, Jason Shepard, Shane Reuter, 2013
Free to Punish: A Motivated Account of Free Will Belief – Cory J. Clark, Jamie B. Luguri, Peter H. Ditto, Joshua Knobe, Azim F. Shariff, Roy F. Baumeister, 2014
Free will and punishment: a mechanistic view of human nature reduces retribution – Shariff, A. F., Greene, J. D., Karremans, J. C., Luguri, J. B., Clark, C. J., Schooler, J. W., Baumeister, R. F., & Vohs, K. D.
'Trick Slattery
71 Responses to “The Negligence in a Study: It’s OK if ‘My Brain Made Me Do It’”
While it is a kind of logical bias, my bias against work funded by the Templeton Foundation does exist.
Eddy Nahmias seems to accept grants from the foundation. Alfred Mele is another.
I can’t help thinking the foundation funds people and projects that will have a friendly outcome. The outcomes approach 100% predictable 😉
I knew Mele did, but I hadn’t realized that Nahmias may be associated as well. Verrrrry interesting!!! Thanks for that tidbit of information, I’ll have to check into it. I think the Templeton foundation is extremely biased.
I wonder if there is a correlation between the outcomes of free will studies and the Templeton Foundation. Vohs, whose work is often cited, also receives their monies. But apparently it was part of a replicability study.
http://blogs.lse.ac.uk/impactofsocialsciences/2013/04/19/pre-publication-posting-and-post-publication-review/
http://www.nytimes.com/interactive/2015/08/28/science/psychology-studies-redid.html?_r=1
and a discussion thereof (first of three blog entries).
https://rolfzwaan.blogspot.ca/2013/03/the-value-of-believing-in-free-will.html
Yeah, the “Free Will and Cheating” study of Vohs and Schooler didn’t hold up to scrutiny per the NY Times (as you mentioned):
Three Popular Psychology Studies That Didn’t Hold Up
I also mention that one here and some of the problems with it:
A Temporary Imposed Lack of Belief in Free Will? Seriously?
Also problems with the FAD/FAD+ scale used in this and many other free will studies:
Problems With The Free Will and Determinism Plus Scale (FAD-Plus)
I’m very interested in these studies, but I want them done correctly (and replicated). We need to separate out the distinction between a belief in fatalism, randomness, and determinism as well. This is why I think it’s so important to educate people on these differences. For example:
Determinism vs. Fatalism – InfoGraphic (a comparison)
You have said you are writing a book about morality in the absence of free will. (please correct me if I am wrong here).
I can’t help thinking this is not a wise move from a logic point of view. I can’t help thinking this falls into the same trap that compatibilists fall into in this debate.
Since losing my belief in free will, I have strived to live an amoral life … and I too would argue this is a betterment (though I was not keen on the word).
rom
I think a case can be made for a type of forward-looking ethical realism without backward blaming moral responsibility. For connotation reasons I prefer the term “ethics” over morality (even if I use them interchangeably). But I’m not an ethical nihilist (I think there is ethically relevant “value” intrinsic in states).
Within a perfectly deterministic universe, we still have to distinguish between the case where (a) the person acts deliberately (of his own “free will”) versus the case where (b) the person is forced by someone else to act against his will. This commonly understood definition of “free will” makes no supernatural claims and no assertion of “freedom from causation”. Yet it is sufficient for all practical purposes.
My most recent post explains why that is not the A) free will of practical importance B) free will that addresses layperson intuitions, and C) free will that addresses the historical debate. So it is anything but “sufficient”…it is entirely lacking.
On The Practical Importance of the Free Will Debate
Can’t infer “no one could do other than what the 100% prediction predicted, at the time of decision.” What a person *will* not do, no matter how definitely, is not the same as *could* not do. It could not be the case that { (A) the laws of nature are L1, L2,… and (B) the past facts are P1, P2, … and (C) the person does other than A at time t}. We cannot infer: It could not be the case that the person does other than A at time t.
The inference is there: If the person does other than A at time t, the 100% prediction that they will do A at time t can no longer be a 100% prediction, and that premise (of it being a 100% prediction) is contradicted. They “could” not do otherwise without contradicting the premise that they “will” not. No modal scope fallacy.
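To put the scope point in symbols (a rough sketch of my own, using ◇ for “it is possible that”):

```latex
% P: the 100% accurate prediction that the person will do A at time t was made
% A: the person does A at time t
\begin{align*}
\neg \Diamond\,(P \wedge \neg A) &&\text{(the prediction cannot both be 100\% accurate and wrong)}
\end{align*}
```

The scenario stipulates P and asks that it be held fixed. Saying the person “could have” done ¬A while holding P fixed asserts ◇(P ∧ ¬A), which contradicts the line above. Imagining a different prediction instead is stepping outside the stipulated scenario, not answering within it.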
Suppose, in the scenario where the person does not-A at t, the predictor would have predicted not-A. Then the predictor and all its predictions could still be 100%. No contradiction.
“It can look at everything about the way the world is and predict everything about how it will be with 100% accuracy. Suppose that such a supercomputer existed, and it looks at the state of the universe at a certain time on March 25, 2150 AD, 20 years before Jeremy Hall is born. The computer then deduces from this information and the laws of nature that Jeremy will definitely rob Fidelity Bank at 6:00 pm on January 26, 2195. “
Given that prediction, could Jeremy have chosen not to rob the bank at that time?
Narrow scope of “could”, Jeremy could have. Broad scope, no it could not have been the case that {the computer made its predictions based on (cite specific facts here) and Jeremy had chosen not to rob}. Your phrasing suggests broad scope, so then, no.
This is the phrasing/scope of the study….and you are right: “no”
“…imagine such a supercomputer actually did exist and actually could predict the future, including Jeremy’s robbing the bank (and assume Jeremy does not know about the prediction)”
“Could Jeremy have chosen not to rob the bank”
If you lop off the “Given that prediction” part, then the phrasing is more suggestive of narrow scope. In surveys, exact phrasing often matters; but of course I don’t know if that’s true here.
Sure, but for this particular study, they go out of their way to make the prediction part very clear for that very reason:
…including Jeremy’s robbing the bank (and assume Jeremy does not know about the prediction)…”
Yet people still say that Jeremy could have decided to not rob the bank. This is a big problem. I do understand your concern over the context of the words “could have”.
I disagree with your taxonomy of “coulds”. What you call “ontic” are modal. Ability statements are modal and, unless otherwise specified, default to being about just the object or person in question, not all the surrounding conditions. (Can this car do 100?) Which makes them “iffy”. (If you floor it.)
This simply is not true when the modality is about counterfactual non-abilities for ontological reality. “Can a car do 100” falls under either the counterfactual usage (IF the car is driven in such a way, it can…) or the epistemic usage in the post I linked. The ontological reality is that a car in a garage that never is and never will be driven (due to antecedent causality in a deterministic universe) could never have DONE 100, even if it is a Venom GT with no broken parts, because the surrounding conditions are part of the assessment of an ontological “ability” (something that can really happen). This is why context is important.
The ontological reality is that the Venom GT can do 100 – this ability is a bundle of dispositions that, sadly, will never be triggered. That’s what “can” means as applied to cars. What does “can really happen” even mean on your view, if not “does happen”, and how does science discover it?
The ontological reality is that the Venom GT can only do 100 IF it can be triggered to do 100. If it cannot be triggered (for any reason) it cannot do 100 in reality. “Can really happen” means that it can be actualized as a real space-time event. This is why ontological possibility differs greatly from epistemic possibility or counterfactual analysis and why context matters.
We agree that there are “can” statements about the GT that are true; you call them counterfactual, but any “can” covers multiple scenarios, even when an actual one suffices to satisfy it. We agree there are “can’t” statements which mention the GT and other factors, which are also true. No reason has been given to call one uniquely “real”. An explanation re-using “can” doesn’t help much.
I’d suggest that it is untrue that “any can covers multiple scenarios”, when under causal determinism multiple scenarios “cannot” – only one “can” (in the real sense of having the ability to be actualized in the world). The reason they are either counterfactual or epistemic is that they require an implicit “if”, and if that “if” is ontologically impossible (cannot be actualized in reality given determinism) then the reality is that those scenarios “cannot” be actualized either.
By “covers multiple scenarios” I meant: multiple scenarios are relevant to its truth or falsity. Bad phrasing, sorry. Careful specification can pick out a system which includes a human and which system “can” only do one thing. But system-wide specification isn’t scientifically privileged and isn’t relevant to human decision making.
At this point I’m unclear on what you mean by “careful specification” vs “system wide specification” (the links do not offer clarity here). Do you really mean specification vs generalization (and lack of knowledge about the future)? Also, don’t know why “human” would make a difference here.
To be clear on what we are addressing in the above: There is *specification* of causally deterministic scenarios (even absolute 100% predictions) yet people still denote “otherwise” abilities that fall outside of the specification. 😉
System-wide specification is key; one way to specify the system could be to consider everything within a light-year of the bank that Jeremy will rob. This *system* can’t terminate in an un-robbed bank. But the people in the surveys didn’t say the *system* could do so.
The majority of people in the survey are actually saying that the 100% prediction (by the 100% accurate “system” that they were supposed to imagine true for the scenario) of the robbed bank could be wrong – that “Jeremy could have chosen not to rob the bank”. This is pretty straightforward from where I sit.
We’re back to that modal scope fallacy you say you avoided. If Jeremy could have chosen differently, that doesn’t imply the computer would make a wrong prediction, it implies the computer would have predicted that other action. Yes, that’s modal reasoning, but there’s no way to ground “necessity” without modality.
Naw…we are really just back to you suggesting a fallacy that does not apply (a fallacy that only applies to a strawman). The prediction is the accepted “contingency”, not “necessity”.
Modal reasoning that suggests a different prediction cannot be used once the 100% prediction is given and asked to be accepted (which it is for the scenario). To suggest the machine could give a different prediction than what is in the scenario is completely outside of the given / accepted scenario (the contingency).
So, when survey subjects said Jeremy “could” have avoided robbing, are you interpreting that as an epistemic “could”? I.e. “as far as I, survey subject, can tell from the given info, it might or might not happen”? That would certainly be a mistake, but I don’t think that’s what they mean.
BOTH a modal assessment AND a modeless assessment of “otherwise” would be mistaken.
They were given a contingent constraint to assume (an assumption enforced more than once).
1) If they do not assume the constraint and modally go outside of it for an “otherwise” (as I think you are suggesting) – that is a mistake.
2) If they assume it and still say an “otherwise” – that is a mistake.
Either way, it is a problem with how laypersons reason about this subject/question. Now I intuit that 2 is probably what is happening (not 1), but either way there is an issue nonetheless.
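A quick way to see the two mistakes side by side is in modal notation (a sketch of my own; P = “the 100% accurate machine predicts the robbery” and R = “Jeremy robs the bank” are my labels, not the survey’s):

```latex
% Accuracy stipulation: in every scenario where the machine predicts, the prediction holds
\Box(P \rightarrow R)

% Mistake 1: answering the "could" question by dropping the stipulated prediction
\Diamond(\neg P \wedge \neg R) \quad \text{(true, but steps outside the given scenario)}

% Mistake 2: keeping the stipulation and still affirming an "otherwise"
\Diamond(P \wedge \neg R) \quad \text{(directly contradicts } \Box(P \rightarrow R)\text{)}
```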
The question asks whether or not Jeremy could have chosen not to rob the bank. That’s a narrow-scope modal question. The natural and charitable interpretation of answers is as narrow-scope modal answers. The survey didn’t ask, “if you answered yes, then what about the prediction?” but the obvious reply is that the action is evitable, and the prediction inevitably mirrors it.
A modal question does not assume that any answer is reasonable or logically consistent. “Yes” is not a reasonable answer once the scenario is given. There is not a .0000000000000000001% chance that Jeremy could have decided not to rob the bank if the prediction he would rob it is 100%. This also is a modal question:
Assuming there is only a single six-sided die that has 1 through 6 (no 7s) on the sides, could that die land on a 7 for a single roll?
“Yes – it could if it were a seven- (or more) sided die with a 7 on a side” is not a reasonable answer. It is outside the scope of the given scenario. Here are some reasonable modal answers:
No.
No, landing on a 7 is not possible in that scenario.
No, a six-sided die with no seven cannot roll a 7.
I don’t know.
A die having 7+ sides would not be *that* die. It would be another. Thus, a narrow-scope “could that die roll a 7?” gets a No. But a Jeremy that didn’t rob a bank could still be Jeremy. To get an inevitably robbed bank, we need to ask a wide-scope modal question, including additional features of the scene into the scope of the “could” question. See also “Dispositional Compatibilism” at vihvelin.com
People thinking there is a difference here is part of the problem. A 100% prediction machine that predicts that Jeremy will not rob the bank would not be *that* 100% prediction machine that is to be assumed (no different than *that* die). The Jeremy that didn’t rob the bank would not be *the* Jeremy who is in *the* ONLY universe logically compatible with *the* 100% prediction machine *we must assume* exists for the scenario. The mistake is thinking that a seven-sided die would be “more different” than A) a 100% prediction machine that predicts Jeremy does not rob the bank, OR B) a non-bank-robbing Jeremy, OR C) a universe where the different prediction machine and different Jeremy exist. Absolutely no different than a universe in which “the die” has 7 sides.
That isn’t correct. Take the spatial region consisting only of the die, and consider the laws of physics that apply to it: it’s already clear that the die never rolls a 7. Embed it in any larger context, it doesn’t matter. But take the prediction machine. Embed it in various contexts: that *does* matter: the prediction varies. Similarly for Jeremy, the action varies by context.
The ONLY distinction is that a seven-sided die (rather than a six-sided one) is a more obvious configuration difference than a different internal configuration of the 100% prediction machine….but in both cases there is a different atomic configuration. The prediction doesn’t just vary with the same configuration; the configuration varies. Again, this is no different to the point of this discussion….which is that the context is given (it cannot be embedded in other contexts). Same with Jeremy.
Again, the context is given, but it is not included in the scope of the “could” as phrased in the survey question. With regard to internal configuration, the prediction machine existed before it got evidence for its prediction of a robbed bank. At that early time, its configuration allows various predictions depending on context.
Yes, the context is included in the scope of the “could” as phrased in the survey question:
“… imagine such a supercomputer actually did exist and actually could predict the future, including Jeremy’s robbing the bank (and assume Jeremy does not know about the prediction)” —> the “could” question.
The scope of the question is not an epistemic one about prior to the prediction. It is perfectly clear that the scope purposely places the 100% prediction machine predicting the bank rob, assuming that is true, and then asking the question under that assumption.
That quote is only a statement, not the question. It lists did-happens, not had-to-happens. Of course *some* subjects may have interpreted these preparatory statements as included in the scope of the “could” – and said Jeremy couldn’t. Perhaps it would help to imagine another survey, in which events after the robbery are given as context. There’s a run on the bank, which would be 0% probable without a robbery. Could Jeremy have avoided robbing?
The “did-happen” is the prediction of Jeremy robbing the bank by a 100% accurate predicting machine – and the participants are being asked to accept that specific “did-happen” when answering the question. It doesn’t matter if the back-story scenario is not the question, the question is in reference TO the acceptance of the scenario.
But consider my alternate survey, in which the run on the bank 100% retrodicts Jeremy’s robbery. (No other robber is so infamous as to trigger a run on the bank.) A *bigger* majority of subjects would answer my survey “Jeremy could have avoided,” yes? But you wouldn’t accuse them of ignoring or misunderstanding the scenario, would you?
Edit, to clarify, in my alternate survey, there is no prediction, only the retrodiction.
One could technically introduce indeterminism into your scenario. If, however, you also included that they are to assume A) that the universe is entirely deterministic and B) that the initial conditions that precede the retrodiction are the same….and they still say “yes” – we’d have a similar problem to the 100% prediction.
In my scenario, the universe may as well be completely *necessary*-cause deterministic. But that’s overkill: Jeremy’s robbery is a 100% necessary cause for the bank run; that’s all you need to know. The scenario states that Jeremy did the robbery, and that the bank run occurred, and that the latter cannot occur without the former. If subjects given my scenario say Jeremy could have refrained, is that a mistake?
If you say “assuming the perfect retrodiction 100% of the time is such that it shows that Jeremy robbed the bank” and then ask – could Jeremy have not robbed the bank? At that point “YES” – it would be a mistake. This would be a little more analogous to assuming the 100% prediction that Jeremy will rob the bank. The main distinction is that you MUST backtrack prior to the retrodiction for the “could” question but you should not backtrack prior to the prediction for it.
Well, at least you’re consistent. But you’re consistently wrong. Unless the question explicitly scopes the retrodiction and the robbery both under the could: “Could the reliable retrodiction be given and Jeremy not rob the bank?” Failing that, you’re just imposing your own personal grammar/pragmatics interpretation, and misinterpreting the answers of any subjects who, quite reasonably, read it differently.
When the convo starts going down the (ironic btw) sarcastic jabs rabbit hole, it is perhaps time to call it quits with the ol’ “agree to disagree” mantra.
As I said, your analogy is not analogous. The 100% prediction of the bank rob is [undeniably] to be assumed for the question, and therefore it IS explicitly tied to the “could” question. Denying that (which is what you are attempting to do) is to impose your own separation of the (repeated) scenario from the question. To suggest that people “read this differently” is simply to suggest that people are unable to comprehend what it says – not that they have a “reasonable interpretation” at all.
You said that my clarified survey would be “a little more analogous” but you didn’t say what disanalogy remains. The original survey wording “imagine such a supercomputer actually did exist and actually *could* predict the future … Do you think that, when Jeremy robs the bank, he acts of his own free will?” Emphasis added. It does not specify what the prediction was as part of the question.
=======
Edit, oops, I ellipsis’d over the main supporting fact of your argument, which is unfair: “could predict the future, including Jeremy’s robbery” But still, I think it’s reasonable for subjects to backtrack. The computer *could* predict the future including robbery, or it *could* predict the future had it included no robbery. The “could” language suggests the particular prediction is open to circumstance, not a given.
======
So to clarify, I was wrong to say it didn’t specify what prediction was given. But it’s still a matter of interpretation whether that particular prediction gets scoped under “Jeremy could have…”.
Allow me to quote the whole scenario below for FULL context:
==============================================================================
“Imagine that in the next century we discover all the laws of nature, and we build a supercomputer which can deduce from these laws of nature and from the current state of everything in the world exactly what will be happening in the world at any future time. It can look at everything about the way the world is and predict everything about how it will be with 100% accuracy. Suppose that such a supercomputer existed, and it looks at the state of the universe at a certain time on March 25, 2150 AD, 20 years before Jeremy Hall is born. The computer then deduces from this information and the laws of nature that Jeremy will definitely rob Fidelity Bank at 6:00 pm on January 26, 2195. As always, the supercomputer’s prediction is correct; Jeremy robs Fidelity Bank at 6:00 pm on January 26, 2195.”
“imagine such a supercomputer actually did exist and actually could predict the future, including Jeremy’s robbing the bank (and assume Jeremy does not know about the prediction)”
================================================================================
When it says “imagine such a computer actually did exist” it is referring to the computer that made the prediction in the scenario, not some other version that did not make the prediction.
Also, the distinction (the disanalogy) between your scenario and the one above is that yours is one of necessary causality, whereas the prediction scenario is one of sufficient causality. Necessary causality need not lead to the same outcome on “rewind”; sufficient causality must. Also, the prediction is to be assumed. This is why I adjusted yours to prevent any change in retrodiction (making it more analogous)…just as there can be no change in the above prediction if we are to imagine the computer in the actual scenario.
FYI – I didn’t see your edit comments when I responded, but my comment addresses them nonetheless…so I tacked them onto your other comment for proper order. To reiterate my point: the prediction is not open to interpretation, and the “could” question is in reference to it.
Necessary causality always leads to the same outcome on trace-back, just as sufficient causality always leads, on trace-forward. Yet, more subjects will “forward-track” on my survey, saying Jeremy could have refrained and the retrodiction would then be different, which by your logic should be prohibited just as backtracking is verboten on the original. We should probably agree to disagree about the difference between background facts for a question and items scoped under its “could”.
For a “could have done otherwise” question – we bring back to before the action took place… and then denote if (forward) playout “could be different”. On necessary causality (your retrodiction scenario) it “could”, on sufficient causality (the prediction scenario) it “could not”. Unless you are looking to play the universe out “in reverse” (backward) from the retrodiction to the action (in which a backward playout could NOT play out otherwise given necessary causality), it is disanalogous.
Indeed – we should agree to disagree on whether one can make up their own referents for the “could” question or not.
Yes, that’s exactly how subjects evaluated “could have done otherwise”. And the reason they get to dismiss the retrodiction in my scenario, is that it’s *dependent* on Jeremy’s decision. And the reason in general that people would bring back to shortly before the action, is they think that’s sufficient to remove all such dependents. Remaining facts are “fixed”. Except, in prediction scenarios, some subjects begin to doubt that, and to backtrack instead.
But it is not sufficient to remove dependents for the prediction scenario, so their thinking is flawed.
If brought back to right before Jeremy decided (or even a year before he was even born) and re-played…
Prediction scenario: Jeremy’s decision could not be otherwise.
Retrodiction scenario (assuming no sufficient causality): Jeremy’s decision could be otherwise.
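The asymmetry in those two bullet points can be put in bare conditional terms (my notation, not anything from the surveys; P = the prediction, D = Jeremy’s decision to rob, B = the bank run):

```latex
% Prediction scenario: P occurs before D and is causally sufficient for it,
% so holding P fixed at replay fixes D as well
\Box(P \rightarrow D), \quad P \text{ fixed before } D \;\Rightarrow\; D \text{ cannot be otherwise}

% Retrodiction scenario: D is merely necessary for B (equivalently: no D, no B)
\Box(B \rightarrow D) \;\equiv\; \Box(\neg D \rightarrow \neg B), \quad B \text{ occurs after } D

% so a replay from before D leaves D open; if D fails, B simply fails with it
```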
The retrodiction scenario was meant to foreground your time-asymmetric rule for evaluating “could have done otherwise”. I say that subjects usually follow your rule, but sometimes they don’t. However “wrong” they may be, they think more like Yudkowsky: http://lesswrong.com/lw/rb/possibility_and_couldness/ They needn’t deny determinism.
Yudkowsky is merely back at counterfactual usages that would fall outside of the “prediction” scenario: “You could eat the banana, IF you wanted. And you could jump off a cliff, IF you wanted.”
This is no different than “Jeremy could have not robbed the bank, IF the 100% prediction said he did not” or “the die could have rolled a 7 IF it was a die with more than 6 sides”. Unfortunately, that “IF” is not the scenario. We are just full-circle back at stressing the importance of CONTEXT.
OK, I get that you think the counterfactual usage is wrong (at least in this survey), whereas I think it is the only usage that is ever relevant to ability questions. But even if I and all the survey subjects who use that approach are likewise wrong, still it’s a *different* mistake than thinking causality doesn’t apply to Jeremy.
Counterfactuals could be relevant to ability questions – as long as they are not counterfactuals of the given scenario that is asked to be accepted (e.g. an “IF” other than the scenario). I also have much doubt that laypersons are thinking “Jeremy could have not robbed the bank, IF the 100% prediction said he did not” – but we don’t really know what exactly is in their minds (it would be interesting to follow up with a “why” question). Another study shows that many laypersons use both deterministic and indeterministic responses depending on the question. On a perhaps related tangent – I also argue that a lack of sufficient causality is incoherent for an entirely causal account.
Perhaps people see the counterfactual arrow pointing the other way: IF Jeremy had not robbed, THEN the prediction would have said he wouldn’t. Again, however “wrong” that might be. Also, note that in my retrodiction, subjects who forward-track are taking a counterfactual “of the given scenario” insofar as they counter-fact away the retrodiction.
Perhaps – my point is that no matter how they are coming to their conclusion, it is a problematic conclusion given the scenario. 😉
[RE: RETRODICTION] This is why I initially said that this would be more analogous to the prediction scenario: “assuming the perfect retrodiction 100% of the time is such that it shows that Jeremy robbed the bank”…because for the prediction scenario the prediction is to be assumed (and cannot be “counter-facted away” without changing the assumed scenario).
Certainly, if we use your rule for evaluating “could have done otherwise” (CHDO) then the backward-pointing-counterfactual-arrow thinking is problematic. On 100% retrodiction, that’s how I was thinking of it, but I failed to make that clear early on, my apologies. And I also apologize for that silly “consistency” dig. I think the CHDO rule is fallible, but that’s a subject for another day.
No apologies necessary, I actually appreciate your thought process in our discussion very much, and you kept it civil which is even more important for disagreement. This discussion “caused” some interesting thought on my end – that is for sure. I’m sure we can go on and on debating CHDO…but you are right, another day. 😉
SIDE TOPIC
Regarding Vihvelin, there is a lot wrong with her assessments and I should probably write a full post at some point specifically on her position, but this comment is going to briefly summarize some problems I see right off the bat (though there are far more to address) – and since I’m dealing with a link to a lot of content I’m breaking my short comment rule this time in order to address the link. If you want to stay conversational, it would be better to address a specific point rather than link with a “see X”. Say something like: “Dispositional Compatibilism makes this point…” (you can link then) and that would give us something to haggle back and forth on. On to the link in general:
First, for “determinism”, Vihvelin starts by conflating the important distinction between sufficient and necessary causality – in which we only need sufficiency for determinism. Her first logic board addresses necessary (rather than sufficient) causality.
She then addresses (IF) counterfactual assessments (in light of the above) that do not apply to the point of determinism. Whether B or C are the case is equally causally dictated (given determinism). Counterfactuals do not free “free will” from the grips of determinism. In a deterministic universe, if B happens and C does not, given the same initial conditions of the universe, B had to happen and C couldn’t have happened. Counterfactuals that address an otherwise miss the point.
Regarding her so-called “dispositional compatibilism”, once again the layperson beliefs are not logically consistent (with nature) as she claims. She’s simply wrong on the “compatibility” between people’s “dispositions” and the “natural world” here. No one doubts we have the causal powers she proposes, or that people have counterfactual dispositions; the issue is that other powers people think they have are inconsistent with reality – allowing for things such as retributivism, which is pervasive in free will thinking, as studies show.
She is one of those compatibilists who are part of the problem, whereas compatibilism doesn’t have to contrive like that. For her to conclude “we’ve got the free will we think we have” is blatantly false, as the above post shows in numerous ways.
That being said, thanks for sharing the link. At some point I will have to fully debunk it. Now back to our regularly scheduled discussion leaving off on the “die roll” analogy.
END SIDE TOPIC
500 chars is not enough to reply, and anyway, I gave the reference just so you could see where I’m coming from. Two remarks though. The whole point of compatibilism is not to dodge the “grips of determinism” but to note how free the grip leaves us. And some people do doubt human causal powers: fatalism wasn’t invented by philosophers.
Agreed. I was only referring to Vihvelin in the link you provided, which makes the wrongheaded claim that the normal definition of determinism is wrong (and tries to revise it as well). I misspoke when I wrote “no one” (“no one” was meant colloquially – not literally “no one” – there is always “someone” who believes in flat earth theory, etc.). I was only referring to the majority of philosophically rational hard determinists / hard incompatibilists who do not also conclude fatalism. But yes, there are indeed fatalists out there. We don’t need compatibilism to denote the problems with fatalism while bypassing other important topics.
Vihvelin *agrees* that necessary causality is the wrong definition of determinism, and sufficient causality is the right one. That’s the point of the first logic diagram. The best scientific deterministic theories are bidirectionally deterministic, though, so it seems to me like a mere technicality.
The problem is with her suggesting that a sufficient cause analysis goes against most of those definitions she provides. It does not – as most of those definitions are in line with sufficient causality and do not require necessary causality.
Her point can be seen in the question “So what would happen if A were true at W1? Would B be true, or C or both?” –> it just does not matter for determinism (in most of those definitions she provides) whether we can know from A being true if B is true, C is true, or both are true. The only requirement is that B or C are causally dictated by a past event or events.
That being said, agreed on “bidirectional” being a physics staple anyway. ;-)
She doesn’t suggest a sufficient-cause analysis goes against the definitions. She suggests the definitions would be *satisfied* by a necessary-cause physics. Which is a problem (technically).
I don’t see that. It seems to me (at least for the first logic diagram) that she is saying that determinism would be satisfied without necessary-cause (ml-FROM) physics (which is technically true). That fact, however, doesn’t go against most of those definitions of determinism. I could be misinterpreting her on this.
It’s sneaky. In W1 node A is false (empty circle). So from the *actual* events at t1 one *can* infer description at t2, but in physically *possible* world W2 one couldn’t. The argument presupposes a non-Humean understanding of causal laws.
Agreed. She assumes a lack of necessary causality (but again, most versions of determinism do not rely on necessary causality, only sufficient causality- hence my criticism). Assuming there is necessary causality, however, there would be “more than just true or false” to account for to backtrack to B or C (so she’d be over-simplifying for her point).