Jan 25, 2016
 

[Image: Compatibilist Free Will Machine – fake ad]

Above is a fake ad created by me. Just another graphic poking a little harmless fun at some compatibilist definitions of free will.

The idea behind it was to build a “machine” that has most, if not all, of the qualities many compatibilists tend to associate with the term free will, yet is obviously and entirely forced to do what it does.

So what compatibilist “ingredients” does the machine in the “fake ad” above have? First, it has conscious thoughts and desires that the machine itself produces. This is often what the compatibilist would refer to as the “willing” aspect. Keep in mind that it matters not that the machine has no control over the causality that leads to its own thoughts.

The causality for this machine is that, unknown to the machine, when it pulls the lever up, it presses a button behind the scenes that sends an impulse to build a desire to pull the lever down. That desire gets injected into the brain and forces the next willed event – the “want to pull lever down”. Then, based on that want, the machine actions the desire, which in turn presses another button that makes the machine want to “pull lever up”. The machine will thus pull the lever up and down, each time wanting to do so and actioning that want.
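For those who want the loop spelled out, here is a minimal sketch of that mechanism in code. Everything in it (the Machine class, the state names, the step method) is my own illustration of what the ad describes, nothing official:

```python
class Machine:
    """Toy model of the free will machine's deterministic causal loop."""

    def __init__(self):
        self.lever = "down"
        self.desire = "pull lever up"  # the first injected "willed event"

    def step(self):
        # The machine acts on whatever it currently wants...
        if self.desire == "pull lever up":
            self.lever = "up"
            # ...which presses a hidden button that injects the next desire.
            self.desire = "pull lever down"
        else:
            self.lever = "down"
            self.desire = "pull lever up"
        return self.lever


machine = Machine()
print([machine.step() for _ in range(6)])
# ['up', 'down', 'up', 'down', 'up', 'down'] – every action is "wanted",
# yet the whole sequence is fixed by the prior state.
```

Each “want” is just another cog: given the state the machine is in, the next desire and the next action could not have been anything else.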

The second key part is that the machine has the ability to action that desire. It isn't being prevented from doing so. The arm has the ability to move the lever up or down, and there is no person coercing the machine to do one or the other. It is enacting its own will, and it has the “freedom” to do so (in the sense compatibilists use when they refer to “freedom”).

This machine has the key components of consciousness, wanting, and actioning.

There is a $5.99 plugin that offers another component that some (but not all) compatibilists think important. The plugin causally creates a second-order desire that trumps the action of the lever pulling. In other words, it sends the machine a new signal telling it that, even though it desires to pull the lever, there are reasons it shouldn't do so. This creates a brief deliberation process in which the “second-order” desire always ends up trumping the desire to pull the lever, making the machine stop pulling it. So yes, this machine can deliberate and weigh options, in which the higher-weighted option (the second-order desire) wins out.

The fact that the two are weighed gives the compatibilist free will machine just one more “ability” often associated with compatibilist free will. It matters not that, for this plugin, the second-order desire always wins; the point remains. If the plugin pumped differently weighted ideas into the brain and pulling the lever sometimes won out, it would make no difference. The output would be based entirely on whichever had the larger weighting.
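To make the weighting point concrete, here is a hedged sketch of the plugin's deliberation step. The function name and the numeric weights are hypothetical, purely for illustration:

```python
def deliberate(pull_lever_weight: float, second_order_weight: float) -> str:
    """Return whichever injected desire carries the larger weight.

    The machine never chooses the weights it is handed; the "winner"
    is dictated entirely by the comparison.
    """
    if second_order_weight > pull_lever_weight:
        return "refrain from pulling the lever"
    return "pull the lever"


# As described, the $5.99 plugin always injects a heavier second-order
# desire, so deliberation always halts the lever pulling:
print(deliberate(pull_lever_weight=0.4, second_order_weight=0.9))

# Even if the plugin varied the weights and the lever sometimes won,
# the output would still follow whichever weight happened to be larger:
print(deliberate(pull_lever_weight=0.7, second_order_weight=0.2))
```

Deliberation here is real in the compatibilist's sense – options are weighed – but the outcome falls out of the weights, not out of the machine.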

I also threw in a rudimentary but “real” biological brain in a vat, as some compatibilists denote the biological aspect as part of their free will semantic, as if such is important. Let's make sure we have as many compatibilist components as possible, even if they are truly irrelevant.

The only component it doesn't have that is sometimes attributed to compatibilist semantics is the idea that the ability “evolved”, but I find that part the most irrelevant of all. It matters not whether the ability evolved or was built, if the ability itself is the same. And of course, a human-built machine didn't evolve (at least not in the biological sense).

This machine will work with a large majority of compatibilist semantics: it has the free will to be who it is, want what it wants, and action what it wants. In that sense it is probably more “free” than most people who want things they can never have.

But look carefully at the machine. Notice how its actions are entirely dictated and it couldn't have done otherwise. Notice the causal influences that lead up to the thought/desire to pull the lever up or down, depending on the pulse that was sent (or not sent, if the second-order plugin is used). This machine's decisions and actions are entirely deterministic.

A hard incompatibilist such as myself would say it is absurd to propose that the machine had free will, and in turn it would be absurd to blame the machine for its actions. But that is because we define free will to align with the ability most laypersons intuitively feel they possess. For example, most people feel they could have done otherwise, but it's obvious this machine could not have.

And even if we throw some quantum randomness into the machine's decisions, randomness that could make it do who knows what – even if we plug in the libertarian free will plugin that accomplishes this task – the machine has no say over those random variables. Any new decision made was never “up to the machine”.
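A quick sketch makes the point; the plugin's name and interface below are hypothetical stand-ins:

```python
import random


def libertarian_plugin_decision(rng: random.Random) -> str:
    """Replace the deterministic impulse with a random draw.

    The outcome now varies, but it is dictated by the draw,
    not by the machine – still never "up to the machine".
    """
    return rng.choice(["pull lever up", "pull lever down", "do nothing"])


rng = random.Random()  # stand-in for a genuine quantum source
print(libertarian_plugin_decision(rng))
```

Swapping determinism for dice changes which outcome happens, not who (or what) is in charge of it.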

For the hard incompatibilist such as myself, the definition of free will provided here is of great importance to so many other topics:

[Image: FREE WILL – definition]

And that is the free will neither we have, nor any compatibilist free will machine. 😉


'Trick Slattery

'Trick Slattery is the author of Breaking the Free Will Illusion for the Betterment of Humankind. He's an author, philosopher, artist, content creator, and entrepreneur. He has loved and immersed himself in philosophy since he was a teenager. It is his first and strongest passion. Throughout the years he has built a philosophy based on analytic logic and critical thinking. Some of the topics he is most interested in are of a controversial variety, but his passion for the topics and their importance drives him to want to express these ideas to others. His other passions include pen and ink line art and digital artwork.


  4 Responses to “Compatibilist Free Will Machine – Fake Ad”

  1. Trick,

    Bravo. Nicely done.

  2. I would order two if it were nondeterministic.

  3. ‘Trick, your article has reminded me of some hilarious moments during my career. I design then build a prototype system. My peers join me in testing the system. We observe its behaviour under test and reach the conclusion that it can’t possibly behave this way, even if it was deliberately designed to attempt to behave in the way that it actually does!

    I think this exemplifies Hanlon’s razor: Never attribute to malice that which is adequately explained by stupidity.
