Kripke and the tortoise
a battle of thought experiments about knowing what we mean
Prologue: on calculators and consciousness
A few days ago, Philip Goff shared a brief argument for the claim that minds are not material. In essence, he said that the behaviour of any physical system (such as a calculator or a brain) is always compatible with multiple interpretations of what it’s doing, whereas there are determinate facts about what happens inside a mind, so those facts must be sui generis and irreducible.
Goff made his argument using an example inspired by Kripke’s Wittgenstein on Rules and Private Language, but he staked out a thoroughly non-Kripkean position and I saw some confusion in later discussion about Kripke and Wittgenstein. So, I want to respond to that confusion and, along the way, enlist the help of Lewis Carroll (that’s right, of Alice’s Adventures in Wonderland). Let’s go through the looking glass and see what we find!
Act One
Enter Kripke
I want to present Kripke’s argument but not in the way he did. I think Kripke ended up saying some sensible things about meaning, but I also think his discussion contained some confusing rhetoric and a little too much hyperbole. So, I’m going to give you my interpretation of what’s important.
Think about what constitutes your knowledge of how to add two numbers together.
In the past, you’ll have been taught what adding is by being shown examples. “One” apple here and “one” apple there makes “two” apples. A good start. Now abstract it: 1 + 1 = 2. Then you learn the number line and how to count on your fingers. Next, you connect it to geometry and so on.
At some point along the way, it seems like you must have picked up a definition of addition which determines for you how to handle any two numbers, even though you’ve only ever dealt with a finite number of cases. How did that happen? What does that definition look like?
A possible problem with any answer you might give is that all of the examples you’ve encountered in your life so far are compatible with different ways of handling future ones. When you were taught 1 + 1 and all the rest, you were never specifically taught that 68 + 57 = 125, so maybe you shouldn’t derive 125 for that - maybe you should say that 68 + 57 = 5. What counts against that possibility? A natural reaction is “that’s not how addition works” but our whole question is: what do you know that makes you say that?
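Kripke gives this deviant candidate a name: ‘quus’. It agrees with addition on every case you’ve actually computed but goes its own way beyond them. As a toy sketch (in Python, with the threshold of 57 borrowed from Kripke’s example):

```python
def plus(x, y):
    """Addition as we think we mean it."""
    return x + y

def quus(x, y):
    """Kripke's deviant rule: agrees with plus whenever both
    arguments are 'small' (here, below 57), but outputs 5 otherwise."""
    if x < 57 and y < 57:
        return x + y
    return 5

# Every sum you've done before, involving only numbers below 57,
# is consistent with both rules...
assert plus(12, 34) == quus(12, 34) == 46
# ...but they diverge on the new case:
print(plus(68, 57))  # 125
print(quus(68, 57))  # 5
```

Nothing in your finite history of calculations distinguishes which of these two functions you were following all along - that is the skeptical challenge in miniature.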
This might look like a problem of induction - you’re given a finite number of examples and you’re tasked with finding a rule for an infinite range - but you’d be right to be suspicious of that framing. Though you learned addition by referring to examples, you didn’t simply try to generalise from them. You’ll also have been taught that you can use a method, like: for any two numbers, take one as a starting point and count out the other. That method is not defined inductively but it does pass the buck, for now we can ask: what do you mean by ‘count’?
You can bite the bullet and try to define a method for counting (and all the terms that it relies on) but eventually this chain will have to end and you’ll arrive at some term that can’t rely on any other. And when that happens, it seems like you’ll be left with no option but to stipulate what you want to do, and all your methods of counting and adding must grow out of that stipulation.
So, despite the fact that you have a strong feeling that you do addition by following a rule (you’re not just making it up), there’s an indeterminacy lurking at the heart of it - you could do it in different ways and there’s nothing to tell you that one way is correct.
Now, the word ‘indeterminacy’ introduces all sorts of connotations that can blow the conversation apart, so I want to be clear about what it means in practical terms: when you encounter a sum that you haven’t seen before, like 68 + 57, you’re not entitled to say that the answer must be 125 because that’s the only one that adheres to the rule of addition. You instead have to say that the answer should be 125 because that’s how you want to add.
If this is true, it doesn’t mean that “you don’t really know what addition is” or that “there’s no fact about whether you’re adding or doing something else” because these descriptions still assume that there could have been a fact about what addition is and you just haven’t got it, as if this is all a problem of knowledge. Not so. The conclusion we’re driving at is that you decide what addition is by acting, not the other way around. If you decide the nature of addition, you could hardly fail to do it, could you?
And yet… you may be wondering if we’ve made a mistake, for if you can stipulate what addition should be - if first you think “this is how I want to do it” and then you do it - don’t you have access to a definition (the thing that ‘this’ refers to) which you could just as well use as a rule? Then the indeterminacy we just identified wouldn’t hold up. Indeed, we’d actually be saying that addition is determinate, we just want to change the direction of determination - we want to say that humans invent mathematics instead of discovering it. Is that what all this hubbub is about? That we make our own conventions?
Excellent objection. That’s how the situation appears when you accept Kripke’s challenge but the argument hasn’t landed properly. To fix this, we’re going to pay a visit to wonderland.
Act Two
Enter the tortoise and Achilles
Lewis Carroll was a mathematician as well as an author, and in 1895 he published a great little philosophical dialogue which anticipates Wittgenstein’s and Kripke’s worries about rule-following. He wrote it as a conversation between a tortoise and Achilles, developing a paradox in the style of Zeno, but here I’m going to present it in a heavily reduced form (the tortoise and Achilles can watch from upstage).
Imagine you’re evaluating some simple relationships between propositions. Suppose you’ve granted that some proposition, A, is true and also that A → B (A being true implies B is true). Since both A and A → B are true, you’ll surely want to conclude B.
But not so fast! To conclude B, it seems you’d need to rely on a hidden assumption, namely that (A & (A → B)) → B. You hadn’t said that you were granting this and, if we take it away, maybe nothing follows from A & (A → B). They’re just two independent propositions that you happen to have written down. So, let’s say that you grant (A & (A → B)) → B. Do you get to conclude B now?
Not so fast! To conclude B, you would again seem to need a hidden assumption, namely that (A & (A → B) & ((A & (A → B)) → B)) → B. That’s a little hard to read but you don’t need to squint - it just generalises the last step. At each point, we’re taking all the propositions we’ve put on the table so far and we’re writing a rule which says: given everything I know, let me conclude B. But in so doing, we’re onto an infinite regress where each implication rule is defined in terms of another implication rule.
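Written out schematically (with labels $C_n$ introduced here for convenience), each new premise bundles everything granted so far into a fresh conditional, and the sequence never terminates:

```latex
\begin{align*}
C_1 &: A \\
C_2 &: A \to B \\
C_3 &: (C_1 \land C_2) \to B \\
C_4 &: (C_1 \land C_2 \land C_3) \to B \\
&\;\;\vdots \\
C_{n+1} &: (C_1 \land C_2 \land \cdots \land C_n) \to B
\end{align*}
```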
Compared to Kripke’s puzzle, this may strike you as artificial and easily solved. The infinite regress clearly happens because the analysis is mistaken about how implication works. To have entailments between propositions, we shouldn’t be expressing the mechanism of implication as just another proposition. It seems we need to step up a level and stipulate a definition for the logical system as a whole, such that ‘→’ induces the entailments we’re interested in as a matter of course. There ought to be a difference between the statements within a system and the rules for that system.
Notice how this maps to Kripke’s argument: we encountered an apparent infinite regress in the definition of ‘add’, decomposing it into a chain of other terms, and perhaps with a bit more nuance, we could say that our problem was that we were looking for a definition of addition as if it would be a fact of mathematics of the same kind as 1 + 1 = 2. We instead need to stipulate a definition of addition one level up, seeing that we need to specify how mathematical facts should be generated. Splendid.
Except… there’s that word ‘stipulate’ again and we’ve still got a wrinkle in our analysis that looks like it could undermine our claim that there’s an indeterminacy here. Stipulations don’t need to be indeterminate. They can be rules that we’re just happy to accept as premises.
The tortoise wanders over to show us our problem. He says that we should try to write down the rule which stipulates how our logical system would work. So we write down: (P & (P → Q)) ⊨ Q. This says that, if we’ve got some propositions P and Q, where P is true and P → Q is true, Q is entailed. We’re distinguishing ‘implication’ from ‘entailment’ (and have introduced the ‘⊨’ symbol), to make it clear that this is a metalogical statement, one level up from where we’re using ‘→’. But then the tortoise says: and what do you mean by ‘⊨’?
What we’ve essentially tried to do is reformulate ‘→’ as another symbol on another level as a way of shouting: “accept the damned implication!” But, as a result, we’re just moving the infinite regress around. To define the system fully, we’d now have to have a meta-metalogical statement to define ‘⊨’ and so on. What was originally a regress of propositions is now a regress of levels and we haven’t solved the problem at all.
The same is going to be true of addition: we can’t solve the problem just by saying that we stipulate meta-mathematical conventions. If you’ve read Kripke’s version of the argument, all the crazy stuff to do with skeptics, memory problems and LSD trips is really a brute force way of getting you to recognise that there’s no safety in a definition, no matter where it comes from or where you put it. If we’re stipulating anything at all, it can’t be one of those.
Act Three
Enter Wittgenstein
How do we cut this Gordian knot? Let me put it in a slogan: norms without forms.
The reason we find ourselves in a bind is because, no matter which way we tackle the problem - whether we try to have knowledge by discovery or invention, or we try to build systems of statements and meta-statements - we’re obsessed with formality.
By that, I don’t only mean logic, I mean linguistic forms. Sentences. Definitions. Descriptions. Instructions. Translations. We want to use language to control language. And when this obvious circularity brings us to an impasse, we say: it’s fine, everyone needs axioms, we’ll stipulate some. And then we do it in language! It’s as if we were to stipulate “this false sentence is true.” We contradict ourselves in intention.
To break the spell of forms, we need to turn to norms - practices for grounding meaning in social behaviour. This transition is a delicate one. In denying the power of forms, we’re denying that there can exist any statements that control how we understand something. But this also means that a norm cannot consist of a community agreeing on a definition by convention. Yet we’re surrounded by language that seems to do exactly that. “To add two numbers together, use one as a starting point and count out the other.” What kind of expression is this if not a conventional definition?
Let me leverage a very different kind of example. Suppose I want you to play Beethoven’s Moonlight sonata. If you don’t already know how to play the piano, you learn. You get the sheet music, you start slowly, tripping up on the C# minor key signature. Eventually, you can play all the right notes at a good speed, though you’re robotic and play everything at the same dynamic. You work on your technique and add all sorts of expressivity, though you don’t have a coherent view of the piece - your playing doesn’t suit the period. You undertake a careful study of the harmonic structure and the stylistic intent until you settle on your own view of how you’d like it to sound. You’re a virtuoso now but the piece always sounds a little better in your head. We arrange a recital for you to play in. You make two small mistakes in the last movement but receive a big round of applause.
Clearly, you can play the Moonlight sonata now but, throughout all your rehearsals, there was no moment at which you played it for the first time, so how do we ever reach agreement that you can play it? Immediately, some people will be lured into a crisis about meaning and will say that there’s nothing that can be said about this which is golden and eternal. These are people who are still under the spell of forms - it makes them look for definitions and, when they find none, they say that meaning itself does not exist.
We don’t make the situation any better by appealing to conventions, as that’s transparently hopeless in this case. The range of criteria we could consider is vast: whether it matters if there are mistakes in your playing, how many, what kind; whether a certain kind of interpretation is required, in execution or intention; whether some keyboard instruments are more authentic than others and so on. The possibilities are open-ended and our conversation about them will always be relative to contextual interests.
Yet we reach consensus nonetheless because when we exchange reasons for whether to say that you can play the sonata, we’re not really looking for reasons to say it, we’re looking for reasons to not say it. In trying to justify ourselves, we’re giving a critique which feels out a negative space - we’re implicitly asking: “is there anything that should be different for this to count?” And when we finally feel comfortable saying “no, there’s nothing that should be different,” that doesn’t give us criteria for saying, “this is what playing the sonata consists of,” we just reach a point where we’re happy to stop talking about our language.
When we turn to things like addition or logical implication, we like to think that their simplicity allows us to enter a stricter world of conventional definitions. But that world is always a dream which imagines we can exit language through language. A statement like “to add, take one number and count out the other” is just the same kind of thing as “to play the Moonlight sonata, you’ve got to feel the rhythm in the left hand.” It’s a cue to keep the ship on course, but the course itself is just what’s left over when you’re not going in a wrong direction.
So, to know how to do addition is not to know a rule of mathematics, or a rule of the community, or a rule of your own. It’s for you and a community to share an attitude towards activities you call ‘addition’, such that you’re all happy there’s nothing you would change about them. When we introduce newcomers to addition, or we critique our practices when we think they should change, we use language that looks definitional but which functions as gestures, encouragements and commentary, responding to the open-endedness of our behaviour. Once we’ve given all the reasons we want to give about how to continue, they won’t count as criteria for how to continue, only as reasons to stop giving reasons. What we call ‘definitions’ are really shortcuts to the end of questioning.
Epilogue
Enter Philip Goff
We now have plenty of resources to respond to Goff’s argument about the immateriality of the mind. In light of everything we’ve discussed, we can characterise the argument like this: because we cannot determine by formal analysis whether a calculator or a brain adds or does something else, there is no fact about whether it adds or not, while, introspectively, we know that we’re adding when we think we’re adding, so the mind must not be reducible to the brain.
Perhaps the most important part of a response to this is to keep steady when dealing with abstractions like ‘indeterminacy’ and ‘facts of the matter’, as these are not used consistently by everyone. Very often in philosophy, we ask a question of the form “does x exist?” - perhaps x is meaning, the self, free will, morality and so on - and an answer comes: “x is incoherent, so x cannot exist.” But this makes a mistake: if x is incoherent, nothing can be said about it, not even that it fails to exist.
When meaning is described as ‘indeterminate’, some people read this as saying: there could have been determinate meanings but they don’t exist. Language is empty. Meaning is an illusion.
This is not what it says. It says that determinate meaning in a preconceived sense of rules and definitions is incoherent - meaning is ‘indeterminate’ in the sense that it’s not that - but meaning is determinate in another sense, which is that we do reach consensus on how to talk. Consensus is never rigid or final but it is grounds for deciding whether our use of a word is right or wrong.
So, bearing this in mind, we need to challenge Goff’s analysis of the behaviour of calculators and brains in its appeal to forms instead of norms. If we want to determine whether anything is adding, we have to engage in open-ended questioning about whether there are any reasons for not saying so.
Since we have to approach this in negative terms, there is never a single place to look. With the calculator, for example, we could look at its outputs or its program (which we could examine at different levels of granularity), we could look at how the numbers on the screen relate to the program output, or at the graphics on the buttons and the signals the buttons send to the program and so on.
There’s no list of criteria for settling the issue by definition. We can only explore the space of possible justifications until we’re prepared to stop. It’s on us to decide when to stop, so a different community might well reach a different consensus, but so long as we have a consensus of our own, we can deem a calculator to be ‘adding’ by the very same standards we use for ourselves.
Goff would likely say that this changes the subject. I’ve ended up talking about how we use language in practice, but there’s something else going on when we introspect and feel that we’re adding, since that feeling is personal and private, not governed by social norms - there’s something in us which is out of reach for consensus and could not possibly be changed by community practices; we might agree with our community to speak about our phenomenology in new ways (to change our public use of ‘add’ by changing how it relates to our private concepts) but there must be something already fixed within us for ‘addition’ to be about.
The tortoise wanders over to show us our problem. He allows that our public use of ‘addition’ may change in its relation to facts about us and then says that we should come up with a symbol that we can use for our current sense of addition, even if the practices of the community change. So we invent the symbol ‘⨁’. But then the tortoise says: and if the community changes its ways, how will you know when to use ‘⨁’?
Since the answer to this can’t appeal to public language, it usually goes: “I just know what ⨁ is! I can’t say anything else about it.” But this conclusion is ironic, for it rejects both forms and norms as a basis for ‘⨁’ meaning anything. It is the one, true kind of indeterminate meaning, as there is nothing to determine whether a use of ‘⨁’ is correct except whether we choose to say so. It’s the purest form of stipulation - not even a stipulation of a rule or of a practice, just “I am right to say ‘⨁’.”
You might think this is too extreme, for surely what would determine our use of ‘⨁’ is the experiential quality of adding. But if we tried to stipulate, even non-verbally, that this is what ⨁ feels like, we would have all the problems of formal indeterminacy we’ve just explored, as this would start an infinite regress in how we justify to ourselves that we’re naming the right feeling. Alternatively, we can try to avoid this indeterminacy by treating ourselves as a community of one, with our own, private norms, but, just as in a community of many, what we’d end up identifying by ‘⨁’ is not what ⨁ is, only that we’re happy to stop asking whether to say ‘⨁’. Whatever poison you pick, there are no facts about minds that could not also be facts about brains.






What’s the difference between convention and consensus, the shared attitude of a community? As soon as I ask that, I say to myself, “Does it matter?”
I’m mostly on board with this until Act Three. I don’t think mentioning norms and practices explains why we agree on novel cases, even without specifically communicating about them beforehand. The permissiveness in the case of the sonata doesn’t help us with addition, where we agree there is only one right answer. De facto, we learn norms about meaning from something like reinforcement learning, and that can no more (or less) beat the induction problem than linguistic deliberation can.
>You can bite the bullet and try to define a method for counting but eventually this chain will have to end and you’ll arrive at some term that can’t rely on any other.
I think the problem where a basic term is defined so that it creates an exception specifically when we’re adding 68 + 57 is avoided here by looking specifically for a context-free language of mathematics. For that, we have a good understanding of how an algorithm does it without any explicit representation of “context-freeness”.