\chapter*{Introduction}
\addcontentsline{toc}{chapter}{Introduction}
%% PUT ALL THE PROVOCATIVE STUFF IN THE INTRODUCTION
\section*{A new kind of philosophy}
Some people think that philosophy never makes progress. In fact,
professional philosophers might think that more frequently --- and
feel it more acutely --- than anyone else. At the beginning of the
20th century, some philosophers were so deeply troubled that they
decided to cast all previous philosophy on the scrap heap, and to
rebuild from scratch. ``Why shouldn't philosophy be like science?''
they asked. ``Why can't it also make genuine progress?''
Now, you might guess that these philosophers would have located
philosophy's problems in its lack of empirical data and experiments.
One advantage of the empirical sciences is that bad ideas (such as
``leeches cure disease'') can be falsified through experiments.
However, this wasn't the diagnosis of the first philosophers of
science; they didn't see empirical testability as the {\it sine qua
non} of a progressive science. Their guiding light was not the
empirical sciences, but mathematics, and mathematical physics.
The 19th century had been a time of enormous progress in mathematics,
not only in answering old questions and extending applications, but
also in clarifying and strengthening the foundations of the
discipline. For example, George Boole had clarified the structure of
logical relations between propositions, and Georg Cantor had given a
precise account of the concept of ``infinity'', thereby setting the
stage for the development of the new mathematical theory of sets. The
logician Gottlob Frege had proposed a new kind of symbolic logic that
gave a precise account of all the valid argument forms in mathematics.
And the great German mathematician David Hilbert, building on a rich
tradition of analytic geometry, proposed an overarching axiomatic
method in which all mathematical terminology is ``de-interpreted'' so
that the correctness of proofs is judged on the basis of purely formal
criteria.
For a younger generation of thinkers, there was a stark contrast
between the ever more murky terminology of speculative philosophy, and
the rising standards of clarity and rigor in mathematics. ``What is
the magic that these mathematicians have found?'' asked some
philosophically inclined scientists at the beginning of the 20th
century. ``How is it that mathematicians have a firm grip on concepts
such as `infinity' and `continuous function', while speculative
philosophers continue talking in circles?'' It was time, according to
this new generation, to rethink the methods of philosophy as an
academic discipline.
The first person to propose that philosophy be recreated in the image
of 19th century mathematics was Bertrand Russell. And Russell was not
at all modest in what he thought this new philosophical method could
accomplish. Indeed, Russell cast himself as a direct competitor with
the great speculative philosophers, most notably with Hegel. That is,
Russell thought that, with the aid of the new symbolic logic, he could
describe the fundamental structure of reality more clearly and
accurately than Hegel himself did. Indeed, Russell's ``logical
atomism'' was intended as a replacement for Hegel's monistic idealism.
Russell's grand metaphysical ambitions were cast upon the rocks by his
student Ludwig Wittgenstein. In essence, Wittgenstein's
\textit{Tractatus Logico-Philosophicus} was intended to serve as a
\textit{reductio ad absurdum} of the idea that the language of
mathematical logic is suited to mirror the structure of reality in
itself. To the extent that Russell himself accepted Wittgenstein's
rebuke, this first engagement of philosophy and mathematical logic
came to an unhappy end. In order for philosophy to become wedded to
mathematical logic, it took a distinct second movement, this time
involving a renunciation of the ambitions of traditional speculative
metaphysics. This second movement proposed not only a new method of
philosophical inquiry, but also a fundamental reconstrual of its aims.
As mentioned before, the 19th century was a golden age for mathematics
in the broad sense, and that included mathematical physics.
Throughout the century, Newtonian physics had been successfully
extended to describe systems that had not originally been thought to
lie within its scope. For example, prior to the late 19th century,
changes in temperature had been described by the science of
thermodynamics, which describes heat as a sort of continuous substance
that flows from one body to another. But then it was shown that the
predictions of thermodynamics could be reproduced by assuming that
these bodies are made of numerous tiny particles obeying the laws of
Newtonian mechanics. This reduction of thermodynamics to statistical
mechanics led to much philosophical debate over the existence of
unobservable entities, e.g.\ the tiny particles (atoms) whose movement
is supposed to explain macroscopic phenomena such as heat. Leading
scientists, such as Boltzmann, Mach, Planck, and Poincar\'e sometimes
took opposing stances on these questions, and it led to more general
reflection on the nature and scope of scientific knowledge.
These scientists couldn't have predicted what would happen to physics
at the beginning of the 20th century. The years 1905--1915 saw no
fewer than three major upheavals in physics. These upheavals began
with Einstein's publication of his special theory of relativity, and
continued with Bohr's quantum model of the hydrogen atom, and then
Einstein's general theory of relativity.  If anything became obvious
through these revolutions, it was that we didn't understand the nature
of science as well as we had thought: people like Einstein and Bohr
were changing the rules of the game.  It was high time to reflect on the
nature of the scientific enterprise as a whole.
The new theories in physics also raised further questions,
specifically about the role of mathematics in physical science. All
three of the new theories --- special and general relativity, along
with quantum theory --- used highly abstract mathematical notions, the
likes of which physicists had not used before. Even special
relativity, the most intuitive of the three theories, uses
four-dimensional geometry, and a notion of ``distance'' that takes
both positive and negative values. Things only got worse when, in the
1920s, Heisenberg proposed that the new quantum theory make use of
non-commutative algebras which had no intuitive connection whatsoever
to things happening in the physical world.
The scientists of the early 20th century were decidedly philosophical
in outlook. Indeed, reading the reflections of the young Einstein or
Bohr, one realizes that the distinction between ``scientist'' and
``philosopher'' had not yet been drawn as sharply as it is today.
Nonetheless, despite their philosophical proclivities, Einstein, Bohr,
and the other scientific greats, were not philosophical system
builders, if only because they were too busy publicizing their
theories, and then working for world peace. Thus, the job of ``making
sense of how science works'' was left to some people who we now
consider to be philosophers of science.
If we were to call anybody the first ``philosopher of science'' in the
modern sense of the term, then it should probably be \emph{Moritz
Schlick} (1882--1936). Schlick earned his PhD in physics at Berlin
under the supervision of Max Planck, and thereafter began studying
philosophy. During the 1910s, Schlick became one of the first
philosophical interpreters of Einstein's new theories, and in doing
so, he developed a distinctive view in opposition to Marburg
neo-Kantianism. In 1922, Schlick was appointed chair of {\it
Naturphilosophie} in Vienna, a post that had earlier been held by
Boltzmann and then by Mach.
When Schlick formulated his epistemological theories, he did so in a
conscious attempt to accommodate the newest discoveries in mathematics
and physics. With particular reference to mathematical knowledge,
Schlick followed 19th century mathematicians --- most notably Pasch
and Hilbert --- in saying that mathematical claims are true by
definition, and that the words that occur in the axioms are thereby
implicitly defined. In short, those words have no meaning beyond that
which accrues to them by their role in the axioms.
While Schlick was planting the roots of philosophy of science in
Vienna, the young \emph{Hans Reichenbach} (1891--1953) had found a way
to combine the study of philosophy, physics, and mathematics by moving
around between Berlin, G\"ottingen, and Munich --- where he studied
philosophy with Cassirer, physics with Einstein, Planck, and
Sommerfeld, and mathematics with Hilbert and Noether. He struggled at
first to find a suitable academic post, but eventually Reichenbach was
appointed at Berlin in 1926. It was in Berlin that Reichenbach took
on a student named Carl Hempel (1905--1997), who would later bring
this new philosophical approach to the elite universities in the US.
Hempel's students include several of the major players in 20th century
philosophy of science, such as Adolf Gr\"unbaum, John Earman, and
Larry Sklar. Reichenbach himself eventually relocated to UCLA, where
he had two additional students of no little renown: Wesley Salmon and
Hilary Putnam.
However, back in the 1920s, shortly before he took the post at Berlin,
Reichenbach had another auspicious meeting at a philosophy conference
in Erlangen. Here he met a young man named Rudolf Carnap who, like
Reichenbach, found himself poised at the intersection of philosophy,
physics, and mathematics. Reichenbach introduced Carnap to his friend
Schlick, the latter of whom took an eager interest in Carnap's
ambition to develop a ``scientific philosophy.'' A couple of short
years later, Carnap was appointed assistant professor of philosophy in
Vienna --- and so began the marriage between mathematical logic and
philosophy of science.
\section*{Carnap}
Having been a student of Frege's in Jena, Rudolf Carnap (1891--1970)
was an early adopter of the new logical methods. He set to work
immediately trying to employ these methods in the service of a new
style of philosophical inquiry. His first major work --- {\it Der
Logische Aufbau der Welt} \citeyearpar{carnap1928} --- attempted the
ultra-ambitious project of constructing all scientific concepts out of
primitive (fundamental) concepts. What is especially notable for our
purposes was the notion of {\it construction} that Carnap employed,
for it was a nearby relative to the notion of {\it logical
construction} that Russell had employed, and which descends from the
mathematician's idea that one kind of mathematical object (e.g.\ real
numbers) can be constructed from another kind of mathematical object
(e.g.\ natural numbers). What's also interesting is that Carnap takes
over the idea of {\it explication}, which arose in mathematical
contexts, e.g.\ when one says that a function $f$ is ``continuous''
just in case for each $\epsilon >0$, there is a $\delta >0$ such that
\dots
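For the record, the explication of continuity alluded to above can be
stated in full: a function $f$ is continuous at a point $a$ just in
case
\[
\forall \epsilon > 0 \;\, \exists \delta > 0 \;\, \forall x \, \bigl( \, |x-a| < \delta \;\rightarrow\; |f(x)-f(a)| < \epsilon \, \bigr) .
\]
The point of such an explication is that a vague, intuitive notion is
replaced by one whose application can be settled by calculation and
proof.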
When assessing philosophical developments, such as these, that are so
closely tied to developments in the exact sciences, we should keep in
mind that ideas that are now clear to us might have been quite opaque
to our philosophical forebears. For example, these days we know quite
clearly what it means to say that a theory $T$ is complete. But to
someone like Carnap in the 1920s, the notion of completeness was vague
and hazy, and he struggled to integrate it into his philosophical
thoughts. We should keep this point in mind as we look toward the
next stage of Carnap's development, where he attempted a purely
``syntactic'' analysis of the concepts of science.
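(For reference, the modern definition runs as follows: a theory $T$ is
{\it complete} just in case, for each sentence $\varphi$ of its
language, either $T \vdash \varphi$ or $T \vdash \neg\varphi$.)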
In the late 1920s, the student Kurt G\"odel (1906--1978) joined in the
discussions of the Vienna circle, and Carnap later credited G\"odel's
influence for turning his interest to questions about the language of
science. G\"odel gave the first proof of the completeness of the
predicate calculus in his doctoral dissertation (1929), and two years
later, he obtained his famous incompleteness theorem, which shows that
there is some truth of arithmetic that cannot be derived from the
first-order Peano axioms.
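Stated a bit more precisely, in modern terms: if $T$ is a consistent,
recursively axiomatizable theory extending first-order Peano
arithmetic, then there is a sentence $G_T$ in the language of
arithmetic such that $T \nvdash G_T$, even though $G_T$ is true in the
standard model of the natural numbers.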
In proving incompleteness, G\"odel's technique was
``meta-mathematical'', i.e.\ he employed a theory $M$ {\it about} the
first-order theory $T$ of arithmetic. Moreover, this metatheory $M$
employed purely syntactic concepts, e.g.\ the length of a string of
symbols, or the number of left parentheses in a string, or being the
last formula in a valid proof that begins from the axioms of
arithmetic. This sort of approach proved to be fascinating for
Carnap, in particular, because it transformed questions that seemed
hopelessly vague and ``philosophical'' into questions that were
tractable --- and indeed tractable by means of the very methods that
scientists themselves employed. In short, G\"odel's approach
indicated the possibility of an exact science of the exact sciences.
And yet, G\"odel's inquiry was restricted to one little corner of the
exact sciences: arithmetic. Carnap's ambitions went far beyond
elementary mathematics; he aspired to apply these new methods to the
entire range of scientific theories, and especially the new theories
of physics. Nonetheless, Carnap quickly realized that he faced
additional problems beyond those faced by the metamathematician, for
scientific theories --- unlike their mathematical cousins --- purport
to say something {\it contingently true}, i.e.\ something that could
have been otherwise. Hence, the logical approach to philosophy of
science isn't finished when one has analyzed a theory $T$ qua
mathematical object; one must also say something about how $T$ latches
on to empirical reality.
Carnap's first attempts in this direction were a bit clumsy, as he
himself recognized. In the 1920s and 1930s, philosophers of science
were just learning the basics of formal logic. It would take another
forty years until ``model theory'' was a well-established discipline;
and the development of mathematical logic continues today (as we hope
to make clear in this book). However, when mathematical logic was
still in its infancy, philosophers often tried the ``most obvious''
solution to their problems --- not realizing that it couldn't stand up
to scrutiny. Consider, for example, Carnap's attempt to specify the
{\it empirical content} of a theory $T$. Carnap proposes that the
vocabulary $\Sigma$ in which a theory $T$ is formulated must include
an empirical subvocabulary $O\subseteq \Sigma $, in which case the
empirical content of $T$ can be identified with the set $T|_O$ of
consequences of $T$ restricted to the vocabulary $O$. Similarly, in
attempting to cash out the notion of ``reduction'' of one theory to
another, Carnap initially said that the concepts of the reduced theory
needed to be explicitly defined in terms of the concepts of the
reducing theory --- not realizing that he was thereby committing
himself to a far narrower notion of reduction than was being used in the
sciences.
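Carnap's proposal can be written out explicitly: where
$\mathrm{Sent}(O)$ denotes the set of sentences formulable using only
the vocabulary $O$ (plus logical symbols), the empirical content of
$T$ is
\[
T|_O \;=\; \{ \, \varphi \in \mathrm{Sent}(O) \, : \, T \vdash \varphi \, \} .
\]
As we will see, it is precisely this definition that Putnam later
attacked.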
In Carnap's various works, however, we do find the beginnings of an
approach that is still relevant today. Carnap takes a ``language''
and a ``theory'' to be objects of his inquiries, and he notes
explicitly that there are choices to be made along the way. So, for
example, the classical mathematician chooses a certain language, and
then adopts certain transformation rules. In contrast, the
intuitionistic mathematician chooses a different language, and adopts
different transformation rules. Thus, Carnap allows himself to ascend
semantically --- to look at scientific theories from the outside, as
it were. From this vantage point, he is no longer asking the
``internal questions'' that the theorist herself is asking. He is not
asking, for example, whether there is a greatest prime number.
Instead, the philosopher of science is raising ``external questions'',
i.e.\ questions about the theory $T$, and especially those questions
that have precise syntactic formulations. For example, Carnap
proposes that the notion of a sentence's being ``analytic relative to
$T$'' is an external notion that we, metatheorists, use to describe
the structure of $T$.
The 20th century concern with ``analytic truth'' didn't arise in the
seminar rooms of philosophy departments --- or at least, not in
philosophy departments like the ones of today. In fact, this concern
began rather with 19th century geometers, faced with two parallel
developments: (1) the discovery of non-Euclidean geometries, and (2)
the need to raise the level of rigor in mathematical arguments.
Together, these two developments led mathematical language to be
disconnected from the physical world. In other words, one key outcome
of the development of modern mathematics was the {\it
de-interpretation} of mathematical terms such as ``number'' or
``line''. These terms were replaced by symbols that bore no intuitive
connection to external reality.
It was this de-interpretation of mathematical terms that gave rise to
the idea that analytic truth is {\it truth by postulation}, the very
idea that was so troubling to Russell, and then to Quine. But in the
middle of the 19th century, the move that Russell called ``theft''
enabled mathematicians to proceed with their investigations in absence
of the fear that they lacked insight into the meanings of words such
as ``line'' or ``continuous function''. In their view, it didn't
matter what words you used, so long as you clearly explained the rules
that governed their use. Accordingly, for leading mathematicians such
as Hilbert, mathematical terms such as ``line'' mean nothing more nor
less than what the axioms say of them; and it's simply impossible to write
down false mathematical postulates. There is no external standard
against which to measure the truth of these postulates.
It's against this backdrop that Carnap developed his notion of
analytic truth in a framework; and that Quine later developed his
powerful critique of the analytic-synthetic distinction. However,
Carnap and Quine were men of their time, and their thoughts operated
at the level of abstraction that science had reached in the 1930s.
The notion of logical metatheory was still in its infancy, and it had
hardly dawned on logicians that ``frameworks'' or ``theories'' could
themselves be treated as objects of investigation.
%% anti-metaphysics
%% internal external
\section*{Quine}
If one was a philosophy student in the late 20th century, then one
learned that Quine ``demolished'' logical positivism. In fact, the
errors of positivism were used as classroom lessons in how not to
commit the most obvious philosophical blunders. How silly to state a
view that, if true, entails that one cannot justifiably believe it!
During his years as an undergraduate student at Oberlin, \emph{Willard
van Orman Quine} (1908--2000) had become entranced with Russell's
mathematical logic. After getting his PhD from Harvard in 1932, Quine
made a beeline for Vienna, just at the time that Carnap was setting
his ``logic of science'' program into motion. Quine quickly became
Carnap's strongest critic. As the story is often told, Quine was
single-handedly responsible for the demise of Carnap's program, and of
logical positivism more generally.
Of course, Quine was massively influential in 20th century philosophy
--- not only for the views he held, but also via the methods he used
for arriving at those views. In short, the Quinean methodology looks
something like this:
\begin{enumerate}
\item One cites some theorem $\phi$ in logical metatheory.
\item One argues that $\phi$ has certain philosophical consequences,
e.g.\ makes a certain view untenable.
\end{enumerate}
Several of Quine's arguments follow this pattern, even if he doesn't
always explicitly mention the relevant theorem $\phi$ from logical
metatheory. One case where he is explicit is in his 1940 paper with
Nelson Goodman, where he ``proves'' that every synthetic truth can be
converted to an analytic truth. Whatever one may think of Quine's
later arguments against analyticity, there is no doubt, historically
speaking, that this metatheoretical result played a role in Quine's
arriving at the conclusion that there is no analytic-synthetic
distinction. And it would only be reasonable to think that {\it our}
stance on the analytic-synthetic distinction should be responsive to
what this mathematical result can be supposed to show.
As the story is typically told, Quine's ``Two dogmas of empiricism''
dealt the death blow to logical positivism. However, Carnap presented
Quine with a moving target, as his views continued to develop. In
``Empiricism, semantics, and ontology'' \citeyearpar{carnap-eso},
Carnap further developed the notion of a {\it framework}, which bears
striking resemblances both to the notion of a {\it scientific theory},
and hence to the notion of a theory $T$ in first-order logic. Here
Carnap distinguishes two types of questions --- the questions that are
{\it internal} to the framework, and the questions that are {\it
external} to the framework. The internal questions are those that
can be posed in the language of the framework, and for which the
framework can (in theory) provide an answer. In contrast, the
external questions are those that we ask {\it about} a framework.
Carnap's abstract idea can be illustrated by simple examples from
first-order logic. If we write down a vocabulary $\Sigma$ for a
first-order language, and a theory $T$ in this vocabulary, then a
typical internal question might be something like, ``Does anything
satisfy the predicate $P(x)$?''. In contrast, a typical external
question might be, ``How many predicate symbols are there in
$\Sigma$?''.  Thus, the internal/external distinction corresponds
roughly to the older distinction between object- and meta-language
that frames Carnap's discussion in {\it Logische Syntax der Sprache}
\citeyearpar{carnap1934}.
The philosophical point of the internal/external distinction was
supposed to be that one's answers to external questions are not held
to the same standards as one's answers to internal questions. A
framework includes rules, and an internal question should be answered
in accordance with these rules. So, to take one of Carnap's favorite
examples, ``Are there numbers?'' can naturally be construed as an
external question, since no mathematician is actively investigating
that question. This question is {\it not} up for grabs in
mathematical science --- instead, it's a presupposition of
mathematical science. In contrast, ``Is there a greatest prime
number?'' is internal to mathematical practice, i.e.\ it is a question
to which mathematics aspires to give an answer.
Surely most of us can grasp the intuition that Carnap is trying to
develop here. The external questions must be answered in order to set
up the game of science; the internal questions are answered in the
process of playing the game of science. But Carnap wants to push this
idea beyond the intuitive level --- he wants to make it a cornerstone
of his theory of knowledge. Thus, Carnap says that we may single out
a certain special class of predicates --- the so-called {\it
Allw\"orter} --- to label a domain of inquiry. For example, the
number theorist uses the word ``number'' to pick out her domain of
inquiry --- she doesn't investigate whether something falls under the
predicate ``$x$ is a number''. In contrast, a number theorist might
investigate whether there are numbers $x,y,z$ such that $x^3+y^3=z^3$;
and she simply doesn't consider whether some other things, which are
not themselves numbers, satisfy this relation.
\cite{quine1951a,quine1960} takes up the attack against Carnap's
internal/external distinction. While Quine's attack has several
distinct maneuvers, his invocation of hard logical facts typically
goes unquestioned. In particular, Quine appeals to the supposedly
hard logical fact that every theory in a language that has several
distinct quantifiers (i.e.\ many-sorted logic) is equivalent to a
theory in a language with a single unrestricted
quantifier. \begin{quote} It is evident that the question whether
there are numbers will be a category question only with respect to
languages which appropriate a separate style of variables for the
exclusive purpose of referring to numbers. If our language refers
to numbers through variables that also take classes other than
numbers as values, then the question whether there are numbers
becomes a subclass question. \ldots Even the question whether there
are classes, or whether there are physical objects becomes a
subclass question if our language uses a single style of variables
to range over both sorts of entities. Whether the statement that
there are physical objects and the statement that there are black
swans should be put on the same side of the dichotomy, or on
opposite sides, comes to depend upon the rather trivial
consideration of whether we use one style of variables or two for
physical objects and classes. \citep[p.~208]{quine1976} \end{quote}
Thus, suggests Quine, there is a metatheoretical result --- that a
many-sorted theory is equivalent to a single-sorted theory --- that
destroys Carnap's attempt to distinguish between {\it Allw\"orter} and
other predicates in our theories.
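One standard way of making Quine's cited fact precise: given a theory
in a two-sorted language, with variables of sort $\sigma_1$ and of
sort $\sigma_2$, one passes to a single-sorted language with new unary
predicates $S_1$ and $S_2$, translates sorted quantifiers by
relativization,
\[
(\forall x \, {:} \, \sigma_1) \, \varphi \;\longmapsto\; \forall x \, \bigl( S_1(x) \rightarrow \varphi \bigr), \qquad
(\exists x \, {:} \, \sigma_1) \, \varphi \;\longmapsto\; \exists x \, \bigl( S_1(x) \wedge \varphi \bigr),
\]
and adds the axioms $\exists x \, S_i(x)$ to ensure that each sort is
nonempty.  The resulting single-sorted theory proves the translation
of a sentence just in case the original many-sorted theory proves that
sentence.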
We won't weigh in on this issue here, in our introduction. It would
be premature to do so, because the entire point of this book is to lay
out the mathematical facts in a clear fashion, so that the reader can
judge the philosophical claims for herself.
In ``Two dogmas of empiricism'', Quine argues that it makes no sense
to talk about a statement's admitting of confirming or infirming
(i.e.\ disconfirming) instances, at least when taken in isolation.
Just a decade later, \emph{Hilary Putnam}, in his paper ``What
theories are not'' \citep{putnam1962}, applied Quine's idea to entire
scientific theories.  Putnam, a student of the ur-positivist
Reichenbach, now turns the positivists' primary weapon against them,
to undercut the very distinctions that were so central to their
program. In this case, Putnam argues that the set $T|_O$ of
``observation sentences'' does not accurately represent a theory $T$'s
observational content. Indeed, he argued that a scientific theory
cannot properly be said to have observational content; and hence that
the warrant for believing it cannot flow from the bottom (the
empirical part) to the top (the theoretical part). The move here is
paradigmatic Putnam: a little bit of mathematical logic deftly invoked
to draw a radical philosophical conclusion. This isn't the last time
that we will see Putnam wield mathematical logic in the service of a
far-reaching philosophical claim.
\section*{The semantic turn}
In the early 1930s, the Vienna circle made contact with the group of
logicians working in Warsaw, and in particular with \emph{Alfred
Tarski} (1901--1983). As far as 20th century analytic philosophy is
concerned, Tarski's greatest influence has been through his bequest of
\emph{logical semantics}, along with his explications of the notions
of \emph{structure} and \emph{truth in a structure}. Indeed, in the
second half of the 20th century, analytic philosophy has been deeply
intertwined with logical semantics, and ideas from model theory have
played a central role in debates in metaphysics, epistemology,
philosophy of science, and philosophy of mathematics.
The promise of a purely syntactic metatheory for mathematics fell into
question already in the 1930s when Kurt G{\"o}del proved the
incompleteness of Peano arithmetic. At the time, a new generation of
logicians realized that not all interesting questions about theories
could be answered merely by looking at theories ``in themselves'', and
without relation to other mathematical objects. Instead, they
claimed, the interesting questions about theories include questions
about how they might relate to antecedently understood mathematical
objects, such as the universe of sets. Thus was born the discipline
of logical semantics. The arrival of this new approach to metatheory
was heralded by Alfred Tarski's famous definitions of ``truth in a
structure'' and ``model of a theory''. Thus, after Tarski, to
understand a theory $T$, we have not only the theory {\it qua}
syntactic object but also a veritable universe $\mathrm{Mod}(T)$
of models of $T$.
\emph{Bas van Fraassen} was one of the earliest adopters of logical
semantics as a tool for philosophy of science, and he effectively
marshaled it in developing an alternative to the dominant outlook of
scientific realism. Van Fraassen conceded Putnam's point that the
empirical content of a theory cannot be isolated syntactically. And
then, in good philosophical fashion, he transformed Putnam's {\it
modus ponens} into a {\it modus tollens}: the problem is not with
empirical content per se, but with the attempt to explicate it
syntactically. Indeed, van Fraassen claimed that one needs the tools
of logical semantics in order to make sense of the notion of empirical
content; and equipped with this new explication of empirical content,
empiricism can be defended against scientific realism. Thus, both the
joust and the parry were carried on within an explicitly meta-logical
framework.
Since the 1970s, philosophical discussions of science have been
profoundly influenced by this little debate about the place of syntax
and semantics. Prior to the criticisms --- by Putnam, van Fraassen,
et al. --- of the ``syntactic view of theories'', philosophical
discussions of science frequently drew upon new results in
mathematical logic. As was pointed out by van Fraassen particularly,
these discussions frequently degenerated, as philosophers found
themselves hung up on seemingly trivial questions, e.g.\ whether the
observable consequences of a recursively axiomatized theory are also
recursively axiomatizable. Part of the shift from syntactic to
semantic methods was supposed to be a shift toward a more faithful
construal of science in practice. In other words, philosophers were
supposed to start asking the questions that arise in the practice of
science, rather than the questions that were suggested by an obsessive
attachment to mathematical logic.
The move away from logical syntax has had some healthy consequences in
terms of philosophers engaging more closely with actual scientific
theories. It is probably not a coincidence that since the fall of the
syntactic view of theories, philosophers of science have turned their
attention to specific theories in physics, biology, chemistry, etc.
As was correctly pointed out by van Fraassen, Suppes, and others,
scientists themselves don't demand first-order axiomatizations of
these theories --- and so it would do violence to those theories to
try to encode them in first-order logic. Thus, the demise of the
syntactic view allowed philosophers to freely draw upon the resources
of set-theoretic structures, such as topological spaces, Riemannian
manifolds, Hilbert spaces, $C^*$-algebras, etc.
Nonetheless, the results of the semantic turn have not been uniformly
positive. For one, philosophy of science has seen a decline in
standards of rigor, with the unfortunate consequence that debating
parties more often than not talk past each other. For example, two
philosophers of science might take up a debate about whether
isomorphic models represent the same or different possibilities.
However, these two philosophers of science may not have a common
notion of ``model'' or of ``isomorphism''. In fact, many philosophers
of science couldn't even give you a precise formal explication of the
word ``isomorphism'' --- even though they rely on the notion in many of
their arguments. Instead, their arguments rely on some vague sense
that isomorphisms preserve structure, and an even more vague sense of
what structure is.
In this book, we'll see many cases in point, where a technical term
from science (physics, math, or logic) has made its way into
philosophical discussion, but has then lost touch with its technical
moorings. The result is almost always that philosophers add to the
stock of confusion, rather than reducing it. How unfortunate it is
that philosophy of science has fallen into this state, given the role
we could play as prophets of clarity and logical rigor. One notable
instance where philosophers of science could help increase clarity is
the notion of {\it theoretical equivalence}. Scientists, and
especially physicists, frequently employ the notion of two theories
being equivalent. Their judgments about equivalence are not merely
important for their personal attitudes towards their theories, but
also for determining their actions --- e.g.\ will they search for a
crucial experiment to determine whether $T_1$ or $T_2$ is true? For
example, students of classical mechanics are frequently told that the
Lagrangian and Hamiltonian frameworks are equivalent, and on that
basis, they are discouraged from trying to choose between them.
Now, it's not that philosophers don't talk about such issues.
However, in my experience, philosophers tend to bring to bear
terminology that is alien to science, and which sheds no further light
on the problems. For example, if an analytic philosopher is asked,
``when do two sentences $\phi$ and $\psi$ mean the same thing?'' then
he is likely to say something like, ``if they pick out the same
proposition.'' Here the word ``proposition'' is alien to the
physicist; and what's more, it doesn't help to solve real-life
problems of synonymy. Similarly, if an analytic philosopher is asked,
``when do two theories $T_1$ and $T_2$ say the same thing?'' then he
might say something like, ``if they are true in the same possible
worlds.'' This answer may conjure a picture in the philosopher's
head, but it won't conjure any such picture in a physicist's head ---
and even if it did, it wouldn't help decide controversial cases. We
want to know whether Lagrangian mechanics is equivalent to Hamiltonian
mechanics, and whether Heisenberg's matrix mechanics is equivalent to
Schr\"odinger's wave mechanics. The problem here is that space of
possible worlds (if it exists) cannot be surveyed easily; and the task
of comparing the subset of worlds in which $T_1$ is true with the
subset of worlds in which $T_2$ is true is hardly tractible. Thus,
the analytic philosopher's talk about ``being true in the same
possible worlds'' doesn't amount to an {\it explication} of the
concept of equivalence. An explication, in the Carnapian sense,
should supply clear guidelines for how to use a concept.
Now, don't get me wrong. I am not calling for a Quinean ban on
propositions, possible worlds, or any of the other concepts that
analytic philosophers have found so interesting. I only want to point
out that these concepts are descendants, or cousins, of similar
concepts that are used in the exact sciences. Thus, it's important
that analytic philosophers --- to the extent that they want to
understand, and/or clarify science --- learn to tie their words back
down into their scientific context. For example, philosophers'
possible worlds are the descendant of the logician's ``models of a
theory'', and the mathematician's ``solutions of a differential
equation'', and the physicist's ``points in state space.'' Thus, it's
fine to talk about possible worlds, but it would be advisable to align
our usage of the concept with the way it is used in the sciences.
As we saw before, Carnap had self-imposed the constraint that a
philosophical explication of a concept must be {\it syntactic}. So,
for example, to talk about ``observation sentences,'' one must
construct a corresponding predicate in the language of syntactic
metalogic --- a language whose primitive concepts are things like
``predicate symbol'' and ``binary connective''. Carnap took a swing
at defining such predicates, and Quine, Putnam, and friends found his
explications to be inadequate. There are many directions that one
could go from here --- and one of these directions remains largely
unexplored. First, one can do as Quine and Putnam themselves did:
stick with logical syntax, and change one's philosophical views.
Second, one can do as van Fraassen did: move to logical semantics, and
stick with Carnap's philosophical views. (To be fair, van Fraassen's
philosophical views are very different than Carnap's --- I only mean
here to indicate that there are certain central respects in which van
Fraassen's philosophical views are closer to Carnap's than to
Quine's.) The third option is to say: perhaps logical syntax had not
yet reached a fully mature stage in 1950, and perhaps new developments
will make it more feasible to carry out syntactic explications of
philosophical concepts. That third option is one of the objectives of
this book, i.e.\ to raise syntactic analysis to a higher level of
nuance and sophistication.
\section*{Model theoretic madness}
By the 1970s, scientific realism was firmly entrenched as the dominant
view in philosophy of science. Most of the main players in the field ---
Boyd, Churchland, Kitcher, Lewis, Salmon, Sellars, etc.\ --- had taken
up the realist cause. Then, with a radical about-face, Putnam again
took up the tools of mathematical logic, this time to argue for the
incoherence of realism. In his famous ``model-theoretic argument'',
Putnam argued that logical semantics --- in particular, the
L\"owenheim-Sk{\o}lem theorem --- implies that any consistent theory
is true. In effect, then, Putnam proposed a return to a more liberal
account of theoretical equivalence, indeed, something even more
liberal than the logical positivists' notion of empirical equivalence.
Indeed, on the most plausible interpretation of Putnam's conclusion,
it entails that any two consistent theories are equivalent to each
other.
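In skeletal form (my reconstruction, which suppresses Putnam's further
qualifications about operational and theoretical constraints), the
argument runs as follows. Suppose $T$ is a consistent theory with only
infinite models, and suppose there are infinitely many actual objects.
By the L\"owenheim-Skolem theorem, $T$ has a model $M$ of the same
cardinality as the collection of actual objects; transporting the
interpretation of $M$ along any bijection between $\mathrm{dom}(M)$
and the actual objects yields a model $M^{*}$ with
\[ M^{*}\models T , \qquad \mathrm{dom}(M^{*}) = W ,\] where $W$ is
the set of actual objects. Hence $T$ comes out true of the actual
world under \emph{some} assignment of reference; and any constraint
the realist adds to single out the ``intended'' assignment is, Putnam
claims, just more theory, subject to the same maneuver.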
Whatever you might think of Putnam's radical claim, there is no doubt
that it stimulated some interesting responses. In particular,
Putnam's claim prompted the arch-realist David Lewis to clarify the
role that {\it natural properties} play in his metaphysical system.
According to Lewis, the defect in Putnam's argument is the assumption
that a predicate $P$ can be assigned to any subset of objects in the
actual world. This assumption is mistaken, says Lewis, because not
every random collection of things corresponds to some natural class;
and we should only consider interpretations in which predicates that
occur in $T$ are assigned to natural classes of objects in the actual
world. Even if $T$ is consistent, there may be no such interpretation
relative to which $T$ is actually true.
There are mixed views on whether Lewis' response to Putnam is
effective. However, for our purposes, the important point is that the
upshot of Lewis' response would be to move in the direction of a more
conservative account of theoretical equivalence. And now the question
is whether the notion of theoretical equivalence that Lewis is
proposing goes too far in the other direction. On one interpretation
of Lewis, his claim is that two theories $T$ and $T'$ are equivalent
only if they share the same ``primitive notions''. If we apply that
claim literally to first-order theories, then we might think that
theories $T$ and $T'$ are equivalent only if they are written with the
same symbols. However, this condition wouldn't even allow
notationally variant theories to be equivalent.
While Lewis was articulating the realist stance, Putnam was digging up
more arguments for a liberal and inclusive criterion of theoretical
equivalence. Here he drew on his extensive mathematical knowledge to
find examples of theories that mathematicians call equivalent, but
which metaphysical realists would call inequivalent. One of Putnam's
favorite examples here was axiomatic Euclidean geometry, which some
mathematicians formulate with points as primitives, and other
mathematicians formulate with lines as primitives --- but they never
argue that one formulation is more correct than the other. Thus,
Putnam challenges the scientific credentials of realism by giving
examples of theories that scientists declare to be equivalent, but
which metaphysical realists would declare to be inequivalent.
At the time when Putnam put forward these examples, analytic
philosophy was unfortunately growing more distant from its logical and
mathematical origins. What this meant, in practice, is that while
Putnam's examples were extensively discussed, the discussion never
reached a high level of logical precision. For example, nobody
clearly explained how the word ``equivalence'' was being used.
These exciting, and yet imprecise, discussions continued with
reference to a second example that Putnam had given. In this second
example, Putnam asks how many things are on the following line:
\begin{quote} {\large \Ladiesroom \Gentsroom } \end{quote} There are
two schools of metaphysicians who give different answers to this
question. According to the mereological nihilists, there are exactly
two things on the line: a man and a woman. According to the
mereological universalists, there are three things on the line: a man,
a woman, and a couple. Without any warrant besides his own intuition,
Putnam claims that this debate amongst the metaphysicians is a
``purely verbal dispute'', and neither party is more correct than the
other.
Again, what's important for us here is that Putnam's claim amounts to
a proposal to liberalize the standards of theoretical equivalence. By
engaging in this dispute, metaphysicians have implicitly adopted a
rather conservative standard of equivalence --- where it matters
whether you think that a ``couple'' is really something more beyond
the people who constitute it. Putnam urges us to adopt a more liberal
criterion of theoretical equivalence, according to which it simply
doesn't matter whether we say that the couple ``really exists'', or
whether we don't.
\section*{From reduction to supervenience}
The logical positivists --- Schlick, Carnap, Neurath, etc. --- aspired
to uphold the highest standards of scientific rationality. Most of
them believed that commitment to scientific rationality demands a
commitment to physicalism, i.e.\ the thesis that physical science is
the final arbiter on claims of ontology. In short, they said that we
ought to believe that something exists only if physics licenses that
belief.
Of course, we don't much mind rejecting claims about angels, demons,
witches, and fairies. But what are we supposed to do with the sorts
of statements that people make in the ordinary course of life ---
about each other, and about themselves? For example, if I say,
``S{\o}ren is in pain,'' then I seem to be committed to the existence
of some object denoted by ``S{\o}ren'', that has some property ``being
in pain.'' How can physical science license such a claim, when it
doesn't speak of an object S{\o}ren or the property of being in pain?
The general thesis of physicalism, and the particular thesis that a
person is his body, were not 20th century novelties. However, it was
a 20th century novelty to attempt to explicate these theses using the
tools of symbolic logic. To successfully explicate this concept would
transform it from a vague ideological stance to a sharp scientific
hypothesis. (There is no suggestion here that the hypothesis would be
empirically verifiable --- merely that it would be clear enough to be
vulnerable to counterargument.)
For example, suppose that $r(x)$ denotes the property of being in
pain. Then it would be natural for the physicalist to propose either
(1) that statements using $r(x)$ are actually erroneous, or (2) that
there is some predicate $\phi (x)$ in the language of fundamental
physics such that $\forall x(r(x)\lra \phi (x))$. In other words, if
statements using $r(x)$ are legitimate, then $r(x)$ must actually pick
out some underlying physical property $\phi (x)$.
The physicalist will want to clarify what he means by saying that
$\forall x(r(x)\lra \phi (x))$, for even a Cartesian dualist could
grant that this sentence is contingently true. That is, a Cartesian
dualist might say that there is a purely physical description
$\phi (x)$ which happens, as a matter of contingent fact, to pick out
exactly those things that are in pain. The reductionist, in contrast,
wants to say more. He wants to say that there is a thicker
connection between pain experiences and happenings in the physical
world. At the very least, a reductionist would say that
\[ T \: \vdash \: r(x)\lra \phi (x) ,\] where $T$ is our most
fundamental theory of the physical world. That is, to the extent that
ordinary language ascriptions are correct, they can be translated into
true statements of fundamental physics.
This sort of linguistic reductionism seems to have been the favored
view among early 20th century analytic philosophers --- or, at least
among the more scientifically inclined of them. Certainly,
reductionism had vocal proponents, such as U.T. Place and Herbert
Feigl. Nonetheless, by the third quarter of the 20th century, this
view had fallen out of fashion. In fact, some of the leading lights
in analytic philosophy --- such as Putnam and Fodor --- had arguments
which were taken to demonstrate the utter implausibility of the
reductionist point of view. Nonetheless, what had not fallen out of
favor among analytic philosophers was the naturalist stance that had
found its precise explication in the reductionist thesis. Thus,
analytic philosophers found themselves on the hunt for a new, more
plausible way to express their naturalistic sentiments.
There was another movement afoot in analytic philosophy --- a movement
away from the formal mode, back toward the material mode, i.e.\ from a
syntactic point of view, to a semantic point of view. What this
movement entailed in practice was a shift from syntactic explications
of concepts to semantic explications of concepts. Thus, it is only
natural that having discarded the syntactic explication of mind-body
reduction, analytic philosophers would cast about for a semantic
explication of the idea. Only, in this case, the very word
``reduction'' had so many negative associations that a new word was
needed. To this end, analytic philosophers co-opted the word
``supervenience.'' Thus Donald Davidson:
\begin{quote} Mental characteristics are in some sense dependent, or
supervenient, on physical characteristics. Such supervenience might
be taken to mean that there cannot be two events alike in all
physical respects but differing in some mental respect, or that an
object cannot alter in some mental respect without altering in some
physical respect. \citep{davidson} \end{quote} Davidson's prose
definition of supervenience is so clear that it is begging for
formalization. Indeed, as we'll later see, when the notion of
supervenience is formalized, then it is none other than the model
theorist's notion of implicit definability.
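To anticipate that formalization (a sketch, using notation from the
reduction discussion above): let $T$ be our total theory, let $\Sigma$
be the physical vocabulary, and let $r$ be a mental predicate. Then
$r$ is \emph{implicitly defined} by $T$ relative to $\Sigma$ just in
case any two models of $T$ that agree on the interpretation of
$\Sigma$ also agree on the interpretation of $r$:
\[ M\models T ,\quad M'\models T ,\quad M|_{\Sigma}=M'|_{\Sigma}
\;\; \Rightarrow \;\; r^{M}=r^{M'} .\] In Davidson's idiom: there
cannot be two situations alike in all physical respects but differing
in some mental respect.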
It must have seemed to the 1970s philosophers that significant
progress had been made in moving from the thin syntactic concept of
reduction to the thick semantic concept of supervenience. Indeed, by
the 1980s, the concept of supervenience had begun to play a major role
in several parts of analytic philosophy. However, with the benefit of
hindsight, we ought to be suspicious if we are told that an
implausible philosophical position can be converted into a plausible
one merely by shifting from a syntactic to a semantic explication of
the relevant notions. In this case, there is a worry that the concept
of supervenience is nothing but a reformulation, in semantic terms, of
the notion of reducibility. As we will discuss in Section
\ref{go-beth}, if supervenience is cashed out as the notion of
implicit definability, then \emph{Beth's theorem} shows that
supervenience is equivalent to reducibility.
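In outline (a statement, not a proof), Beth's theorem says that for
first-order theories, implicit definability collapses into explicit
definability: if any two models of $T$ with the same $\Sigma$-reduct
agree on $r$, then there is a formula $\phi (x)$ of the
$\Sigma$-fragment of the language such that
\[ T \: \vdash \: r(x)\lra \phi (x) ,\] which is precisely the
reductionist's schema from earlier in this section.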
Why did philosophers decide that mind-brain reductionism was
implausible? We won't stop here to review the arguments, as
interesting as they are, since that has been done in many other places
\citep[see][]{bickle-sep}. We are interested rather in claims (see
e.g.\ \cite{bickle}) that the arguments against reduction are only
effective against syntactic accounts thereof --- and that semantics
permits a superior account of reduction that is immune to these
objections.
Throughout this book, we argue for a fundamental duality between
logical syntax and semantics. To the extent that this duality holds,
it is mistaken to think that semantic accounts of concepts are more
intrinsic, or that they allow us to transcend the human reliance on
representations, or that they provide a bridge to the ``world'' side
of the mind-world divide.
To the contrary, logical semantics is \dots wait for it \dots just
more mathematics. As such, while semantics can be used to represent
things in the world, including people and their practice of making
claims about the world, its means of representation are no different
than those of any other part of mathematics. Hence, every problem and
puzzle and confusion that arises in logical syntax --- most notably,
the problem of language-dependence --- will rear its ugly head again
in logical semantics. Thus, for example, if scientific antirealism
falls apart when examined under a syntactic microscope, then it will
also fall apart when examined under a semantic microscope. Similarly,
if mind-body reductionism isn't plausible when explicated
syntactically, then it's not going to help to explicate it
semantically.
What I am saying here should not be taken as a blanket criticism of
attempts to explicate concepts semantically. In fact, I'll be the
first to grant that model theory is not only a beautiful mathematical
theory, but is also particularly useful for philosophical thinking.
However, we should be suspicious of any claims that a philosophical
thesis (e.g.\ physicalism, antirealism, etc.) is untenable when
explicated syntactically, but becomes tenable when explicated
semantically. We should also be suspicious of any claims that
semantic methods are any less prone to creating pseudoproblems than
syntactic methods.
\section*{Realism and equivalence}
As we have seen, many of these debates in 20th century philosophy
ultimately turn on the question of how one theory is related to
another. For example, the debate about the mind-body relation can be
framed as a question about how our folk theory of mind is related to
the theories of the brain sciences, and ultimately to the theories of
physics.
If we step up a level of abstraction, then even the most general
divisions in 20th century philosophy have to do with views on the
relations of theories. Among the logical positivists, the predominant
view was a sort of antirealism, certainly about metaphysical claims,
but also about the theoretical claims of science. Not surprisingly,
the preferred view of theoretical equivalence among the logical
positivists was empirical equivalence: two theories are equivalent
just in case they make the same predictions. That notion of
equivalence is quite liberal in that it equates theories that
intuitively seem to be inequivalent.
If we leap forward to the end of the 20th century, then the outlook
had changed radically. Here we find analytic metaphysicians engaged
in debates about mereological nihilism versus universalism, or about
presentism versus eternalism. We also find philosophers of physics
engaged in debates about Bohmian mechanics versus Everettian
interpretations of quantum mechanics, or about substantivalism versus
relationalism about spacetime. The interesting point here is that
there obviously had been a radical change in the regnant standard of
theoretical equivalence in the philosophical community. Only seventy
years prior, these debates would have been considered pseudo-debates,
for they attempt to choose between theories that are empirically
equivalent. In short, the philosophical community as a whole had
shifted from a more liberal to a more conservative standard of
theoretical equivalence.
There have been, however, various defections from the consensus view
on theoretical equivalence. The most notable example here is the
Hilary Putnam of the 1970s. At this time, almost all of Putnam's
efforts were devoted to liberalizing standards of theoretical
equivalence. We can see this not only in his model-theoretic
argument, but also in the numerous examples he brings forward in
an attempt to prime our intuitions. Putnam put forward the example of
different formulations of Euclidean geometry, and also the famous
example of ``Carnap and the Polish logician'', which has since become
a key example of the quantifier variance debate.
One benefit of the formal methods developed in this book is a sort of
taxonomy of views in 20th century philosophy. The realist tendency is
characterized by the adoption of more conservative standards of
theoretical equivalence; and the antirealist tendency is characterized
by the adoption of more liberal standards of theoretical equivalence.
Accordingly, we shouldn't think of ``realism versus antirealism'' on
the model of American politics, with its binary division between
Republicans and Democrats. Indeed, philosophical opinions on the
realism-antirealism question lie on a continuum, corresponding to a
continuum of views on theoretical equivalence. (In fact, views on
theoretical equivalence really form a multi-dimensional continuum; I'm
merely using the one-dimensional language for intuition's sake.) Most
of us will find ourselves with a view of theoretical equivalence that
is toward the middle of the extremes, and many of the philosophical
questions we consider are questions about whether to move --- if ever
so slightly --- in one direction or the other.
In this book, we will develop three moderate views of theoretical
equivalence. The first two views say that theories are equivalent
just in case they are intertranslatable --- only they operate with
slightly different notions of ``translation''. The first, and more
conservative, view treats quantifier statements as an invariant, so
that a good translation must preserve them intact. (We also show that
this first notion of intertranslatability corresponds to ``having a
common definitional extension''. See Theorems \ref{cde-it} and
\ref{it-cde}.) The second, and more liberal, view allows greater
freedom in translating one language's quantifier statements into a
complex of the other language's quantifier statements. (We also show
that this second notion of intertranslatability corresponds to
``having a common Morita extension''. See Theorems \ref{redux} and
\ref{it-mor}.) The third view of equivalence we consider is the most
liberal, and is motivated not by linguistic considerations, but by
scientific practice. In particular, scientists seem to treat theories
as equivalent if they can ``do the same things with them''. We will
explicate this notion of what a scientific theory can do in terms of
its ``category of models''. We then suggest that two theories are
equivalent in this sense if their categories of models are equivalent
in the precise, category-theoretic sense.
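A toy illustration of the first, conservative notion (my example, not
one drawn from the later chapters): let $T_{1}$ be the theory of
partial orders, formulated with the single relation symbol $\leq$, and
let $T_{2}$ be the theory of strict partial orders, formulated with
$<$. Each theory explicitly defines the other's primitive,
\[ \forall x\forall y\, (x<y \: \lra \: (x\leq y\wedge x\neq y)) ,
\qquad \forall x\forall y\, (x\leq y \: \lra \: (x<y\vee x=y)) ,\]
and adding the relevant definition to either theory yields one and the
same theory in the signature $\{\leq ,<\}$. The two theories thus have
a common definitional extension, and so are intertranslatable in the
first sense.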
% Thus, when Carnap enunciates his principle of tolerance --- which
% roughly says that there are no objective standards for comparing
% frameworks --- he doesn't realize that he is thereby choosing one
% particular way of characterizing the space of theories. Similarly,
% when Quine says that he doesn't understand what ``$L$-truth in a
% framework'' means, he is implicitly adopting an incredibly liberal
% standard of equivalence between theories. In particular, if you don't
% recognize $L$-truth in a framework, then you also can't tell the
% difference between legitimate and illegitimate translations between
% frameworks. You'll end up saying that any framework is as good as any
% other.
% Carnap and Quine, for all of their deep insights, didn't clearly
% realize what game they were playing --- or didn't clearly admit that
% they were playing it. The game here is to take concepts hat we use
% all the time --- such as the concept of equivalent theories --- in
% making decisions or in judging others' decisions, and to sharpen them.
% Quine arrived at the view that he didn't need these troublesome
% concepts, probably because they have overtly normative connotations,
% and he had no place in his desert landscape for normative concepts.
% But in the end, nobody --- not Quine, nor you, nor I --- can do
% without such concepts, and they will find their way in the back door.
% To borrow one of Quine's own favorite words, these quasi-normative
% concepts are {\it indispensable} for life and action: it's impossible
% to be an {\it agent} unless you have concepts such as justification,
% assertability, sameness of meaning, equivalence, etc..
\section*{Summary and prospectus}
The following seven chapters try to accomplish two things at once: to
introduce some formal techniques, and to use these techniques to gain
philosophical insight. Most of the philosophical discussions are
interspersed between technical results, but there is one concluding
chapter that summarizes the major philosophical themes. We include
here a chart of some of the philosophical issues that arise in the
course of these chapters. The left column states a technical result,
the middle column states the related philosophical issue, and the
right column gives the location (section number) where the discussion
can be found. To be clear, I don't mean to say that the philosophers
mentioned in the chart explicitly endorse the argument from the
metalogical result to the philosophical conclusion. In some cases
they do; but in other cases, the philosopher seems rather to {\it
presuppose} the metalogical result, since it was thought to be
common knowledge.
\begin{figure}[H]
% \rotatebox{90}{%
% \begin{minipage}{\textheight}%
% your text
\begin{tabular}{l | l | l}
logic & philosophy & location \\
\hline \hline
translate into empty theory & analytic-synthetic distinction (Quine)
& \textbf{\ref{qgood}} \\