Pages

Saturday, December 26, 2009

The History of Categorical Logic

Through the paralyzing paradoxes of Professor Polaro I have learned that The History of Categorical Logic by Jean-Pierre Marquis and Gonzalo E. Reyes is available online for free. So far I've only taken a quick look, but it seems like a good read. Here.

Monday, December 21, 2009

Nominalist Neologicism

My visit to Bristol is over, I'm back home (i.e. in Poland) for a few weeks. I used some of the spare time to finish drafting the first paper that I worked on with Oystein Linnebo last semester (stay tuned for another one).

The paper is titled Nominalist Neologicism and contains the core of the modal-iterative approach to abstraction principles that I've been advocating over the last year or so. It's based on a talk I gave in Frankfurt (Trends in the Philosophy of Mathematics), so the style is a bit relaxed.

If you plan to take a look at it, please let me know in a comment below this post, so that I know I should expect feedback from you. The deadline is Jan 15, so if you have any comments, please share them with me by Jan 10.

Monday, December 7, 2009

deRosset's stuff on modalities

I see Louis deRosset has posted a few nice pieces related to modalities on his website. These include "Possible worlds I: Modal Realism", "Possible Worlds II: Nonreductive Theories of Possible Worlds" and his dissertation "Modal Primitivism: A study in the metaphysics of Necessity and Possibility". Here.

Monday, November 30, 2009

Philosophical Tales

Recently my favorite method of procrastination involved reading Philosophical Tales: Being an Alternative History Revealing the Characters, the Plots, and the Hidden Scenes That Make Up the True Story of Philosophy. Even if it's not always charitable and not always historically adequate, it's quite amusing. Below are some fragments that I find entertaining (the selection is slightly random, though):
Thomas Aquinas was very overweight, suffered from dropsy, and had one large eye and one small eye which made him look lopsided. As a child he was silent most of the time and, when he did speak, it was often unrelated to the conversation. So, he decided to become a philosopher-monk. And, as such, he was very successful.

After he [Aquinas] drove away the temptress, two angels came to him and fastened a chastity belt around his waist.” Or so at least embellishes our other theological expert at Trinity Communications on the Internet, along with advice to readers to “Buy or fashion your own chastity belt, easy to make from braided yarn or thin, soft rope.” (Adding that “St. Joseph chastity belts are available at some Catholic shops,” which Aquinas would not have approved of, being against shops and trading generally.) But at least there is agreement on Aquinas’s good character, albeit it still remains a challenge for people who think that sex is that bad to work out how to continue existing once the present batch has died out....
In the afternoon, Kant would take a long walk along the river, accompanied by his servant, Lampe, carrying an umbrella in case it rained. Kant’s rule that everyone must be treated as an end in themselves and never merely as a ‘means’ to an end (“there can be nothing more dreadful than that the actions of a man should be subject to the will of another”) evidently did not apply to servants carrying umbrellas
Even in bed, the rules had to be followed: Kant had a system for rolling himself up in his sheets so that they fitted tightly around him. Kant, it will be noted, slept for less than seven hours. He wrote a little booklet about health matters, warning against the dangers of too much sleep. He explained that as each person had only a certain amount of sleep in them, if they used it all up by lying in bed, they WOULD DIE EARLY. (My parents should have told me that ...)
In the Critique of Practical Reason (1786) Kant’s thought leaves the physical universe behind to find a proof for the existence of heaven and the afterlife. He points out that since justice is the good flourishing and the wicked being punished, and that this does not happen on Earth, as we can see by looking around us, then it must take place “in the next world.” This is sublime reasoning. And so to the less than fully appreciated Kantian treatise on the beautiful and the sublime. Night is sublime, day is beautiful. The sea is sublime, the land is beautiful, men are sublime, women are beautiful – and so on. Lots of professors wrote treatises like that at the time, it was almost compulsory.
...the best known is what he calls the categorical imperative:

Act only according to a maxim by which you can at the same time will that it shall become a general law.

...when Kant’s version appears in the Metaphysic of Morals (1785), the imperative is also offered to decide all moral issues. Curiously, though, it seems to collapse at the most easy tests. For example, it allows things that surely should be banned, while outlawing things that don’t seem to matter very much. A rule, for instance, that all children under 5 who disturb philosophers should be beaten with a stick and have their tongues cut out is approved by the ‘rule’ since it is universalizable, but borrowing is forbidden, as if everyone borrowed, it would lead to a run on the bank.
Pompous Footnote

1 Prior to Kant, as Bertrand Russell also notes, philosophers were gentlemen, addressing an audience of amateurs in the language of the everyday. After Kant, philosophy became a dialogue (indeed, often a monologue), conducted in technical language and obscure terms.
Montaigne constantly referred to himself as a way of both ridiculing and excusing his views. Des Cartes uses the same device to distance himself from anticipated criticism, and also to create the dramatic story of the author’s ‘enlightenment’ after some six days reflecting on the nature of the world in a warm oven room.
The famous words cogito ergo sum (which render themselves so elegantly in English as “I think, therefore I am”) never appear in the original version of the Meditations, only in a later and indeed rather casual translation. The actual words used are better translated as: “let the Demon deceive me as much as he may, he will never bring it about that I am nothing so long as I think I am something. So, after considering everything very thoroughly, I must conclude that this proposition, I am, I exist, is necessarily true, every time that I say it, or conceive it in my mind.”(1)

Pompous Footnote
1 Ego sum, ego existo, quoties a me profertur, vel mente concipitur, necessario esse verum is the original Latin text of 1641, for purists. The French version of the principle in the Discourses is superficially nearer to “I think, therefore I am,” being “Je pense, donc je suis,” but an accurate translation of this is not “I think, therefore I am,” but “I am thinking, therefore I exist.” Anyway, the ‘cogito’ does not refer to this text but to the argument in the Meditations. So that’s clear.
Bertrand Arthur William Russell, Third Earl of somewhere or other, son of a Victorian prime minister, and Professor of Philosophy at Trinity College, Cambridge, is still considered there, if not much anywhere else, as “profoundly influential in the development of philosophy in the twentieth century.” His special expertise is said to have been in the area of philosophical logic; indeed, he is credited with having coined the term, although as the words have long currency individually, and the activity preceded him by 2,000 years, it is hard to see how his arrangement can count as a novelty. Nevertheless, says Nicholas Griffin, writing in the Routledge Encyclopedia of Philosophy, he was indisputably responsible for a number of “important logical innovations,” prime amongst which was a way to “reparse sentences containing the phrase ‘so-and-so’ into a form in which the phrase did not appear.” Such achievements deserve further examination.
As a philosopher, Russell sometimes speaks absolute nonsense. Russell seems to have been aware of this, hence his “impish grin” whilst offering increasingly ludicrous examples. Not so his heirs. They issue their dull arrangements with a seriousness born of a serene lack of self-knowledge. Fortunately aside from his logic, Russell did other things. The same is not true of his followers.
Whilst there at Cambridge, Wittgenstein became an institution within an institution, celebrated both for his unorthodox personal style and for his revolutionary approach to teaching. Refusing to lecture but offering only to hold seminars, his ascetic office had few books, equipped instead with the famous deckchair. Those who attended his seminars became his ‘disciples’, and showed their commitment by dressing the same way – tweed jackets, flannel trousers, no ties. (The clothes, like the philosophy, were not for girls . . .) After each session, he would invite selected confidants to join him at ‘the flicks’, where he would sit in the middle of the front row (nearest the screen) munching on a pork pie. As for Cambridge’s official social gatherings, Wittgenstein declined to attend the ‘dinners’ of the university, although he did agree to participate in the ‘Moral Science Club’ from time to time, including one infamous evening when, to murmurs of approval from his disciples, he demanded of Karl Popper that he provide an example of a ‘moral rule’, gesticulating with a poker for emphasis. Popper supposedly said, “Not threatening visiting speakers with pokers,” and Wittgenstein threw the poker down and stormed out (followed by disciples).
... today the official hagiography neglects some facts. Wittgenstein did give away ‘control’ of his inherited millions, but only to his sisters, and so it was that during World War II, even as the Nazi project was at its most clear and most appalling, he was still able to arrange that a large chunk of the Wittgenstein family fortune – not, say, three ingots of gold (as we all might send) but three tons of the stuff – was made available to the Nazi war effort. In return, the family received official ‘non-Jewish’ status.
Yet, despite favoring one absolute authority, Hobbes dismantled the claims of kings to divine favor, and for doing this (amongst other reasons) he was considered by many of his contemporaries to be, if not actually an atheist, certainly a dangerous heretic. After the Great Plague of 1666, when 60,000 Londoners died, followed by the Great Fire straight afterwards, a parliamentary committee was set up to investigate whether his writings might have brought the two disasters on the realm. As a result of its findings, he was forbidden to write any more books about matters relating to “human conduct” and so had to publish his work abroad instead.
A report in the Journal des Savants of March 4, 1686, records that one young lady had refused “a perfectly eligible suitor” because “he had been unable, within a given time, to produce any new idea about squaring the circle.”
No one knows now exactly what he was accused of, but one of his early biographers, Colerus, describes how Spinoza, relaxed by smoking a pipe, or when he wanted to “rest his mind” rather longer, looked for some spiders which had gotten into a fight with one another, or (failing that) he put flies into a spider’s web, “and then watched the battle with so much enjoyment that he sometimes burst out laughing.” Such diversions there were before there was telly.

Saturday, November 28, 2009

Miscellanea Logica: new series

Through Philos-L I've learned that the new series of Miscellanea Logica is out - for a while I thought the enterprise was pretty much dead, but the new series is up and running, chief-edited by Jaroslav Peregrin, a cool logician, known also to some as the - brilliant - logician - who - made - us - make - unintended - loops - on - an - otherwise - marvelous - hike - in - Czech - mountains - in 2007 - nevertheless ("hey, haven't we passed by that tree-cutting machine some time ago?" - "No, there's plenty of those around here" - "all of them with a blue blouse inside and this sticker?") ... anyway, I see that the new series is available online for free! HERE! Check it out. :)

Thursday, November 19, 2009

Managing pdfs with Mendeley

Recently, I've been playing around with Mendeley, a pdf file manager. It does have some very neat features: it allows one to catalogue one's pdf files in one database, it keeps a copy of the catalogue (and files, up to 500 MB) online, it includes an internal pdf browser and provides nice search & filtering options, and it allows one to share groups of pdfs with up to 10 other people; it is also possible to comment on pdfs and share those comments as well. Another nice feature is that the package includes a file manager which (if you want) can organize copies of imported files, so that you end up with one organized folder with all your pdfs, with the positions in your catalogue linked to them. You can also decide to generate a bibtex file (or bibtex files) as you go, if you like to use LaTeX. When you add files, you can automatically do a Google Scholar search for file details; it works pretty well. Overall, I think the authors have done a pretty good job. I decided to import all my pdfs into the database. The whole thing, however, is in its beta stage and there are some minor issues - here are some remarks:

  • There is no explicit "work offline" option, and if you change the library and restart the program, it connects automatically to the server and uploads the changes. Sometimes I'm connected to the internet only by a wireless stick, and I don't want my library to synchronize with the server automatically while I'm adding pdfs and editing pdf info.
  • The program crashes once in a while (in my case, it's more like 3-5 times a day). Luckily, no data loss occurs.
  • The bibtex file that the program generates is kinda weird; you have to take a look at it and correct it by hand. File links in the bibtex don't work when you open the file with JabRef. In the original database it's difficult to mark which capitalizations are to be preserved in bibtex. If you imported your files from a bibtex database, the entries will most likely get new keys in the new database. Still, it's easier to correct the entry and copy it to your "real" bibtex file than to create a new one, so I rather enjoy the bibtex-related features.
  • The internal pdf browser isn't too elaborate - I wouldn't mind having at least "go to page", "go to next page", "go to previous page" buttons. Also, it would be nice if the browser supported bookmarks.
  • The file organizer options, even though they include organizing files into folders by journal, author or year, don't allow you to organize files by collection.

I do think, however, that overall this is pretty cool stuff, and I'm sure that, given some time, it will become even better.

Saturday, November 7, 2009

What we don't need to save ordinary conditionals from

It’s been a while since I posted anything. Mostly this is because life has been pretty hectic lately. In September we spent a few weeks in Gdańsk, but now we’re moving every few weeks between various places in the UK, visiting different universities and trying to get some research done meanwhile. This semester I’m mostly based in Bristol as a British Academy Visiting Fellow, working with Oystein Linnebo on the dynamic approach to abstraction principles, and doing some directed reading (on groundedness with Hannes Leitgeb and on axiomatic theories of truth with Leon Horsten). These days, however, we’re hanging out in Scotland, currently visiting Arche Research Centre in St. Andrews, taking off for a few other places tomorrow.

I gave a talk here on Tuesday about nominalistic approaches to neologicism, and decided to stick around for the Arche/CSMN graduate conference. Today, I managed to catch a talk by Ernest Lepore followed by an interesting talk about counterfactuals by Daniel Berntson (with a commentary by Guðmundur Andri Hjálmarsson). Daniel’s talk was titled Saving Ordinary Counterfactuals and was devoted to the problems that quantum indeterminacy (or related phenomena) are supposed to raise for our intuitions about ordinary counterfactuals. The whole thing was very clear and quite interesting. I do have one minor worry, though - I don’t think the problem that Daniel is trying to address exists... Counterfactually: if there were such a problem, Daniel’s approach would be a neat way to approach it. But let’s start from the beginning...

The problem

Intuitively we accept the counterfactual:

(1) If I were to throw a champagne glass off the top of Empire State Building, it would break.

Supposedly, quantum mechanics also informs us that:

(2) There is some chance that a glass thrown off the top of the Empire State Building will quantum tunnel to the moon without breaking.

If indeterminism is true and (2) expresses objective probability, (2) seems to entail (3):

(3) If I were to throw a champagne glass off the top of the Empire State Building, it might safely quantum tunnel to the moon.

This entails:

(4) If I were to throw a champagne glass off the Empire State Building, it might not break.

Now, Daniel suggests that "(4) puts pressure on us to give up (1)" and that there is an "inescapable clash" in the infelicitous assertion (5):

(5) If I were to throw a champagne glass off the Empire State building, it would break; and furthermore, it might not break.

The strategy

In order to save the truth of (1) within the Lewis-Stalnaker approach, Daniel suggests replacing:

A>B iff all of the closest A-worlds are B-worlds.

with

A>B iff the vast majority of the closest A-worlds are B-worlds.

The underlying idea now would be that (1) is made true because most of the closest worlds where the glass is dropped are worlds where it is broken, whereas (4) emphasizes that not all closest possible worlds are worlds where the glass is broken.

Say we put aside the issue of how we are to count possible worlds and their ratios if there are infinitely many of them. There still are some problems that come along with this solution. Most prominently, agglomeration (A>B, A>C, hence A>B&C) and transitivity (A>B, A&B>C, hence A>C) fail. To fix these issues, Daniel introduces the notion of being almost true, and says that certain claims, even though they aren’t strictly speaking true on this semantics, are still almost true - like when we have a counterfactual which doesn’t preserve the probability ratio, but whose consequent is only slightly less probable than the antecedent. There are some bells and whistles to play around with here, but this should be enough for the set-up.
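
For concreteness, here is a toy sketch of the proposal (my own illustration, not Daniel’s; the 90% threshold is a made-up stand-in for "the vast majority"), including a small model in which agglomeration fails:

    # Toy evaluation of would-counterfactuals over a finite set of
    # closest A-worlds, represented as dicts of proposition letters.
    THRESHOLD = 0.9  # hypothetical reading of "the vast majority"

    def would(closest_a_worlds, consequent):
        # A > B: the vast majority of the closest A-worlds are B-worlds
        holds = sum(1 for w in closest_a_worlds if consequent(w))
        return holds / len(closest_a_worlds) >= THRESHOLD

    # 10 closest throw-worlds: the glass breaks in 9, tunnels safely in 1.
    glass_worlds = [{"breaks": True}] * 9 + [{"breaks": False}]
    print(would(glass_worlds, lambda w: w["breaks"]))   # True: (1) comes out true

    # Agglomeration failure: A>B and A>C can hold while A>B&C fails.
    worlds = [{"b": i != 0, "c": i != 1} for i in range(10)]
    print(would(worlds, lambda w: w["b"]))              # True  (9/10)
    print(would(worlds, lambda w: w["c"]))              # True  (9/10)
    print(would(worlds, lambda w: w["b"] and w["c"]))   # False (8/10)

On this picture (1) comes out true even though one of the closest worlds is a tunneling world, which is exactly the advertised effect.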

The worry

First, observe that a necessary (but not obviously sufficient) condition for thinking that (5) is a problem is the acceptance of (1) and (4). Strictly speaking, so far Daniel has shown how to preserve the truth of (1), but he hasn’t said explicitly how to make sense of (4).

In fact, Daniel introduces might-conditionals by saying:

A >m> B iff ~(A>~B)

That is, a might conditional A >m> B is supposed to come out true iff it is not the case that ~B is true in the vast majority of the closest A-worlds.

Alas, this reading of might-conditionals doesn’t support the truth of (4), because given that all the relevant worlds where the glass is broken are worlds where it is not the case that it tunnels safely to the moon, (4) still comes out false if it is to be read as a might-conditional.
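
To see the point numerically, here is the same toy model extended with might-conditionals (again my own sketch, with a hypothetical 90% threshold standing in for "the vast majority"):

    THRESHOLD = 0.9

    def would(worlds, consequent):
        # A > B: the vast majority of the closest A-worlds are B-worlds
        return sum(1 for w in worlds if consequent(w)) / len(worlds) >= THRESHOLD

    def might(worlds, consequent):
        # A >m> B iff ~(A > ~B)
        return not would(worlds, lambda w: not consequent(w))

    glass_worlds = [{"breaks": True}] * 9 + [{"breaks": False}]
    print(would(glass_worlds, lambda w: w["breaks"]))       # True: (1) holds
    print(might(glass_worlds, lambda w: not w["breaks"]))   # False: (4) fails

Indeed, unpacking the definition, A >m> ~B is just ~(A>B), so on this reading (4) is simply the negation of (1) and comes out false whenever (1) is true, whatever the tunneling worlds look like.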

Second, it’s not clear where the clash really is. I would certainly be worried if I had intuitive reasons to believe:

(5’) If I were to throw a champagne glass off the Empire State building, it would break; and furthermore, it wouldn’t break.

But so far, I don’t. (5) certainly doesn’t entail (5’). Given that (4) shouldn’t be construed as a might-counterfactual if its truth is to be preserved, what job exactly is "might" doing in (4) and (5)?

Well, I’m inclined to say that there are at least two ways to accept (1), (4) and (5) even if we play along with non-probabilistic Lewis-Stalnaker semantics.

Story 1. (1) says that in every closest possible world where I drop the glass, it’s broken, whereas (4) says that there are still some accessible but less similar worlds where, even though the glass is dropped, it’s not broken.

Story 2. (1) says that in every closest possible world where I drop the glass, it’s broken, and (4) says that it is possible that the glass is dropped and not broken.

Personally, I prefer story 2, because it assigns less content to "might", and because (5), read along the lines suggested in story 2, entails that any world where the glass is dropped but not broken is not among the closest ones anyway.

I don’t even have the intuition that (5) displays any sort of clash to start with. Imagine I say:

If I were to join you for conference drinks tonight, I would be hungover the next day. Well, in fact, it’s highly unlikely that I would decide to drink only water, in which case I might feel good the next day even if I go. So if I were to join you for conference drinks tonight, I might feel good the next day, but I think I wouldn’t.

This doesn't seem to contain any contradiction whatsoever.

Interestingly, we did a poll and around half of the audience thought that (5) was problematic, and around half that it wasn’t.

Saturday, September 19, 2009

The "Modal argument" paper is forthcoming

The paper I wrote with Agnieszka Rostalska (I very roughly outlined an early version here) is forthcoming in Philo. A rather final draft of the paper is available online here. The paper is devoted to the clarification and criticism of Swinburne's modal argument for the existence of the soul. Before I paste the abstract and acknowledgments below, one more remark.

When I gave this paper at various places, one sort of reaction came from people with a good background in logic, but no previous experience with philosophy whatsoever. It boils down to a rather blank stare and comments like "Who cares about arguments for the existence of the soul?" or "Why is anyone doing this stuff?". The answers are simple: "Philosophers" to the first question, and "Because it's more interesting than using complex mathematical tools to solve problems that only two or three people in the world care about" to the second.

I prefer to use slightly less elaborate mathematical machinery to deal with philosophically motivated issues than to get into very complex and hermetic issues in, say, inaccessible set theory or computer science. This doesn't mean they aren't interesting. I just find philosophical problems more entertaining and important. And I think it is, in a sense, the responsibility of a philosopher and a logician to spend some time looking at what philosophical arguments are around about claims people care about and what can be said about their correctness, instead of locking themselves in the ivory tower of elaborate and detached purely mathematical problems. But again, it's a matter of choice.

Abstract

Richard Swinburne (Swinburne and Shoemaker 1984; Swinburne 1986) argues that human beings currently alive have non-bodily immaterial parts called souls. In his main argument in support of this conclusion (modal argument), roughly speaking, from the assumption that it is logically possible that a human being survives the destruction of their body and a few additional premises, he infers the actual existence of souls. After a brief presentation of the argument we describe the main known objection to it, called the substitution objection (SO for short), which is raised by Alston and Smythe (1994), Zimmerman (1991) and Stump and Kretzmann (1996). We then explain Swinburne's response to it (1996). This constitutes a background for the discussion that follows. First, we formalize Swinburne's argument in a quantified propositional modal language so that it is logically valid and contains no tacit assumptions, clearing up some notational issues as we go. Having done that, we explain why we find Swinburne's response unsatisfactory. Next, we indicate that even though SO is quite compelling (albeit for a slightly different reason than the one given previously in the literature), a weakening of one of the premises yields a valid argument for the same conclusion and yet immune to SO. Even this version of the argument, we argue, is epistemically circular.

Acknowledgments

We would like to express our gratitude to all the people who discussed these issues with us and commented on earlier versions of this paper. We are grateful to participants of the events where the paper has been presented: Workshop & Young Researcher's Day in Logic, Philosophy and History of Science in Brussels, 2008, Jeffrey Ketland's Omega-seminar in Edinburgh, 2008, and Formal Methods in the Epistemology of Religion in Leuven, 2009. The main ideas of this paper originated after a number of discussions about philosophy of religion and mind with Professor Jack MacIntosh (Calgary). Comments provided by Professor Richard Swinburne (Oxford), who was in the audience when this paper was presented in Leuven in June 2009, were also very helpful, and it was interesting to learn that Professor Swinburne agrees with all our main points, apart from our final assessment of the modified argument. It was Lara Buchak (Berkeley) who observed that our version of the argument developed in response to SO results from a weakening of one of the premises. We also owe gratitude to Paul Draper for his invaluable editorial comments.

Saturday, September 12, 2009

Trends in Philosophy of Mathematics (day 3, talk 1)

On the third day the schedule was a bit more complicated: we had to choose between two parallel sessions. The choice was difficult, which means I won’t be able to comment on some really interesting talks. If I don’t discuss a certain talk, it’s because I attended the parallel one, which was more closely related to what I’m working on. For now, the first talk of the day.

Assadian: Crispin Wright and his Hero

Wright, defending the epistemic accessibility of the prima facie impredicative Hume’s Principle, tells a story of a fictional character (named Hero) who initially knows second-order logic and possesses a bunch of sortal concepts referring to concrete objects, but doesn’t understand the concept of number. Wright then argues that the Hero can proceed in stages in order to gain an understanding of the concept of natural number.
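
For reference, Hume’s Principle (HP) is the abstraction principle

Nx:Fx = Nx:Gx iff F ≈ G,

where F ≈ G (the Fs and the Gs stand in a one-to-one correspondence) is definable in pure second-order logic. HP is prima facie impredicative because the quantifiers on its right-hand side range over a domain that may include the very numbers the left-hand side introduces.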

Stage 1 - The Hero introduces HP for the initial domain that he possesses a grasp of.

Stage 2 - The Hero now understands without circularity the truth conditions of Nx:Fx = Nx:Gx, where neither F nor G contains further occurrences of numerical terms. The Hero also knows that Nx:Fx = s is false for any term s referring to an object present at Stage 1. In this sense, he seems able to solve the Caesar Problem (CP), for he comes to accept:

(NE) no object whose identity is grounded in anything else than HP can be identical to a number.

Stage 3 - The Hero moves on to understanding truth conditions of identity statements of terms containing embedded occurrences of numerical operators.

Assadian takes issue with Wright’s account of how the Hero learns NE, which is of key importance in solving CP. NE seems to hinge on the possession of a complete characterization of natural numbers (= identity conditions for them), and this is not something the Hero can have at stage 2. For it seems that to understand the claim that the characterization of numbers doesn’t go beyond the identity conditions dictated by HP, one has to be able to grasp those identity conditions already. On the other hand, understanding complex terms containing embedded numerical operators seems to require that NE be already known, if CP is to be solved. Moreover, if NE is not available at stage 2, the Hero is unable to solve CP at that stage.

A slightly different neologicist approach to the problem is to assume that different sorts of objects in general have different criteria of identity - once one postulates the existence of maximal categories of that sort and adds some fairly convincing assumptions about them, two theorems can be proven. The first says that any two objects falling under a category C are identical iff they satisfy the identity conditions corresponding to that category. The other says that no object belongs to two different categories. The latter is needed if we are to exclude the possibility that numbers and persons together constitute a single larger category with its own specific identity conditions.

Assadian argues that even though CP is solved if those theorems are present, it’s quite implausible that they are available to the Hero at stage 2, because theorem 2 already says something about the whole category of all numbers and their identity conditions. On the other hand, if the theorems are to be introduced only at stage 3, it is unclear why the Hero would be able to solve CP at stage 2 to start with.

Although I generally agree that non-iterative approaches to abstraction principles have so far been unable to solve CP (among the iterative ones there’s Linnebo’s and mine, and I think mine can handle CP, while it is quite unclear whether Linnebo’s does - but this is a whole different story), I really would have to see the proofs in detail - what would have to be checked is whether (i) the assumptions used to prove theorems 1 and 2 are convincing, (ii) theorems 1 and 2 follow from those assumptions, (iii) theorems 1 and 2 really allow one to solve CP (i.e. to prove negations of mixed identity statements, or something to that effect), and (iv) no undesired consequences follow from the same assumptions. But the stuff seems interesting.

Qualms about the (non-)circularity of NE aside, what I’m rather worried about is the justification of the framework in which NE even makes sense. I mean, I have a pretty hard time understanding the idea of objects such that there is nothing else to learn about them apart from their rather coarse-grained identity conditions. I am perfectly fine with coarse-grained or relative identity claims, or true fake identity between fake singular terms, but the idea that there really are objects such that the only way we can learn anything about them is through abstraction principles seems suspicious. Of course, it is an attempt to deal with epistemic challenges to Platonism about mathematics, but I don’t think a blunt answer of the sort "How can we know something about numbers? Well, we learn something about them through abstraction principles and there’s nothing else to learn" is satisfactory. I would need a more elaborate and convincing metaphysical story which would convince me to accept the existence of such things, and which would explain why those objects should enjoy this particular status.

Saturday, September 5, 2009

The adaptive logics book has moved

Due to server issues, the book I mentioned before has moved. Here. I also fixed the original link.

Thursday, September 3, 2009

Live from Trends in Logic VII (day 2)

Today we had four quite exciting talks. The first one, given by Oystein Linnebo (Bristol), was devoted to A Partial Defense of Frege's Basic Law V. Oystein started off with the intuition that there is some pressure to accept Frege's BLV (which says that the extensions of two concepts are identical iff exactly the same objects fall under those concepts). After criticizing the limitation-of-size approach to restricted versions of the comprehension principle, he went modal-and-iterative about BLV. That is, BLV was used to capture how new sets are formed at new stages using the objects already existing at previous stages, and modal operators were thrown in to express the intuition that we're talking about the possible ways our set-formation process can go. This gives a fairly intuitive criterion for a plurality's determining a set: it has to have the same elements across possible worlds. Proof-theoretically, once you take S4.2 as the underlying modal logic, throw in some trans-world extensionality principles for pluralities and sets, and introduce the potential plural collapse ("it is necessary that for any xx it is possible that there is a y such that y is the set of the xx's"), you can get (a reinterpretation of) Zermelo set theory minus infinity and foundation.
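
In symbols (my own rendering, not necessarily Oystein's exact formulation), BLV is:

ext(F) = ext(G) iff ∀x (Fx iff Gx),

and the potential plural collapse is:

□∀xx ◇∃y ∀z (z ∈ y iff z ≺ xx),

where z ≺ xx reads "z is one of the xx's".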

The second talk, by Leon Horsten, was devoted to the relation between numbers and counting systems. Leon defended and described the view dubbed computational structuralism: it's kinda like structuralism, but you take arithmetic to be about the structure of arithmetical notational systems. The basic idea is that if one has a recursively introduced notational system (so that the operation denoted by the successor symbol is computable), and the addition function is also computable, then the system is isomorphic to the intended omega-sequence.

Michael Resnik and Stewart Shapiro both talked about qualms that arise around identity conditions for structures and their elements (or positions in them). Resnik, roughly, argued that in certain contexts identity claims about (and questions about the identity of) certain structures don't make sense, whereas Shapiro was rather inclined to say that it's not so much the identity questions that are misguided, but rather that certain terms may seem and behave like singular terms despite referring indeterminately to many objects.

Wednesday, September 2, 2009

Live from Trends in Logic VII

It's the first day of Trends in Logic VII, aka Trends in the Philosophy of Mathematics. So far, we're past an opening, an opening lecture by Ryszard Wójcicki, and a splendid conference dinner.

Ryszard Wójcicki, an excellent "hardcore" logician known for his work on consequence operations and Polish-style meta-theory of propositional calculi, has recently decided to think about more philosophical issues. He was talking about Two sources of mathematical truth. The main gist was that the key "source" of mathematical truth was "conceptual realities" (the other source being empirical domains). Alas, I didn't quite get what being a source of truth is, how conceptual realities are supposed to be different from mathematical structures, what their ontological status is, and why they're supposed to exist. Having said that, it was interesting to hear a real "hardcore" researcher say what he thinks about the philosophical status of his own field.

My general impression is that if a "hardcore" scientist of any specific sort suddenly starts to philosophize, it's bound to be slightly weird stuff from the philosopher's perspective (it's not as bad as a philosopher trying to do science, though). What slightly surprised me was that this also holds for logicians. On the other hand, I do think that one of the problems that analytic philosophy in Poland is facing is that there are many excellent logicians doing highly technical stuff but having no philosophical interests or well developed intuitions, and there are many philosophers with highly developed intuitions, but with almost no grasp of logic or attention to arguments and details whatsoever.

Wednesday, August 26, 2009

A rant about "deductive"

or
Don't diss the logician

I’m on my way back from The Second Conference on Concept Types and Frames in Language, Cognition and Science in Dusseldorf. It was a nice conference that gathered linguists, cognitivists, philosophers of science and logicians interested in the functional approach to concepts.

One of the things that surprised me was that both experienced cognitivists (like Paul Thagard) and younger researchers still stick to the distinction between inductive and deductive types of reasoning and attach so much importance to it. Interestingly, “deductive” in their use has pejorative content, and the term is sometimes used condescendingly to emphasize that whatever it is that logicians do is boring and useless and that pretty much the only source of insight and real knowledge is “inductive inferences” taking place in “the real brain”. So, here’s a short rant about this sort of attitude (Frederik is reading over my shoulder and tossing in his remarks).

To start with, I don’t think I know a logician alive who still uses the word “deductive” in any serious ahistorical context. This is because the notion is so worn out that different people associate it with many different things. Instead, more specific terms are used that separately capture different things that you might mean when you say “deductive”.

Roughly, a consequence operation is, for instance, often simply thought of as a set of pairs of sets of sentences. It is called structural if it’s closed under substitution. That’s one thing that you might have in mind: deductive means defined in terms of rules (and maybe axioms) which essentially make no distinction between formulas of the same syntactic form. Another way you can think about these things is to require that a deductive consequence should be simply truth-preserving (vaguely: it’s impossible that the consequence is false when the premises are true). This interpretation is not syntactic, but rather model-theoretic. A truth-preserving consequence doesn’t have to be structural and a structural consequence doesn’t have to be truth-preserving. Another sense you might associate with being deductive is being both structural and truth-preserving (in which case, you still get a multitude of consequence operations, depending on what language and model theory you pick, and what you take to belong to your logical vocabulary). Yet another interpretation you can take is to say that something is a deductive consequence of a given set of premises if it follows from them by classical logic – this notion is sometimes used by those cognitivists who think that logic is classical logic. Although this consequence is structural, whether it’s truth-preserving when it comes to natural language is a matter of what you think about the correctness of certain natural language inferences. For instance, you might be a relevantist – in which case you’re inclined to say that classical logic allows you to infer too much. Yet another notion simply requires a deductive consequence to satisfy Tarski’s conditions, or some of them, or some of them and some other conditions of a similar type. Yet another idea is to make no reference to a formal system whatsoever and assume that a sentence A is a deductive consequence of a sentence B iff “If B, then A” is analytic (standard qualms about analyticity aside). So in general, the logician’s conceptual framework is full of notions more precise than “deductive”, and the word “deductive” seems unclear and a tad outdated.
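
For reference, two of these notions can be spelled out as follows, for a consequence operation Cn mapping sets of sentences to sets of sentences. Tarski’s conditions are:

X ⊆ Cn(X) (reflexivity)
if X ⊆ Y, then Cn(X) ⊆ Cn(Y) (monotonicity)
Cn(Cn(X)) = Cn(X) (idempotence)

and structurality is the requirement that σ[Cn(X)] ⊆ Cn(σ[X]) for every uniform substitution σ of formulas for atoms.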

But let us even suppose we fix on the notion of being deductive as being validated by classical logic (this seems to be the best you can do if you want to make it easy for the cognitivists to argue that deductive inferences are uninformative). Why on earth would you think that deductive reasoning can only give you boring and useless consequences that you were already aware of, unless you say so because what you take to be the most prominent example of a deduction is one of the slightly obvious syllogisms, most likely employing Socrates and his mortality?

The thing is, human beings are not logically omniscient (I myself, for instance, often feel dumb when I stare at a deductive proof I can’t grasp after half an hour). In fact, the history of mathematics is a good source of examples where prima facie well-understood premise sets led to surprising consequences. Just because the truth of a conclusion is guaranteed by the truth of the premises doesn’t mean that once we believe the premises we are actually aware that they lead to this conclusion. Take Russell’s paradox. A rather bright dude named Frege spent years without noticing a fairly simple reasoning whose conclusion was to him somewhat surprising. Take Godel’s incompleteness theorem(s). A rather well-known set of mathematical truths together with a bit of slightly complicated deductive reasoning led to one of the most important discoveries in 20th-century logic, which stunned a bunch of other not-too-dumb mathematicians. If you still think that deductive inferences give nothing but boring and obvious conclusions, think again!

Two points about the opposition between the deductive and the inductive. First of all, unless you define inductive as non-deductive, the distinction is not exhaustive. For instance, if inductive inferences are supposed to be those that lead to a general conclusion, we’re missing non-deductive inferences with particular conclusions (in history, for example, one uses certain general assumptions and knowledge about present facts to surmise something particular about the past). In this sense, the deductive-reductive distinction introduced by the Lvov-Warsaw school sounds a bit neater (look it up).

Another thing is that people often speak of inductive inferences as if they didn’t have anything to do with deduction (the following point was made by Frederik). Quite to the contrary, certain facts about what is deducible and what isn’t always lie in the background when you’re assessing the plausibility of an inductive inference. For instance, you want the generalization you introduce to explain the particular data you’re generalizing from, and one of the most obvious analyses of explanation uses the notion of deducibility. Also, you don’t want your new generalization to contradict your other data and the other generalizations you have introduced before: but hey, isn’t the notion of consistency highly dependent on your notion of derivability?

Having said that, I also have to emphasize that this doesn't mean I take non-deductive inferences (whatever they are) to be uninteresting; indeed, the question of how we come to accept certain beliefs other than by deducing them (whatever this consists in) from other beliefs is a very hard and interesting problem. What I oppose, rather, is drawing cut-and-dried lines between these types of reasoning and saying that only one of them is interesting.

Saturday, August 22, 2009

A book on adaptive logics in progress...

Diderik Batens is working on a book about adaptive logics. He has made drafts of the first few chapters available online and invites comments. Here.

Monday, August 17, 2009

Frames, Frames and Frames

1. The paper on dynamic frames has been accepted and is forthcoming in the Logic Journal of the IGPL. As I understand their self-archiving policy, it can't be publicly accessible for 12 months after it's published by OUP. Hence, I'm making the final version available now; it'll be available till the official publication. If you feel like grabbing it before it disappears, it's here.

2. In the same vein, in Ghent this Friday (August 21) we're having a mini-workshop on frame theory. If you're around at that time, feel free to swing by. There's gonna be an outing afterwards.

Title: Frames, Frames and Frames

Time: Friday, August 21. 17:00-19:00 (There will be three talks, 30 minutes each + discussion)

Place:
Room 2.19, Centre for Logic and Philosophy of Science, Universiteit Gent, Blandijnberg 2

Talks:

1. Capturing dynamic frames. It's based on the paper I just mentioned: I explain what frames are, how certain frames can be expressed by sets of first-order formulas, and how an adaptive strategy can be applied to reasoning with a conceptual framework when faced with an anomaly.
2. Induction from a single instance and dynamic frames. It reports the content of a joint paper with Frederik Van De Putte; basically, we discuss how the background knowledge needed for a distinction between plausible and implausible cases of induction from a single instance can be formulated within frame theory, and how the theory provides a nice framework for talking about this sort of reasoning as relying on certain second-order inferences.
3. Similarity and dynamic frames. I talk about Bugajski's algebraic semantics for the similarity relation, indicate its weaknesses, and provide a relational semantics that's simpler and satisfies more of Williamson's requirements for a 4-place similarity relation. Then I discuss Bugajski's argument to the effect that interesting similarity structures can be generated by a set of properties only if those properties aren't sharp. To criticize it, I describe how non-trivial similarity structures can be generated by sets of sharp properties, if these are viewed within the framework of dynamic frame theory.


Monday, August 10, 2009

NCM 09 (part 2)

... and the postponed report on Non-Classical Mathematics 2009 continues...

The second talk was given by Giovanni Sambin. He talked about his minimalist foundation and a way constructive topology can be developed over it. It's quite interesting to see how much stuff can be done constructively. Also, Giovanni is a devoted and really charming constructivist. I was chatting with him at a pub one night, and it was only when I found myself almost converted to constructivism that I knew it was time to go home.

By the way, among the many inaccurate things that are being said about Godel's theorem (like these) you can find the remark that Godel's incompleteness and undefinability proofs/theorems don't work in intuitionistic mathematics. Actually, they do. And the person to talk to is Giovanni, who worked out all the details, making sure everything is constructive.

Arnon Avron talked about a new approach to Predicative Set Theory. Roughly, the underlying principles of predicative mathematics are:
I. Higher-order constructs are acceptable only when introduced through non-circular definitions referring only to constructs introduced by previous definitions.

II. The natural number sequence is a well understood concept and as a totality it constitutes a set.
It is well known that Feferman has pursued the project and has shown how a large part of classical analysis can be developed within it. The system, however, is not too popular, partially because it uses a rather complex hierarchy of types, which makes the theory more complicated than, say, ZFC.

Arnon Avron discussed an attempt to simplify predicative mathematics by getting rid of the type hierarchy and developing a type-free predicative set theory. The idea is that the comprehension schema is restricted to those formulas that satisfy a syntactically defined safety relation between formulas and variables. The relation resembles a syntactic approximation to the notion of domain-independence used in database theory, and the intuition is that acceptable formulas define a concept in an acceptable way independent of any extension of the universe.

A replacement for Journal Wiki

Here is a new database (by Andrew Cullison) gathering data pertaining to experiences with philosophy journals. I mentioned before that it was in preparation. Now it seems to be up and running (although the import of the journal wiki data is yet to happen). It is certainly more user-friendly than its predecessor.

Tuesday, July 28, 2009

Tahko's paper on modal epistemology online

I see that Tuomas Tahko, besides posting a bunch of pictures from his recent trip, has also posted his paper on modal epistemology. It's quite interesting. Title and details below.

Two-Dimensional Modal Semantics, Conceivability, and Modal Epistemology

ABSTRACT The combination of two-dimensional modal semantics and conceivability purports to be very powerful: it upholds modal rationalism, explains a posteriori necessity, and even accounts for metaphysical impossibilities—all this while committing to only one modal space, conceptual modality. In this paper I will examine whether two-dimensional modal semantics and conceivability can produce a complete account of modal epistemology and argue that they cannot. We will see that the framework fails to account for metaphysical modality or to deal with metaphysically substantial, essentialist statements because it is unable to distinguish between trivial and substantial modal truths.

Saturday, July 18, 2009

Leszek Kołakowski has passed away

Leszek Kołakowski, an important figure in political philosophy and an interesting Polish thinker, has passed away.

Brian Leiter, despite his severe criticism of Kołakowski, actually cared to post a short note about this, too. [pointed out in personal communication by Ziel, of Polish blog Jakies Przepisywania z Prasy Wszelakiej fame]

Wednesday, July 15, 2009

Leitgeb, "about", Yablo (again)

A paper I already mentioned has been accepted and is coming out in Logique et Analyse soon. I'm keeping the copyright and I like open-access stuff, so the most recent version is available here. Title and (updated) abstract below:

Leitgeb, "about", Yablo

Leitgeb (2002) objects against the clarity of the debate about the alleged (non-)circularity of Yablo's paradox, arguing that there are actually two notions of self-reference and circularity at play. One, on which Yablo's paradox is not circular, is defined via the reference of the constituents of a sentence, and another, on which the paradox is circular, is defined via syntactic mappings and fixed points. More importantly, Leitgeb argues that both definitions aren't satisfactory and that before we can undertake a serious debate about the circularity of Yablo's paradox we first need to clarify the notions involved. I will focus on Leitgeb's criticism of the first definition and will argue that the problems arise not as much on the level of our definition of circularity as on the level of our definition of reference of sentences (aboutness). Leitgeb's main worry is the failure of a requirement called `Equivalence Condition', which says that if a formula is self-referential, any formula logically equivalent to it should also be self-referential. I will argue that preservation under logical equivalence is unreasonable with respect to self-reference, but is indeed needed with respect to aboutness. Since Leitgeb's own tentative notion of aboutness doesn't satisfy the requirement, I will suggest another approach which fixes this problem. I also explain why the intuitions that circularity should satisfy the equivalence condition are misled. Next, I argue that the new notion of aboutness is not susceptible to slingshot arguments. Finally, I compare it with Goodman's notion of absolute aboutness, emphasizing those features of Goodman's approach that make his notion inapplicable in the present discussion.
I would like to express my gratitude to all the people who discussed earlier versions of this paper with me: Hannes Leitgeb, Jeffrey Ketland, Karl Georg Niebergall, Diderik Batens, Joke Meheus, Maarten Van Dyck, Stefan Wintein, Martin Bentzen, Christian Strasser, Ghent Centre for Logic and Philosophy of Science members, and the participants of PhDs in Logic workshop (Gent 2009).
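
(For readers who haven't run into it: Yablo's paradox concerns the infinite sequence of sentences S1, S2, S3, ..., where each Sn says: for every k > n, Sk is not true. No sentence in the sequence mentions itself, yet no consistent assignment of truth values to the sequence exists - which is what fuels the debate over whether the paradox is genuinely non-circular.)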

Wednesday, July 8, 2009

NCM 09 (part 1)

As promised, I begin a series of posts about Non-Classical Mathematics 2009. (I've just started using this LaTeX editor for the internet, so the formulas look kinda weird; I should get used to the system within a couple of weeks.)

The conference started with Greg Restall's talk titled Theories, Co-Theories & Bi-Theories in Non-Classical Mathematics. In the non-classical setting the assertion of a negation of a formula and its denial are different things. Those who accept gluts will assert negations of certain formulas without denying the formulas themselves. Those who accept gaps will deny certain formulas without asserting their negations.

Now, in the setting of a mathematical theory we’re dealing with a consequence relation ⊢ such that for any A and B, if A ⊢ B, then asserting A and denying B is a clash. This generalizes to sets of formulas:

X ⊢ Y iff asserting every member of X and denying every member of Y is a clash.

The rules we buy into unconditionally are at least these:

[Id] X, A ⊢ A, Y
[Weakening] if X ⊢ Y, then X, X′ ⊢ Y, Y′

There are two interesting negation rules:

[~L] if X ⊢ A, Y, then X, ~A ⊢ Y
[~R] if X, A ⊢ Y, then X ⊢ ~A, Y

Both rules hold if there are no gaps and no gluts. If there are gaps, [~R] doesn’t work. If there are gluts, [~L] doesn’t hold, and if there are both gaps and gluts, neither rule works. Perhaps one might want to add other inference rules, but let’s not be bothered by these issues.

Recall now that T is a theory iff for any A:

if T ⊢ A, then A ∈ T.

In the non-classical setting, if you want to avoid clash (which you want to avoid even if you allow for gaps or gluts), you should assert whatever belongs to the theory you're committed to. A theory, however, doesn't tell you which formulas you should deny (for instance, ~A belonging to the theory only tells you that you should assert the negation of A, but from this, it still doesn't follow that you shouldn't assert A itself).

Greg then goes on to introduce theory-like notions that tell one not only which assertions one has to make, but also which denials. The first is the notion of a cotheory. U is a cotheory iff for every A:

if A ⊢ U, then A ∈ U.

The intuition here is that U is a set of unassertable sentences, and the definition mirrors the fact that if something is not to be asserted, then nothing that entails it should be asserted either.

Now, combine these two notions to construct a thing that tells you what to accept and what to reject. A pair (T, U) is a bitheory iff for every A:

if T ⊢ A, U, then A ∈ T; and if T, A ⊢ U, then A ∈ U.

Again, the intuition is that T is what's to be accepted, and U is what's to be denied.
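
To fix intuitions, here is a toy sketch of these closure conditions (entirely my own illustration, over a made-up three-sentence language whose only non-trivial consequences come from conjunction):

    # Toy theory/cotheory/bitheory checks over the language {A, B, A&B}.
    # X |- Y is read: asserting all of X and denying all of Y is a clash;
    # here it holds iff some member of Y is derivable from X.
    SENTENCES = {"A", "B", "A&B"}

    def entails(X, Y):
        derivable = set(X)
        if "A&B" in X:
            derivable |= {"A", "B"}      # A&B |- A and A&B |- B
        if {"A", "B"} <= set(X):
            derivable.add("A&B")         # A, B |- A&B
        return bool(derivable & set(Y))

    def is_theory(T):
        # T is a theory iff T |- A implies A is in T
        return all(a in T for a in SENTENCES if entails(T, {a}))

    def is_cotheory(U):
        # U is a cotheory iff A |- U implies A is in U
        return all(a in U for a in SENTENCES if entails({a}, U))

    def is_bitheory(T, U):
        # (T, U) is a bitheory iff T |- A, U implies A in T,
        # and T, A |- U implies A in U
        return (all(a in T for a in SENTENCES if entails(T, set(U) | {a})) and
                all(a in U for a in SENTENCES if entails(set(T) | {a}, U)))

    print(is_theory({"A", "B", "A&B"}))           # True: closed under consequence
    print(is_theory({"A", "B"}))                  # False: A, B |- A&B, but A&B is missing
    print(is_cotheory({"A", "A&B"}))              # True: whatever entails it is already in it
    print(is_bitheory({"A", "B", "A&B"}, set()))  # True: nothing needs denying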

The rest of Greg's talk was devoted to applying these ideas to non-classical theories of numbers, classes and truth.

Monday, July 6, 2009

Lecture notes on PA

I see Konrad Zdanowski (who worked with M. Mostowski) is now at Paris 7 and has posted his lecture notes on Peano Arithmetic. Here. Neat.

Elsevier turned to the dark side

Not long ago, Elsevier was shown to be quite dishonest, publishing a fake journal for money. Here's another embarrassing thing about Elsevier, which should convince you that open-access, independent online journals are becoming a serious alternative to the old-school venues (thanks to Brian Leiter for linking).

Change & contradiction

Over at Blog&~Blog, Ben Burgis has a nice post about Graham Priest's theory of change. He also raises certain difficulties for the theory. One of the objections is that if we admit that change involves contradiction, then Priest's probabilistic argument for classical re-capture ("contradictions are rare, so most of the time we are allowed to use classical rules, even if they aren't really valid") seems to fail. Even though I'm not a dialetheist myself, I'm still wondering how damaging this objection is, so I posted a comment with a sketch of a possible way out for the dialetheist. More remarks over at Blog&~Blog.

Sunday, July 5, 2009

A few papers reach daylight

My long-in-the-drawer mini-trilogy about doxastic synonymy and slingshot arguments has finally reached daylight, published in The Reasoner. Here (starting on p. 4), here (starting on p. 5), and here (starting on p. 4). (I started thinking about these things in 2006 in a seminar on truth given by Prof. Ali Kazmi).

I must say, my experience with The Reasoner is quite positive, and not because they accepted the paper(s), but rather because:
  • Their feedback was really quick (three weeks or so).
  • Nevertheless, I had three competent reviewers.
  • Their helpful comments were forwarded to me together with an initial R&R.
Given that there are places where your paper might be stuck for almost a year, or places that either don't justify their negative decisions or send along pretty weird reviews, The Reasoner's way of handling things is certainly praiseworthy. Of course, this result is partially obtained by severe wordcount limits; yet saying stuff in as few words as possible is quite an interesting challenge. So, if you have something interesting to say and it doesn't take too many words, The Reasoner is a venue worth considering!

One more thing: I also see that the paper on the definability of identity in higher-order languages (I talked about it some time ago) is now officially available through the Australasian Journal of Logic. In my experience (two papers with the AJL), feedback time and quality are really good, and I really like its being open access.

Saturday, July 4, 2009

Rating Journals, again

Some time ago I advertised Philosophy Journals Wiki. As Douglas Portmore (who started the project) points out, it has certain disadvantages. Over at PEA Soup he discusses these things and points to an interesting attempt to replace Wiki with something more convenient.

p.s. I will get around to posting about NCM 09. Soon.

Friday, June 19, 2009

Non-Classical Mathematics 2009 (introductory remarks)

Finally, I can write something about what's going on now. I'm in Hejnice, lodged in a cell (it's quite comfortable, though) in a monastery pretty much in the middle of the Czech mountains. I was here two years ago, but this time I have wireless internet access. It's pretty cool.

So, what's going on? Well, it seems Non-Classical Mathematics (surprisingly?) attracted more mathematicians than philosophers and logicians. In fact, most of the people present are rather mathematically minded. This, of course, is not a complaint. For a philosopher (or a philosophically minded logician, for that matter), dealing with mathematicians is a bit of a challenge, though. They usually spend less time looking for philosophical motivations for their work and more time doing real mathematics. This means that if you're a philosopher, listening to mathematical talks will require more effort. You have to overcome the first impression that people sometimes get into extremely complicated technical issues without explaining why we should be interested in them. I mean, I'm pretty sure these people know what they're doing and why they're doing it, but the standards they employ for motivating technical work are a bit different. Also, this stuff is often more complicated than most philosophically motivated work, so it's a bit more difficult to follow (which means it's easy for me to feel somewhat out of my depth when faced with all those technical results).

There are, however, also certain clearly positive aspects to this experience. For instance, a few brief conversations I had here confirm my view that doing mathematics doesn't require one to have a clear philosophical position about what mathematics is about (I mean, this is not deeply surprising; I've talked with mathematicians before). For instance, a guy who works on weak set theories, when asked about his view on what set theory is about, cheerfully said something like: "I don't care, you know, it's a theory, I play around with it, prove stuff and that's it - what else would I need to know?" It's refreshing. This also means that philosophers of mathematics can still claim there's something for them to do.

This reminds me of P. F. Strawson's remark about analysis of concepts used by specific theorists:
The scientific specialist [...] is perfectly capable of explaining what he is doing with the special terms of his specialism. He has an explicit mastery, within the terms of his theory, of the special concepts of his theory [...] the specialist may know perfectly well how to handle these concepts inside his discipline, i.e. be able to use them perfectly correctly there, without being able to say, in general, how he does it. Just as we, in our ordinary relations with things, have mastered a pre-theoretical practice without being necessarily able to state the principles of the practice, so he, the scientific specialist, may have mastered what we may call a theoretical practice without being able to state the principles [...] a mathematician may discover and prove new mathematical truths without being able to say what are the distinctive characteristics of mathematical truth or of mathematical proof [...] even operating within his own specialism, a specialist was bound to employ concepts [...] from the fact that he there employs them quite correctly, it by no means follows that he can give a clear and general account or explanation of what is characteristic of their employment in his specialism. [Analysis and Metaphysics]
There is a downside to this. If you discuss philosophical aspects of mathematical concepts, mathematicians quite likely won't give a rat's ass about it. I mean, it's to be expected, just as you don't expect a competent kettle user to be interested in someone's specification of sufficient and necessary conditions for something to be a kettle (or an ink-spiller to be interested in philosophically interesting ways of spilling ink). Just as a mathematician might have a hard time convincing philosophers that the complex questions he's trying to answer actually matter, a philosopher might have a hard time convincing mathematicians that philosophical considerations about mathematics have some relevance.

Of course, there is no clear-cut distinction between the mathematicians and the philosophers. What I've given is a very simplified sketch of the extremes of a very interesting and often fruitful tension.

Anyway, all this seems to have some bearing on the Burgess-Rosen critique of nominalist reconstructions of mathematical theories. The gist of the critique is this: when you give a reconstruction, either you give something different from what mathematicians actually have in mind, and thus put forward a revolutionary view of mathematics (which is highly impractical, because you're suggesting that new textbooks have to be written, that mathematics in schools should be changed, etc.), or you claim that your theory is an actual analysis of what they're doing, and then you have to show that this really is what they have in mind. Now, it seems to me that mathematicians usually don't have anything philosophical in mind at all when they're doing mathematics, just as we don't have a correct analysis of our everyday concepts when we use them. Thus, giving a nominalistic reconstruction is neither a suggestion that mathematics should be revised (in fact, I believe a correct nominalistic story about mathematics should rather suggest that everything's okay with mathematics and it shouldn't be changed), nor a theory of what mathematicians have in mind. It's rather a proposal as to how a philosopher can make sense of mathematical activity and mathematical truth without being committed to abstract objects. And what would making sense consist in? Well, telling a nominalistically acceptable story which would be consistent with one's philosophical views and which would allow one to understand, on the philosophical level, how mathematics can be true and yet applicable. Sort of.

Having said all this, I will switch back to the reporting mode now and post some more detailed remarks on the content of the talks some time soon.

FMER, Leuven, June 10-12, cont'd

Day 3, after lunch

Lara Buchak (joint work with Branden Fitelson, who couldn't make it to the conference) - Is it rational to have faith? - Lara was trying to cash out what having faith commits one to, and on the analysis she presented, faith in X requires that one not actively look for further evidence for the truth or falsity of X. This move seems to collide with the expected-utility theory of rationality. She then argued that the claim that expected-utility maximisers should always perform cost-negligible experiments neglects the phenomenon of risk aversion. It turns out that for individuals who take risk into account in a certain way, it is sometimes rational to refrain from gathering further evidence.

One issue was raised, if I remember well, by Joshua - namely, if this is the way you understand faith, Richard Swinburne doesn't have faith, for he actively looks for evidence pertaining to the truth of religion. Perhaps (now that I think of it) this can be circumvented by saying that faith in X requires one not to actively look for further evidence for the falsity of X (it depends on how one construes Swinburne's thought, but one way to see this is to think that he does actively look for evidence in support of religion, but doesn't actively look for evidence against religion).

Another issue is that the notion of evidence for/against religion is quite elusive. Lara used the example of prayers: pray for something and see if you get it. But it seems that (at best) what you're testing this way is the conjunction of some of your religious beliefs and the claim that your will agrees with the will of God. Also, what counts as a test of, or evidence for or against, religion is highly theory-dependent. You can go Swinburnian about this and think that no actual event whatsoever is evidence against religion because, given certain considerations, everything that is happening should be happening if (his version of) theism is true. You can go more Tooleyan about this and count every event that prima facie should not happen as evidence against the truth of religion. Perhaps it's just me being confused, but I don't think we have a good understanding of a test that both a theist and an atheist would agree upon, so that the cost of performing it is negligible (given the negligibility requirement, for instance, Hick-style die-and-see-what-happens is out of the question).

The conference started with a slightly apologetic talk by Swinburne. It ended with a rather atheistic talk by Herman Philipse. He gave a series of short arguments against the claim that there is a C-inductive argument from the Big Bang to the existence of God. Briefly, if h is theism, e is the occurrence of the Big Bang, and k is tautological background knowledge, Swinburne argues that Pr(e|h&k) > Pr(e|k). The first point Philipse makes is that, given that God would want to create humans and given that the probability that the cosmic singularity will result in there being humans is quite low, it seems that Pr(e|h&k) is very low. Another point pertains to Swinburne's claim that Pr(e|k) is very low. Since Pr(e|k) = Pr(e|h&k)Pr(h|k) + Pr(e|~h&k)Pr(~h|k), we need to know the probabilities Pr(e|~h&k) and Pr(~h|k). Philipse argued that Pr(e|~h&k) > Pr(e|h&k). He also attacked Swinburne's use of the simplicity criterion. Since this was directed against Swinburne, who was present, quite an interesting discussion followed.
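To see why that last inequality matters: by the expansion above, Pr(e|k) is a weighted average of Pr(e|h&k) and Pr(e|~h&k), so if Pr(e|~h&k) > Pr(e|h&k), then Pr(e|h&k) < Pr(e|k) and the C-inductive argument fails. A toy check (all the numbers below are made up purely for illustration, not anyone's actual assessments):

```python
# Toy check of the total-probability expansion behind Philipse's point.
# Every probability here is a made-up placeholder.
pr_h = 0.5            # Pr(h|k): prior of theism
pr_e_if_h = 0.2       # Pr(e|h&k)
pr_e_if_not_h = 0.3   # Pr(e|~h&k); per Philipse, greater than Pr(e|h&k)

pr_e = pr_e_if_h * pr_h + pr_e_if_not_h * (1 - pr_h)   # Pr(e|k) = 0.25

# Swinburne's C-inductive condition Pr(e|h&k) > Pr(e|k) then fails:
print(pr_e_if_h > pr_e)   # False
```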

As for Bayesian-style arguments for/against God's existence, I'm rather sceptical. The problem is, even if the math adds up, they all rest on primitive assessments of the probability of things like "the Big Bang occurs" relative to the existence of God, or relative to the negation of his existence, and many other probabilities of this sort. When asked questions like "what's the probability that intelligent beings like humans exist, given the hypothesis that there is no god and no multiverse?" or "what's the probability that the Big Bang occurs, given the hypothesis that God exists?", I'm just inclined to say: I have no idea. I would love it if (a) someone explained to me the notion of probability at play, and (b) showed me how, on this notion of probability, the probability claims involved can be assessed without hand-wavy and practically untestable extra assumptions. Perhaps I'm just a frequentist and haven't seen too many worlds being created. My bad.

FMER, Leuven, June 10-12, cont'd

Day 3, before lunch

Edward Wierenga's talk, titled Developing Molinism, employed fairly complex modal stuff (you know, actuality, counterfactuals and all that) to help formulate Molinism, the view that God has knowledge of propositions intermediate between those that are necessarily true and independent of God's will or creative activity, and those that are contingently true and dependent on God's will. These intermediate propositions are contingently true propositions not dependent upon God's will (in the intended interpretation: propositions about future but free actions of men).

This knowledge is often taken to be a knowledge about certain counterfactuals (like "If Adam were placed in the Garden of Eden, he would freely eat the forbidden fruit"). This knowledge would assist God in devising the world so that it is the best world possible without interfering with human free decisions. The technical problem is that it's difficult to find right truth-conditions for counterfactuals of this sort which satisfy all the desiderata. Wierenga first discussed his original view (that Plantinga's conditionals of world actualizations can do the job) and criticized it, and then presented another suggestion, employing tense considerations.

Paul Bartha, talking about Many gods, many wagers, discussed in detail the many-gods objection against Pascal's wager. He then argued that, given the evolutionary stability condition on probabilistic reasoning (roughly, the condition that after making a bet, no further probabilistic considerations of the state of affairs after making the bet will make you change your mind), the many-gods objection doesn't raise any difficulties beyond those the classical version of Pascal's argument already encounters.


David Glass - Can evidence for design be explained away? - An obvious way to counter a design argument is to provide an alternative explanation. For instance, evolutionary theory is taken to render certain design arguments unconvincing. The problem is, certain versions of design arguments are compatible with alternative explanations - so why accept both? Well, don't, if there's no need to! The technical question, however, is when one explanation is good enough to render the other redundant. Are there cases where it is better to accept both explanations than only one of them?

David addresses these issues within the Bayesian framework. Even if two explanations are marginally independent, they typically become negatively dependent when one conditions on the evidence they explain. So, if one explanation is found to be true, this lowers the probability of the other explanation. There are, however, two importantly different possible outcomes. Say a design hypothesis D has a certain prior probability Pr(D). Next, suppose it receives confirmation from evidence E, so that Pr(D|E) > Pr(D). Then we find out that an alternative explanation A is true, which lowers the probability of D. In the first kind of outcome, Pr(D|E&A) is no higher than Pr(D), so the initial confirmation of D by E has been completely negated. In the second, Pr(D|E&A) is higher than Pr(D) but lower than Pr(D|E). Given that only the first kind of outcome counts as explaining away, it turns out that it is very difficult to come up with an alternative theory that completely explains away the evidence for design.
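A minimal sketch of the setup, assuming a toy joint distribution in which D and A are marginally independent and E depends on both (all numbers invented, chosen here to exhibit the second, incomplete-explaining-away outcome):

```python
# Explaining away with made-up numbers. D = design hypothesis,
# A = alternative explanation, E = the evidence both can explain.
pr_d, pr_a = 0.1, 0.2   # marginally independent priors
pr_e_given = {(1, 1): 0.99, (1, 0): 0.9, (0, 1): 0.9, (0, 0): 0.01}

def joint(d, a, e):
    p = (pr_d if d else 1 - pr_d) * (pr_a if a else 1 - pr_a)
    pe = pr_e_given[(d, a)]
    return p * (pe if e else 1 - pe)

pr_e = sum(joint(d, a, 1) for d in (0, 1) for a in (0, 1))
pr_d_given_e = sum(joint(1, a, 1) for a in (0, 1)) / pr_e
pr_d_given_ea = joint(1, 1, 1) / (joint(1, 1, 1) + joint(0, 1, 1))

print(pr_d, pr_d_given_e, pr_d_given_ea)
# 0.1, ~0.35, ~0.11: learning A undoes much, but not all, of the
# confirmation E gave to D -- the second kind of outcome.
```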

FMER, Leuven, June 10-12, cont'd

This is getting overwhelming. I haven't finished posting about FMER and I'm already at another conference (Non-Classical Mathematics) that I'd also love to blog about. I'll do my best to complete the FMER mini-series as soon as possible.

Day 2, after lunch

Alan Hajek talked about Blaise and Bayes. He first surveyed a few variants of arguments usually given as reconstructions of Pascal's wager in terms of "dominance" and "expected utility". It was fun, especially since he also showed that they're invalid, for reasons that are slightly surprising and yet, once pointed out, quite obvious. He then discussed certain emendations that can be made to salvage the wager.

Joshua Thurow's talk was titled Does religious disagreement actually aid the case for theism? Disagreement trailblazing for the miraculous. He pointed out that disagreement about an inferentially-based belief may not automatically force one to suspend judgment en bloc. Divide the evidence for and against religions into two sets: A - the testimony to the occurrence of miracles, and B - everything else. Suppose there is enough disagreement about the evidence in B that, considered alone, B supports suspending judgment about all religious belief. Then, using Bayes's theorem, Joshua argued that if A includes even moderate testimonial evidence for the occurrence of a miracle, then A and B together support whatever theistic religion is most supported by the testimony in A.

Michael Tooley discussed The probability that God exists. He employed a Carnapian-style structure-description approach to inductive logic to arrive at an upper bound on the probability that God exists, given only the information that the world contains n events each of which is such that, in the light of the totality of known rightmaking and wrongmaking properties, it would be morally wrong to allow the event in question.

Given that there are n such events and k unknown morally significant properties, the probability that none of those n actions is wrong all things considered, argues Tooley, is less than (k/(k+1))(1/(n+1)). So, he argued, the probability that God exists must be less than 1/(n+1).
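Just to get a feel for the magnitudes (the values of n and k below are made up):

```python
# Tooley's upper bounds, evaluated for made-up values of n and k.
n = 100   # events that appear wrong given the known morally significant properties
k = 10    # unknown morally significant properties

bound_none_wrong = (k / (k + 1)) * (1 / (n + 1))   # ~0.0090
bound_god_exists = 1 / (n + 1)                     # ~0.0099
print(bound_none_wrong, bound_god_exists)
```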

One idealizing assumption that Tooley seems to make is that the total moral status of an action is assessed in terms of the number of morally relevant properties (rightmaking vs. wrongmaking), known or unknown. I think it's unlikely a theist would buy into this: they might insist that some (especially unknown) properties are more important, and merely counting them (especially since it's not really obvious how to individuate properties so that counting makes sense) won't help to assess the moral status of an action.

Monday, June 15, 2009

Time Travel paper online

At FMER I had an opportunity to chat with Michael Tooley about time travel. Two years ago I wrote a paper about Tooley's example of loopless time travel and conditional logics. Michael put forward this example to indicate that if Lewis-Stalnaker semantics for conditional logics is adequate, then there are impossible cases of backward causation even without causal loops. Later on, the argument was interpreted as an argument against the adequacy of conditional logics from the possibility of time travel (I recall that seemed to be the interpretation of Charles Cross; I was commenting on his talk at WCPA 2006 in Vancouver).

My point was that the impossibility of the situation described not only follows from basic assumptions of LS semantics, and not only can be proven syntactically to hold in many conditional logics (that was Charles' observation), but can also be proven using fairly weak assumptions, weaker than those of Charles - and that the possibility of the situation is not very intuitive to start with (thus I rather sided with Michael, emphasizing that even without causal loops time travel can be tricky).

The chat reminded me of this paper, so I dug it up and posted it to my academia profile. It's here.

FMER, news from the trenches (cont'd)

Here's Day 2, before lunch. *

The day started with a talk by Benjamin Jantzen titled Peirce on Miracles: The Failure of Bayesian Analysis. Benjamin started with a brief explanation of Hume's criticism, according to which no testimony could be sufficient to justify belief in a miracle, given that the probability of fraudulent or mistaken testimony is always greater than the probability of the miracle occurring. He then considered Hume's argument as an instance of Bayesian probabilistic inference. The basic idea is that the probability of a miracle having occurred, given various testimonies to that effect, is computed from the probability that each witness would report accurately given the occurrence of the miracle, the joint probability of the occurrence of such a collection of testimonies, and the antecedent probability of the miracle. Finally, Ben argued that, given the Peircean criticism of the Bayesian approach, probabilistic analysis of this sort is seriously flawed. The main gist of the criticism is that:
  • There is no such thing as the objective veracity of a witness. To apply the Bayesian method, we need to know the probability that a witness judged accurately and told the truth in a particular instance. The details of that instance cannot be replicated even in principle, so we have no class of sufficiently similar events from which to build a sample space (this objection hinges on Peirce's frequentist account of probability).
  • Even if we grant such a thing exists, it doesn’t satisfy the independencies required by the method of balancing likelihoods employed in the argument. What leads one witness into error tends to lead others into the same error.
  • History tends to preserve only the positive assertions of the extraordinary, and this biases the computed posterior probability. When we hypothesize the occurrence of a miracle on the basis of some set of testimonies, we are not rational in using those same testimonies to determine the probability that this hypothesis is true.
The main positive lesson is that after constructing an abductively valid hypothesis we should gather independent data for an inductive phase.
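For concreteness, here's a toy version of the balancing-of-likelihoods computation under attack; note that it bakes in exactly the cross-witness independence that the second objection denies (all numbers invented):

```python
# Posterior probability of a miracle M given n independent witness reports,
# via Bayes's theorem. The independence across witnesses assumed here is
# precisely what Peirce's second objection rejects.
prior = 1e-6          # antecedent probability of the miracle (made up)
acc_if_m = 0.99       # Pr(witness reports M | M), made up
acc_if_not_m = 0.01   # Pr(witness reports M | ~M), made up

def posterior(n_reports):
    like_m = prior * acc_if_m ** n_reports
    like_not_m = (1 - prior) * acc_if_not_m ** n_reports
    return like_m / (like_m + like_not_m)

print(posterior(2))   # ~0.0097: two "reliable" witnesses leave M improbable
print(posterior(4))   # ~0.99: with independence, reports quickly swamp the prior
```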


Lydia and Tim McGrew talked about The Reliability of Witnesses and Testimony to the Miraculous. They started with Condorcet's formula for the probability of an event, given a witness's testimony to it:
pt / (pt + (1-p)(1-t))
where p is the antecedent probability of the event and t is the reliability of the witness (the probability that he testifies truthfully). Then they tracked the subsequent changes that led to the formulation of Bayes's Theorem. Indeed, Condorcet's account seems like a particular instance of Bayes's theorem, given the similarity between his formula and:
P(H)P(tH|H) / (P(H)P(tH|H) + P(~H)P(tH|~H))
where tH is the testimony that H occurred. In particular, Condorcet's formula is a special case resulting from three limiting assumptions (the sketch after the list checks this numerically):

  • The witness is equireliable – he is equally likely to tell the truth about H regardless of whether it occurs.
  • The witness is forthcoming – he would not be silent on the subject of H had it not occurred.
  • Testimony regarding H is restricted to YES or NO.
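Here's a quick numerical check (with made-up p and t) that, under the equireliability assumption, Condorcet's formula coincides with the Bayesian posterior:

```python
# Condorcet's formula as the equireliable special case of Bayes's theorem.
# p and t below are made-up numbers, purely for illustration.

def bayes_posterior(p, pr_t_if_h, pr_t_if_not_h):
    """Pr(H | testimony tH) via Bayes's theorem."""
    return p * pr_t_if_h / (p * pr_t_if_h + (1 - p) * pr_t_if_not_h)

p, t = 0.01, 0.9   # antecedent probability of the event; witness reliability

condorcet = p * t / (p * t + (1 - p) * (1 - t))

# Equireliability: the witness misreports with the same probability
# whether or not H occurred, so Pr(tH|H) = t and Pr(tH|~H) = 1 - t.
assert abs(condorcet - bayes_posterior(p, t, 1 - t)) < 1e-12

print(condorcet)   # ~0.083: reliable testimony to an improbable event
```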
Lydia and Tim followed the history of the debate surrounding testimony and miracles (Babbage, Reid, Bentham, Hume, Campbell, Venn, Holder, Earman), showing how it led to the rejection of these assumptions.

Finally, they argued that the move from Condorcet's formula to Bayes factors is correct, and that the factors should not be construed as modeling the witness's reliability alone, but rather as a function of a wider range of epistemically relevant factors.


* If you're one of the speakers and think your view is misrepresented, drop me a line.