Evening Lectures

All evening lectures are held in the Aula Magna D0.01, start at 19:00, and last around 60 minutes.
  • Tuesday, August 16, 2016: Larry Moss, Indiana University (United States of America), EMCL visiting scholar at the Free University of Bozen-Bolzano. Chair: Enrico Franconi.
  • Thursday, August 18, 2016: Marco Baroni, University of Trento (Italy). Chair: Raffaella Bernardi.
  • Tuesday, August 23, 2016: Verónica Becher, University of Buenos Aires and CONICET (Argentina), EMCL visiting scholar at the Free University of Bozen-Bolzano. Chair: Philippe Schnoebelen.
  • Thursday, August 25, 2016: Louise McNally, Universitat Pompeu Fabra (Spain). Chair: Gemma Boleda.

Titles and Abstracts

Larry Moss, Indiana University

Natural Logic

Tuesday, August 16, 19:00


Abstract: Much of modern logic originates in work on the foundations of mathematics. My talk reports on work in logic that has a different goal: the study of inference in language. This study leads to what I will call “natural logic”, the enterprise of studying logical inference in languages that look more like natural language than standard logical systems. I will sketch the history of this field, and I will also try to make as many connections as possible to courses at this year’s ESSLLI and to previous schools. I will also discuss computer implementations of natural logic.


Marco Baroni, University of Trento

Will computers ever be able to chat with us?

Thursday, August 18, 19:00


Abstract: The last few years of research in Artificial Intelligence have seen the incredible success of so-called end-to-end systems, which can learn to perform difficult tasks just by being exposed to examples of the relevant raw input data and the corresponding output. In computer vision, such systems can now distinguish pictures of Norfolk terriers from those of Yorkshire terriers and thousands of other categories. In game AI, they were recently able to beat one of the top human Go players. Even within the realm of language, end-to-end systems can tackle challenging tasks such as machine translation with performance comparable to that of heavily engineered, manually assembled pipeline systems. And yet, these last-generation end-to-end systems are still utterly failing at the task of having meaningful conversations with us. But failing the conversation challenge, I would argue, means failing at language in general, because language, by its very nature, is an interactive tool. Children do not learn to speak (just) by sitting in front of a TV screen; and direct or device-mediated conversation is the most common way in which grown-ups use language in everyday life. Moreover, a short conversational exchange is the fastest way, for children as well as for grown-ups, to acquire new words or new skills: for example, when we request and are provided explicit instructions about how to accomplish a task. In my lecture, I will present some conjectures on why linguistic interaction is so much harder than the interaction taking place in a match of Go, and I will suggest some research directions we should pursue in the next few years if we want to finally enable computers to chat with us.


Verónica Becher, Universidad de Buenos Aires & CONICET

Randomness!

Tuesday, August 23, 19:00


Abstract: Would you believe that these three sequences were obtained by tossing a fair coin, writing 0 for heads and 1 for tails?
111111111111111111111111111111111111111111…
01001000100001000001000000100000001000000001…
100101010110001101110100010010101111001001…
Everyone has an intuitive idea of what randomness is, often associated with “gambling” or “luck”. We say “Lady Luck is fickle” because she is impossible to predict: lawless, without patterns. Thus, heads and tails should occur with the same frequency in the limit, and the same should hold for combinations of heads and tails; otherwise we could guess right more often than we would fail.
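
To make the equifrequency test concrete, here is a minimal sketch in Python (my own illustration, not part of the lecture), run on the prefixes shown above; only the third sequence has digit and block frequencies close to those expected from a fair coin:

    from collections import Counter

    sequences = {
        "all ones":  "1" * 42,
        "ruler":     "01001000100001000001000000100000001000000001",
        "irregular": "100101010110001101110100010010101111001001",
    }

    for name, s in sequences.items():
        freq1 = s.count("1") / len(s)                          # frequency of 1s
        blocks = Counter(s[i:i+2] for i in range(len(s) - 1))  # overlapping 2-blocks
        print(f"{name:>9}: freq(1) = {freq1:.2f}, 2-blocks = {dict(blocks)}")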

In this lecture I will explain and give formal answers to the following questions.

What is the definition of randomness?

A random sequence is indistinguishable from independent tosses of a fair coin. There was no satisfactory definition until the mid-1960s. Per Martin-Löf gave one (1966) and Gregory Chaitin gave another (1975), both based on the notion of algorithm. The fact that the two were shown to be equivalent was decisive in accepting this as the right definition of randomness.

Can a computer output a purely random sequence?

The famous quote “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin” (John von Neumann, 1951) gives us the answer: no.
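
Von Neumann’s point can be made concrete with a minimal sketch in Python (my illustration; the constants are the classic glibc rand parameters): an arithmetical generator is a short program plus a seed, so its entire output is compressible to that description and therefore cannot be purely random.

    def lcg_bits(seed, n, a=1103515245, c=12345, m=2**31):
        # Linear congruential generator: the whole sequence is fixed by the seed.
        x = seed
        for _ in range(n):
            x = (a * x + c) % m
            yield (x >> 30) & 1  # emit the top bit of the 31-bit state

    run1 = "".join(str(b) for b in lcg_bits(seed=2016, n=40))
    run2 = "".join(str(b) for b in lcg_bits(seed=2016, n=40))
    print(run1)
    print(run1 == run2)  # True: the "random" digits are fully determined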

Are there degrees of randomness?

Randomness implies lack of regularities, lack of patterns. Pure randomness is defined as incompressibility by a Turing machine. But it is possible to consider other kinds of machines, such as finite automata, pushdown automata, plain Turing machines, or Turing machines with oracles. Different compression abilities yield different degrees of randomness.
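
As a rough, practical illustration of compressibility (my own sketch in Python; zlib is a Lempel–Ziv compressor, a crude stand-in for the machine models above, not Kolmogorov complexity):

    import os
    import zlib

    samples = {
        "constant":  b"1" * 10000,                                   # maximally regular
        "patterned": b"".join(b"1" + b"0" * k for k in range(140)),  # ruler-like pattern
        "urandom":   os.urandom(10000),                              # OS entropy source
    }

    for name, data in samples.items():
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name:>9}: compressed to {ratio:.0%} of original size")

The first two samples shrink dramatically; the urandom sample does not compress at all, which is what incompressibility looks like to this particular (weak) compressor.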

What is the most elementary form of randomness?

Émile Borel defined it in 1909 and called it normality; it requires only that all blocks of digits of the same length occur with the same limiting frequency. Normal sequences are precisely those that are incompressible by finite automata. (My own work is devoted to the construction of normal sequences with selected mathematical properties.)
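
For a concrete check (a sketch in Python, my illustration): the binary Champernowne sequence 1 10 11 100 101 …, obtained by concatenating the binary expansions of 1, 2, 3, …, is known to be normal in base 2, and counting overlapping blocks of a fixed length on a long prefix shows the frequencies approaching 1/2^k (convergence is slow):

    from collections import Counter

    # Prefix of the binary Champernowne sequence: 1 10 11 100 101 110 111 ...
    champernowne = "".join(format(n, "b") for n in range(1, 50000))

    k = 3  # block length
    blocks = Counter(champernowne[i:i+k] for i in range(len(champernowne) - k + 1))
    total = sum(blocks.values())
    for block in sorted(blocks):
        print(block, f"{blocks[block] / total:.4f}")  # each tends to 1/2**3 = 0.125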

What is the relation between randomness and Language?

A purely random sequence is one that, essentially, can only be described by writing it out explicitly. This is formalised by Kolmogorov/Chaitin complexity.
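
Stated symbolically (a standard formulation, added here for orientation, not part of the original abstract): fix a universal machine U; the Kolmogorov/Chaitin complexity of a string x is the length of its shortest description,

    K(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}

and, taking K prefix-free, an infinite sequence is random exactly when some constant c satisfies K(x) \ge n - c for every length-n prefix x: no initial segment has a description appreciably shorter than itself.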

What is the relation between randomness and Logic?

The relation is based on the paradox of “the first non-interesting positive integer”, which is thereby a rather interesting number, isn’t it? The paradox reflects the fact that the majority of numbers are non-interesting, that is, random, although this can never be proved for particular cases. Randomness thus yields an incompleteness result for first-order arithmetic, analogous to Gödel’s incompleteness theorem (which is based on the liar paradox).
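
The incompleteness result can be stated compactly (again a standard formulation, my addition): for every sound, computably axiomatized theory T there is a constant c_T such that

    T \;\nvdash\; K(x) > c_T \qquad \text{for every particular string } x,

even though K(x) > c_T is true of all but finitely many strings. Randomness is abundant, yet unprovable case by case.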

What is the relation between randomness and Information?

Chaitin’s complexity theory is formally parallel to Shannon’s information theory: randomness is equivalent to maximal information content.
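
One precise form of the parallel (a standard result, stated with prefix-free complexity K; my addition): for strings drawn from a computable probability distribution P, expected complexity matches Shannon entropy up to an additive constant depending only on P,

    0 \;\le\; \sum_x P(x)\,K(x) - H(P) \;\le\; c_P, \qquad H(P) = -\sum_x P(x)\log_2 P(x),

so maximally complex (random) strings are exactly those of maximal information content.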


Louise McNally, Universitat Pompeu Fabra

Meaning at a crossroads

Thursday, August 25, 19:00


Abstract: The lexical semantics of content words has historically generated comparatively little interest among formal semanticists, except when a class of content words proves to be sensitive to some sort of logical or grammatical phenomenon. There are some obvious reasons for this: the tools of formal semantics are poorly suited to dealing with the messy details of the lexicon, and many of these details — specifically, those not relevant for logical inference or grammatical phenomena — are not considered part of what a semantic theory should have to account for. Moreover, natural language has provided plenty of problems for semanticists to work on (indeed, with great success), even without getting into the lexicon.

In this talk, I reflect on some of the negative effects of this situation and on my experience when I turned to distributional models, a completely different set of tools from those in which I was trained and with which I have worked for many years, in an effort to better address the interaction of lexical and compositional semantics. In part, the talk will serve as a methodological lesson in how easy it is to forget the adage that, when one only has a hammer, everything looks like a nail. In part, it will provide me with an opportunity to point to some largely ignored paths that I think researchers concerned with natural language meaning should explore.