4/29/2019 Chomsky Lecture with Q&A

Noam Chomsky is a towering figure in our era, and in any era. He has had enormous impact on the fields of linguistics, philosophy, and cognitive science, and by extension child psychology, and he has also had great prominence as a political commentator and activist. He has published, I think, over a hundred books and countless articles. Noam began studying linguistics in the 1940s as a student at Penn, the University of Pennsylvania, where he ended up studying with Zellig Harris, one of the most prominent figures in American structuralism. From 1951 to '55 he spent time at Harvard in the Society of Fellows, where he worked on what ultimately became a major book that went unpublished for many years, The Logical Structure of Linguistic Theory; it used to circulate in mimeograph form and was eventually published, I think, in 1975. After his time at the Society of Fellows he moved along Massachusetts Avenue to MIT, where he taught for most of his career.

In terms of landmark publications, aside from LSLT, The Logical Structure of Linguistic Theory, which I just mentioned, some of the touchstones within linguistics have been the publication of Syntactic Structures in 1957, which was for many people their first introduction to generative transformational grammar, and his 1965 book Aspects of the Theory of Syntax, which fused his theory of generative grammar with a new approach to linguistics in general, one that proposed to integrate linguistics into the cognitive sciences more broadly. I realize I should also mention in this context his 1959 review of B. F. Skinner's book Verbal Behavior, which had enormous influence not only among linguists but also among psychologists interested in the representation of knowledge of language in the mind, and which many people consider one of the landmark events in the decline of behaviorist psychology in favor of the more cognitive psychology we see today. Beyond that, his 1981 book Lectures on Government and
Binding reinvented the field; it came along at a time when I and many of my contemporaries were at MIT. But one of the amazing things about Noam is that he has reinvented the field again and again. This happened again in the 1990s with the development of the minimalist program, and the minimalist approach to linguistic theory has undergone repeated revisions, many of them quite fundamental, as Noam's ideas have developed and progressed. You're all here to hear him and not me, so without further ado, please join me in welcoming Noam Chomsky.

[Applause]

What I'd like to do in these lectures, which are really one continuous talk broken into parts, is to get as far as we can, up to contemporary work and problems if we make it. I'd like to discuss the state of the generative enterprise, as it's been called by some of its leading practitioners: what's been accomplished, what the problems are, what we can hope to see in the future. From the origins of this initiative, which incidentally revived a tradition that had long been forgotten and was unknown at the time, the holy grail was genuine explanations of fundamental properties of human language, of the faculty of language. That's not such a simple matter to capture properly, and to the extent that you can, it has been an elusive goal. I think the present moment is unusual in the long history of the field, 2,500 years, in that this goal now seems perhaps within reach, and if that's the case, it would be a matter of no slight significance, not just for linguistics. These are the questions I'd like to explore in this extended lecture.

To begin with, we have to clarify some basic, highly contested questions: what the field is about, what the nature of the enterprise is. I've personally always found it helpful to rethink these matters over and over; I hope you will too. So let's begin with what sounds like the simplest question, namely: what
is language? That question is plainly consequential: the answer to it will determine what we focus on, what kind of work we do, how we proceed, what counts as a result, and critically, what counts as a genuine explanation. There have been many proposed answers over the years; they differ in interesting ways, and if we think about it a little, the question turns out to be not so simple.

Suppose, for example, we asked the analogous question in some other discipline, say physics: what is the physical world, what is energy, what is mass, what is work? For any such question, the answer we'll get is some technical definition internal to an explanatory theory. We won't get an account of what people intuitively think of as the physical world, or of energy, and so on; that's not the point. We'll find answers within a particular explanatory theory. Suppose we asked biologists, what is life? There it will be a bit more ambiguous, because theoretical understanding has not reached the point where it's obvious what the essential conceptual notions are; it's exploratory. Suppose we ask, what is thinking? Here it gets murkier still. The question was posed by Alan Turing in a famous paper in 1950, which initiated the field of artificial intelligence. His paper is about whether machines think, and he starts off by saying that the question is too meaningless to deserve discussion, so he's not going to discuss it, because the notion of thinking is so vague and amorphous that you can't give a response in the manner you might in, say, physics or even biology. When asked what thinking is, he says it's some kind of buzzing in the head, but there's nothing much more to say than that. So what he does is something quite different: he proposes a notion which, he says, might be somewhere within the range of what people call thinking, and maybe it's a useful notion. He suggests that it is, and in
particular that it might stimulate the development of new software, new machines. That's the famous imitation game, the so-called Turing test.

Let's go on and notice that when you ask the question what is thinking, what is language, what is meaning, what is belief, and so on, the answers you get are really what the philosopher Charles Stevenson once called persuasive definitions: here's what I think is interesting in the general domain of this loose notion, here's something I think is worth looking at. Go back to the Turing test and notice that it's not an attempt to explain or understand anything about meaning or thinking; it's an attempt to simulate some of the aspects of thinking. That's quite a crucial difference. It didn't seem so crucial in Turing's day, but it's highly crucial now, since a good part of what goes on in the study of language and cognitive science, the Silicon Valley version of it, is basically simulation, not an effort to understand and explain. In fact that's the direction that AI and deep learning have taken. There's a lot to say about that, but I'll put it aside unless it comes up later.

So when we ask about these things, we're basically told, here's what I think is interesting. Okay, then the next question is: is it interesting, is it a sensible choice? And if it is a sensible choice, how can you proceed to place it within the framework of some kind of explanatory theory? Insofar as you can do that, you can discuss the validity of the concept that's proposed. Beyond that, there is lively debate about what language is, what meaning is, what belief is, but it's basically a matter of stating preferences; it's not something you can give a definitive answer to. You can ask whether the preference is a sensible one, whether we can develop it, and so on, but there aren't questions of validity or invalidity. Sometimes it's useful to develop a new technical term to make that clearer; that's what I'll be doing here.
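The contrast just drawn, between simulating behavior and explaining it, can be made concrete with a deliberately trivial sketch. This is my own illustration, not anything from the lecture: a lookup-table "conversationalist" in the spirit of the imitation game, whose names (`CANNED_REPLIES`, `imitate`) and canned phrases are invented for the example.

```python
# A deliberately trivial "imitation" program: it maps input patterns to
# canned replies. If it fooled an interrogator, that success would tell us
# nothing about how thinking works -- simulation is not explanation.

CANNED_REPLIES = {
    "how are you": "Fine, thanks. And you?",
    "what is language": "An interesting question. What do you think?",
}

def imitate(prompt: str) -> str:
    """Return a reply by pattern lookup; no model of meaning is involved."""
    key = prompt.lower().strip("?!. ")
    # Fall back to a deflection, a classic chatbot trick going back to ELIZA.
    return CANNED_REPLIES.get(key, "Why do you say that?")

print(imitate("How are you?"))        # Fine, thanks. And you?
print(imitate("Tell me a secret."))   # Why do you say that?
```

Even if such a program happened to fool an interrogator, the success would explain nothing about thinking; in the terms of the lecture, it simulates an aspect of behavior, while the Cartesian-style question of whether the system actually has the property in question is never addressed.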
So let's go back to the question, what is language, and have a look at some of the preferences over the centuries. If you look, I think you can roughly say that they fall into two major categories. One approach to "what is language" considers the concept we're focusing on to be something internal to a person: my language is something that's up here; it's the buzzing that goes on up there, in Turing's terms. That's one concept of language. The other concept of language is that it's something external to any person, which people somehow make use of. The terminology is often imprecise, but I think you can roughly see this distinction, and what kind of work we do and how it's evaluated will all depend crucially on which of these enterprises is undertaken.

A classic illustration of the first kind, language as an internal object, and I think one of its best and clearest exponents, is the great linguist Otto Jespersen, about a century ago. He was actually the last representative of a long tradition. For Jespersen, to quote him, a particular language is a system that comes into existence in the mind of a speaker on the basis of finite experience. This internal system in the mind yields a "notion of structure" that is definite enough to guide the speaker in framing sentences of his own, crucially what Jespersen called "free expressions" that are typically new to the speaker and the hearer. And then there's a more general concern of linguistic theory, and that is to discover what he called the great principles that underlie the grammars of all languages: not generalizations about them, but the principles that underlie them. So that's the first approach; it regards language as a property of the person.

The second approach is illustrated by the structuralist and behaviorist approaches to language of the first half of the twentieth century, still continuing of course, which took language, the object of study, to be a corpus of materials that a field worker would
elicit from an informant, or perhaps an infinite set of sentences, or some other entity that's external to people. Look at the actual formulations. For de Saussure, the founder of structural linguistics, a language is a kind of social contract in a community, some collection of word-images in the minds of the people of the community. Go to the leading American linguist of the first half of the twentieth century, Leonard Bloomfield. Asked what language is, Bloomfield says a language is the set of utterances that can be spoken in a speech community; so, something out there. Go to philosophy of language: perhaps the most influential philosopher of language of the mid-twentieth century, W. V. O. Quine. For Quine, language is a fabric of sentences associated with one another and with stimuli by the mechanism of conditioned response; elsewhere he said a language is an infinite set of sentences which people use. David Lewis, another influential philosopher, took the same view in his important article "Languages and Language": a language is an infinite set of sentences used by a population. Both Quine and Lewis, very good logicians incidentally, concluded that while it makes sense to say that a population uses this infinite set, it doesn't make any sense to say that there's a particular way of characterizing the set; in fact Quine said it would be folly to look for one.

If that's what language is, what's linguistics? Well, linguistics would naturally be a way of taking data, however you get it, typically from an informant, applying various procedures and methods, and getting an organized form of that data. The most sophisticated version of this was, as Tim Stowell mentioned, Zellig Harris's Methods in Structural Linguistics; in Europe, Trubetzkoy's Principles of Phonology was constructed on similar grounds. That characterizes almost completely the structuralist-behaviorist approach to language. But there's something kind of paradoxical about it. What
are these entities, exactly? What is the set of sentences that can be spoken in a speech community? How can members of a population use an infinite set unless they have some way of determining what's in the set and what's out of it? In fact, how can we even coherently talk about an infinite set unless we have a method of characterizing it? So it seems to me that this approach of leading philosophers and logicians was kind of confused; it's really the opposite. If you want to talk about an infinite set, you first have to discuss the internal mechanism for characterizing that set, what's been called an I-language in modern terms. Whatever these ideas from the structuralist-behaviorist period are supposed to mean, which I think is not easy to answer, there's something external to people which people have some relation to. And that has by no means ended: right up to the present there are strong currents that take very similar views, and I think one can ask the same questions about them, including within, roughly speaking, the generative enterprise.

Suppose instead we adopt Jespersen's view, which I will do. Then the linguist is studying something that's in the mind of the speaker: namely, the mature state that has been attained, that has, in Jespersen's terms, come into existence, and also the innate endowment of the speaker, the faculty of language, which makes that possible. First of all, the faculty of language determines what Jespersen called the great principles that underlie the grammars of all languages, and it also makes possible the transition from finite data to the state attained, the I-language in modern terms. The mature state attained is called the I-language, internal language, in technical terms, and the innate principles are nowadays called Universal Grammar, or UG; that's taking a traditional term and adapting it to a new context. The letter "I" in "I-language"
is convenient: it refers to the fact that the internal language is, first of all, internal; secondly, individual; and thirdly, intensional, with an s. We're interested in the actual procedure, the actual algorithm, not the set of things it does. For example, if you're studying a person's knowledge of arithmetic, you want to know exactly how that person carries out addition; you're not talking about the set of triples (x, y, z) such that z is the sum of x and y. Here too we want to understand the generative system in intension.

I should say that with regard to Universal Grammar there's a good deal of confusion, right up to the present, which is worth dissolving. It's very common to hear that UG has been refuted, or that it doesn't exist. What people presumably mean by that is that generalizations about languages have exceptions, which is of course true; that's true of generalizations. But that's not what UG is about. UG in the contemporary sense is about the innate endowment that enables the transition Jespersen talked about, from finite data to the notion of structure in the mind, what we call the I-language. So it should be clear that denying the existence of this is not debatable; it's senseless. If it doesn't exist, language acquisition is magic. There is a kind of coherent version of this common claim, from Tomasello and many others: the coherent version would be to claim that there is some general learning mechanism which has nothing specific to do with language, or maybe some collection of cognitive capacities which somehow integrate to make it possible to achieve the properties of language. But there are a couple of problems with these proposals. One problem is simply that they reduce to hand-waving, or if they're made at all explicit, they're very quickly refuted. A second problem is that you can expect in advance that they're not going to work, for reasons that were discussed by Eric
Lenneberg in his classic book Biological Foundations of Language, some fifty years ago, in which he discussed the fact that there are double dissociations between language and other cognitive processes. This work has since been greatly extended; Susan Curtiss is the person who has done the most extensive work on it, and in fact there are many examples of cognitive capacities intact but no language, and conversely. So it's pretty clear in advance that these proposals are not going to work, but it's nevertheless a widely held view. I don't think it really makes sense, as far as I can see, to claim that there's some problem with UG.

Well, go back to Jespersen, and to the position that I want to continue with. Jespersen was the last representative of a very interesting tradition that originates in the seventeenth-century Scientific Revolution, which set the course of modern science in a sharply new direction. The great thinkers of the seventeenth century, Galileo and his contemporaries and others, simply refused to accept what happens around them as natural, self-explanatory, not requiring explanation. They recognized that the phenomena of nature were puzzling, mysterious, and demanded explanation, whether it was objects falling to the ground, or the perception of a triangle, or anything else. That willingness to be puzzled about phenomena was actually something pretty new. It had happened among the Greeks; a kind of dark ages followed; but it was revived by the seventeenth-century thinkers, and as soon as they began to look around them, they found that everything was really puzzling, that things which seemed obvious and not to require explanation actually did.

Something similar happened in the 1950s. If you go back to that period, linguists generally assumed that everything was more or less understood and that there was nothing general you could say about language. There's a famous characterization by the theoretical linguist Martin Joos, what he called the
Boasian principle, named after the great anthropological linguist Franz Boas: languages can differ in arbitrary ways, and each one has to be studied on its own, without preconceptions. There's nothing to say about language except by applying the procedures to a corpus, and you could do that. So essentially the field was terminal. I was a student at that time, and the general mood among students was: this is fun, but what do we do when we've applied the procedures to all the languages? Then it's over. But as soon as you began to look at the phenomena seriously, to try to construct actual generative grammars that would work, you found out that you didn't understand everything; you understood almost nothing. Everything was a puzzle. It didn't seem as if there would be any termination to the field, and that's what has happened since. By now the field has just exploded, and the kinds of problems that students are looking at today couldn't even have been formulated, let alone dealt with, not many years ago. That's an enormous change.

Well, let's go back to the seventeenth century. Among the many phenomena that intrigued and puzzled Galileo and his contemporaries, one was language. They were struck by, and expressed their amazement at, the quite remarkable fact that, as they put it, with just a few symbols, a couple dozen symbols, it's possible to express an infinite number of thoughts, and to convey to others, who have no access to our minds, all the workings of our minds. And they asked how that magical accomplishment could be achieved. To quote the Port-Royal grammarians, who put it evocatively: they were awed by the method by which we are able to express our thoughts, "the marvelous invention by which, using twenty-five or thirty sounds, we can create the infinite variety of expressions which, having nothing in themselves in common with what is passing in our minds, nonetheless permit us to express all our secrets, and allow us to understand what is not
present to consciousness" (a crucial point), in effect everything we can conceive, and the most diverse movements of our soul. If you stop and are willing to be puzzled, it is a pretty amazing fact; it by no means seems natural. We do it all the time, but it is quite amazing. Furthermore, there's nothing similar to it in the organic world, which they recognized, and that raises very crucial questions: how did this unique human achievement come about, and how can it be understood and explained?

For Galileo, the alphabet was, he said, the most stupendous of all human inventions, comparable to the achievements of a Michelangelo. The reason was, first, that it captured this amazing property, and second, that it allowed us to express all the wisdom of the ages, and beyond that, it included the answers to any question we could pose, all in this small collection of symbols. It was kind of like what we would call these days a universal Turing machine. The Port-Royal Grammar and Logic, which followed shortly after, gave many serious insights into logic and linguistics; the Logic became the basic logic text for many centuries, and it initiated a tradition in linguistics of what was called rational and universal grammar: rational because it was seeking explanations, not descriptions; universal because it was trying to find the principles that underlie all languages, Jespersen's great principles that underlie the grammars of all languages.

The traditional formulations are not precise, but I think it's fair to interpret them as recognizing that the capacity for language, as well as individual languages, are internal properties of persons. Jespersen says so quite explicitly. It was also generally assumed, without much evidence, but as we now know quite reasonably, that this capacity, whatever it is, is a human characteristic shared among all human groups; there are no known group differences in this capacity. Furthermore, it is unique to humans in all the central respects;
there's nothing like it in the organic world. So it's a true species property, and, as they recognized, it's the foundation of human culture and human creativity.

These ideas actually had a very substantial impact on philosophy and on general intellectual culture, mainly through the influence of Descartes, who adopted similar views at roughly the same time. Descartes's famous dualistic approach, the idea that in addition to the material world there's also a mental world, was based very substantially on the recognition that this unique capacity, the ability to create an infinite number of thoughts, is somehow unique to humans and cannot be captured by machines. Machines, for early modern science, Galileo through Newton and beyond, meant the kinds of artifacts that were being created by skilled artisans and were proliferating all over Europe: very complicated artifacts that could do all sorts of intricate things. Their approach to science was: that's what everything is. It's called the mechanical philosophy; philosophy of course meant science, so mechanical science. That's real explanation. The criterion of intelligibility for Galileo and his contemporaries was the ability to construct, or at least to devise in principle, a machine that could account for something; if you could do that, you had an intelligible theory. But Descartes recognized, quite correctly, that this amazing capacity couldn't be captured in those terms, so he postulated a second substance, res cogitans, thinking substance, which would capture this capacity somehow and be linked to the material world. That's Cartesian dualism.

The Cartesian scientists took this to be a scientific program, and it was perfectly sound science, based on correct observations about the limits of mechanical objects. They took the natural next step, especially Géraud de Cordemoy, a leading Cartesian philosopher-scientist: Cordemoy designed a series of experiments to determine
whether some other creature could exhibit the capacities that a human can exhibit. This sounds rather like the Turing test, but with a crucial difference: Turing was trying to find something that would simulate aspects of human behavior, whereas Cordemoy was pursuing a scientific project, rather like a litmus test for acidity: does some other entity, some other object or organism, have a particular property? It's a similar-looking project, but it's very different in character; real science.

What happened to these developments? Their fate is commonly misinterpreted. It's often believed that science, as it developed, got rid of what Gilbert Ryle called "the ghost in the machine," the second substance. But what actually happened is the exact opposite: Isaac Newton, much to his dismay, exorcised the machine but left the ghost intact. Newton showed that there are no machines, that the material world simply cannot be captured in mechanical terms, because of interaction without contact, which is inconsistent with the mechanical philosophy. Newton himself regarded this result as a complete absurdity, one that no one with any scientific understanding could contemplate. He agreed with the other great scientists of his day, Leibniz, Huygens, and others, that this is utterly absurd, but he couldn't seem to find a way out of it. So the end result is that we have theories, like Newton's, that we can understand, but no intelligible world; what the theories describe is simply unintelligible. That was understood, and science just changed: it stopped seeking intelligible accounts of an intelligible world and moved to the weaker objective of finding intelligible theories of the world, which is quite different, and quite unacceptable to early modern science. It's a major shift in intellectual history, and it was very well understood at the time. Shortly after, David Hume, writing in his History of England, which has a chapter on Newton,
the greatest genius in history as Hume regarded him, says that while Newton seemed to draw the veil from some of the mysteries of nature, he showed at the same time the imperfections of the mechanical philosophy, and thereby restored nature's ultimate secrets to that obscurity in which they ever did and ever will remain. And they do in fact remain in obscurity; science just stopped looking for them after some period. John Locke, shortly after Newton's great treatise, the Principia, appeared, carried the inference further in a highly consequential way. He expressed it within the theological framework of the day, but we can change the terms; the point remains correct. He argued that the incomparable Mr. Newton, as he called him, had demonstrated that God had added to matter properties that are inconceivable to us, specifically interaction without contact, and so perhaps God had "superadded" to matter the capacity of thought: thought as a property of certain kinds of organized matter. That idea was pursued extensively through the eighteenth century and into the early nineteenth century; Darwin mentions it in his notebooks. It was then forgotten completely, and it has been revived in recent years as what's called a radical new idea in the philosophy of mind. It's now a commonplace of the cognitive and brain sciences, picking up a forgotten tradition that followed directly from Newton's demonstration that there are no machines. That's a crucial part of intellectual history, not too well understood, and it's worth remembering that, as Hume correctly recognized, Newton had in fact left these issues in the mysteries and obscurity in which they remain. That's quite an interesting question, but let's put it aside.

Let's go back to the tradition of rational and universal grammar, culminating in Jespersen. All of that was swept aside completely by the twentieth-century behaviorist and structuralist currents, which typically, in fact I think universally, adopted the second approach I mentioned, taking language to be something external to
people, something which people somehow grasp. The whole tradition was totally forgotten, and it's still largely unknown, which is unfortunate; I think there's a lot of wealth and richness there. It was so thoroughly forgotten that even Jespersen, a famous linguist of the early twentieth century, was gone. There's an interesting article by a historian of linguistics, Julia Falk, who reviews this and points out that even the major linguists, Bloomfield and others, knew essentially nothing about it.

That general program, running roughly from Galileo to Jespersen, falls within the natural sciences. It was revived with the generative enterprise in the early 1950s; it's called the biolinguistics program. But it should be understood that this is only one current within the generative enterprise; much of the ongoing work within the generative enterprise does not accept this internalist view. That's the one I'll keep to, though.

The early efforts in the tradition ran into plenty of difficulties, empirical difficulties and conceptual difficulties. The empirical difficulty was that there wasn't enough evidence; the understanding of language was pretty thin. The conceptual problem was that there was no way of really understanding how this notion of structure in the mind, which enables the achievement of expressing an infinite number of thoughts, could be captured. What was there, as they recognized, is what we may call the basic property of language, reformulating it in our terms: somehow this notion of structure in the mind is capable of generating an infinite array of structured expressions, each of which captures a thought, to the extent that we understand the notion of thought, and each of which can be externalized in some sensorimotor modality, typically sound, but as we now know very well, it could be some other modality: it could be sign, which is virtually identical to speech, and even, with some reservations, touch. So the modality is basically irrelevant, a
matter of some significance that we'll come back to.

By the mid-twentieth century the conceptual problems had been overcome; that's why the generative enterprise was able to take off and revive the tradition. Turing, Gödel, Post, and of course other great mathematicians had given a precise, clear understanding of what became the theory of computability, which allows us to understand very clearly how it can be that a finite object, like the brain, or your laptop for that matter, can capture within it the basic property. That's now well understood, which means you can proceed with the enterprise that had lapsed with Jespersen. You can deal with what I've sometimes called the Galilean challenge, the original formulation of what the field, I think, ought to be about; a persuasive definition, again.

If you want to meet the Galilean challenge, there are several tasks. The first task is to try to discover the I-languages for languages of the widest possible typological variety; a huge task, of course. Second, having done that to the extent you can, you can turn to the next, theoretical, problems. The first is to determine how a speaker of a language, when producing a sentence, selects a particular expression from the infinite set generated by the I-language. The next question is how that expression is externalized in some sensorimotor system. A third question is the inverse, for the hearer: how is the expression processed, mapped from something in, say, sound to an expression of the I-language? The second and third tasks are input-output problems, the kinds of problems we know how to handle, and in fact a great deal has been learned, particularly about processing, and also about the externalization of the internal object generated. What about the first task: how does the speaker select an expression from the infinite array generated by the I-language? That's a total mystery; there's nothing to say about it. That's in
fact true of voluntary action generally, of which this is an instance: there's basically nothing to say about it. This fact is captured, rather fancifully, by two of the leading neuroscientists who deal with voluntary action, Emilio Bizzi and Robert Ajemian. In a recent review of the state of the art in the field of voluntary action, they say that we're beginning to understand the puppet and the strings, but we have absolutely nothing to say about the puppeteer; we can't say anything about why one or another action is selected. In particular, that holds for the first task. So there's another mystery that is so far beyond us; there aren't even bad ideas about it. There's nothing to say.

The I-language is clearly a property of the individual, by definition, and the same is true of the faculty of language: although it's a shared property of humans, with insignificant variations, it is in fact a property of each individual. And the faculty of language faces two empirical conditions, two crucial conditions that have to be met by any theory of this internal system. One is the problem of learnability; the second is the problem of evolvability. The faculty of language has to be rich enough to account for the properties of all languages, and even more strikingly, for the remarkable leap from finite data to the internal system, a leap carried out by the faculty of language; it has to be rich enough to overcome the very acute problem called the poverty of the stimulus, which is often unappreciated. So one demand on the faculty of language is that it be rich enough to achieve these goals. But it also has to be simple enough that it could have evolved, to meet the evolvability condition, and more specifically, to have evolved under the very specific conditions of the evolution of language. These two goals are, at least on the surface, antithetical: you
enrich it, and you make the problem of evolvability harder, and conversely; and a lot of the field over the last years has been some kind of effort to overcome this apparent conflict. So let me stop for a second and talk about the specific conditions under which language evolved, which make the problem much harder and more striking. In general, very little is known about the evolution of cognition; it's a very hard topic to study. One of the leading evolutionary biologists, Richard Lewontin, has a famous article in the four-volume MIT Invitation to Cognitive Science; he wrote the article on the evolution of cognition, and his basic conclusion is: I'm sorry, you guys, you're never going to learn anything about it; it just can't be handled by the techniques available to current science. Notice it's not that it's a mystery in the sense of the other things I mentioned; it's just that it's beyond the possibilities of research. If you had, say, tape recordings from a hundred thousand years ago, maybe you could learn something, but we don't, and we're never going to get them. So his conclusion was basically: nothing to say about it. There's a lot to what he said, and it's worth thinking about, but I think it's a little too pessimistic. There are a few things that have come to light, and they're kind of suggestive. One is that we know that modern humans, anatomically modern humans, appear roughly two to three hundred thousand years ago; there's plenty of fossil evidence that shows that, essentially in that range. It's now known that human groups, which were very small at the time, began to separate roughly two hundred thousand years ago, and the groups that separated have the same faculty of language, as far as we know: the San people in Africa, the first group that separated. Well, that tells us that the faculty of language was already established not very long after modern humans appeared, so it seems essentially a property of modern humans as they appear. Notice that these
periods of time are extremely small from the perspective of evolutionary time, which doesn't deal with notions like tens of thousands of years; so essentially we can say that this species characteristic appeared along with modern humans. A second fact, known from the archaeological record: prior to the appearance of modern humans there doesn't seem to be any serious evidence of any kind of symbolic activity in the archaeological record, and not long after the appearance of humans you start getting quite a rich record of complex symbolic activity; Blombos Cave in South Africa is the most famous example. There's further work on this by a very fine linguist, Riny Huybregts, whom most of you know, who pointed out that the earliest separation, roughly maybe a hundred fifty to two hundred thousand years ago, of the San people in Africa: although they have, as far as we know, the same faculty of language, they have a somewhat different form of externalization. As he showed, these are all and only the languages that have complex click systems; there are a few exceptions, but I think he actually managed to show that they're meaningless. So what that suggests, as he points out in his article, is that the faculty of language developed prior to the separation, but externalization took place after the separation, in slightly different ways. There's some minor physiological adaptation involved in click languages, like a change in the palate, but not very much. Well, that's all very suggestive. If you put all this together, what it strongly suggests is that whatever emerged along with modern humans and yielded the faculty of language must have been very simple, something that nature would hit upon immediately as soon as some small rewiring of the brain made possible this task of satisfying the Galilean challenge. That converges with developments that have been taking place within the generative enterprise in
quite a suggestive and important way. Well, to fix ideas: suppose that in fact something simple did develop along with modern humans and yields the faculty of language. We would expect it to be very simple in structure, including very simple modes of computation, which would satisfy the evolvability condition for a genuine explanation. Then what remains is to fix a language. Well, an individual has to fix a language on the basis of data, and there must be some way to do this on the basis of very simple evidence. The reason is that we know now from psycholinguistic studies that acquisition of the essentials of language has already been carried out very early, in fact about as early as you can test: two- or three-year-olds have enormous understanding of the fundamental principles of language; I'll mention some examples of that. And the evidence available to them is very limited. Maybe they've heard a few million sentences, but that gives you extremely little evidence. That's been shown very well in careful statistical work, especially by Charles Yang, who pointed out and showed that when you look at the effect of what's called Zipf's law, the rank-frequency distribution of words, it turns out that almost all the evidence that children are getting is just repetitions of very few things; even bigrams are barely repeated in millions of sentences, trigrams almost never, very rarely. So the evidence is really very slim, and the knowledge that's acquired is very rich. We conclude in general that what we expect to find is a very simple faculty of language, and the actual acquisition of language should be based on some kind of capacity to pick out what's significant and important from quite impoverished data. That's what you'd anticipate; and more generally we would of course always, in any field, be looking for the simplest theory; that's simply a general fact about explanation.
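Yang's point about how thin the child's evidence really is can be illustrated on a toy corpus. This is a sketch only; the corpus and the helper function are invented for the example:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams (length-n windows) in a token sequence."""
    return Counter(zip(*(tokens[i:] for i in range(n))))

corpus = "the dog saw the cat and the cat saw the dog again".split()

words = ngram_counts(corpus, 1)
trigrams = ngram_counts(corpus, 3)

# Zipf-style skew: a few top-ranked words account for most tokens...
print(words.most_common(2))

# ...while nearly every trigram is a one-off, so higher-order evidence
# is extremely sparse even here, let alone in a few million sentences.
singletons = sum(1 for c in trigrams.values() if c == 1)
print(singletons, "of", len(trigrams), "trigrams occur exactly once")
```

Even in this twelve-word corpus, unigram counts are heavily skewed while every trigram occurs exactly once; at realistic corpus sizes the sparsity of higher-order evidence is far more extreme.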
It's clear that as the foundations of a theory become simpler, its explanatory depth increases; so if science is interested in explanation, not just simulation, it will of course be looking for the simplest theory. There's a second reason for looking for the simplest theory, which is a kind of precept that goes back to Galileo again, who simply urged that we accept the idea that nature is simple and it's the task of the scientist to show it, whether for the study of falling bodies or the flight of eagles or whatever. That's of course just what's called a regulative principle, a precept, but it's one that's been spectacularly successful in the sciences, so it's simply taken for granted in the sciences, and there's every reason for us to take it for granted too. And thirdly, for linguistics, there's a third reason to expect a very simple theory of the faculty of language, namely the specific conditions under which this faculty appears to have evolved. Notice that it's often argued that evolution violates Galileo's precept: evolution is what François Jacob called tinkering, bricolage, which tries lots of different things and ends up with very complex objects. Whatever one thinks of that, it doesn't seem to apply to the special case of the evolution of language, simply because of the specific conditions under which it seems that language evolved. Well, considerations like these arise very clearly in the development of the minimalist program, to which I'll return, but the point I want to emphasize here is that learnability and evolvability provide the conditions for genuine explanation; that's the holy grail. These are the conditions for meeting the Galilean challenge. So genuine explanation will of course be at the level of UG, and it will be in a form that meets the demands of learnability and evolvability. That's a very austere requirement: anything short of that is not a genuine explanation; anything short of that is a partial account, maybe a useful one, but not
an explanation; it's a way of presenting material, a problem to be solved. That's a very important endeavor: it's much better to have some organized presentation of some carefully structured problem; that's a great advance over just chaos, of course. So it's by no means denigrating those achievements, but we should not confuse them with genuine explanations. Well, any device that's introduced in linguistic description, any device to deal with some problem, whatever the problem is, has to be measured against these two conditions: is it learnable, is it evolvable? I think we're finally, maybe, in a position today to take the Galilean challenge seriously, which if true is quite important; that's a new step. So just to illustrate with a concrete example, to which I'll return if there's time: there's a very interesting paper by a very good linguist, Željko Bošković, whom most of you know, on the coordinate structure and the adjunct island constraints. In the paper he points out that each of these, the coordinate structure island and the adjunct island, poses many problems, many mysteries; but his paper attempts, and I think in a way succeeds, in trying to show that these two collections of mysteries are actually the same mystery. What he does is try to reduce the adjunct island constraint and the coordinate structure constraint to a single mystery, relying on neo-Davidsonian event semantics, which in fact treats adjuncts as coordinates. So based on that idea, you can take two collections of mysterious phenomena and put them together into one collection of mysterious phenomena, which is a significant advance: that leaves the mysteries, but they're now more susceptible to successful inquiry. And I think that virtually every achievement in the field is pretty much like that; it manages to reduce some collection of mysteries to a simpler and more manageable collection, which is a major achievement, but it's not genuine explanation. So we're still searching for the holy grail. At least, all of this is the way
things look within the biolinguistic program; if you're pursuing a different enterprise, there are different considerations. The first proposals, and now I'm talking to linguists who know all this, the first proposals back in the early 50s were basically dual: there were two different problems that had to be faced. One was the problem of compositionality, how you put structures together; the other was the very puzzling property of dislocation: expressions are heard in one position but they're interpreted both there and somewhere else. So in "what did John see", you interpret the wh-phrase, the "what", as a quantifier ranging over the whole thing, but you also interpret it as the object of "see", where you don't pronounce it. That's a ubiquitous property of language, with very complex cases that have been studied over the years. Well, the proposals back in the 50s were two different kinds of mechanisms: a phrase structure grammar for compositionality, a transformational grammar for dislocation. If you look back at the proposals, each of these systems was much too complex to meet either the condition of learnability or of evolvability, that is, to provide genuine explanations. That was understood, but it was very unclear what to do about it. It was generally assumed at the time that compositionality is something natural that we can kind of handle; dislocation seems very strange. You don't build dislocation into formal systems, for example; it's just something that seems unique to language, and this very weird property of language was considered what's called an imperfection of language: somehow, for odd reasons, it adds this complex notion. That's still widely believed, but I think it's exactly the opposite of the truth. As research has progressed, it turns out first of all that these two apparently different properties can be unified, and that the more primitive of them in fact is dislocation. I'll come back to that, but it seems that the most primitive operation is dislocation.
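The dislocation pattern in "what did John see" can be sketched mechanically: the wh-phrase occurs in two positions, but only the higher copy is pronounced, while the lower copy is still there for interpretation. This is a toy sketch; the list-based representation and the function names are my own, purely illustrative, not an implementation of any specific proposal:

```python
def internal_merge(struct, target):
    """Re-merge `target` at the root; the original occurrence stays in
    place as a silent copy (marked with a tuple), still available for
    interpretation, e.g. as the object of the verb."""
    def silence(node):
        if node == target:
            return ("copy", node)          # interpreted, not pronounced
        if isinstance(node, list):
            return [silence(c) for c in node]
        return node
    return [target, silence(struct)]

def spell_out(struct):
    """Externalization: pronounce every leaf except silent copies."""
    if isinstance(struct, tuple):          # a silent lower copy
        return []
    if isinstance(struct, str):
        return [struct]
    words = []
    for child in struct:
        words.extend(spell_out(child))
    return words

base = ["did", ["John", ["see", "what"]]]
print(spell_out(internal_merge(base, "what")))
# heard in one position, interpreted both there and as the object of "see"
```

The full structure keeps both occurrences (that's what feeds interpretation); externalization simply leaves the lower one unpronounced.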
Compositionality is considerably more complex, although they can be unified into a single operation, something I'll want to come back to. Well, turning to a couple more comments: phrase structure grammar was very quickly recognized, by the 1960s, and here I'm talking within a particular current of the generative enterprise, others don't agree, but within this current it was quickly recognized that phrase structure grammars are completely unacceptable; they're way too complex. A phrase structure grammar, for one thing, allows totally impossible rules: there's nothing in a phrase structure grammar that says you can't have a rule, say, sentence becomes preposition verb-phrase, or anything else you can imagine. So it just allows a huge number of rules that are completely impossible, which means there's got to be something fundamentally wrong with it. Also, I think in retrospect we can now see that phrase structure grammar conflated three quite different notions: one is the notion of hierarchical structure; a second is the notion of linear order; a third is the notion of what was called projection, how you decide whether some unit you've formed is a such-and-such. And over time it's been recognized that these are quite different properties. Well, a step was taken by the late 60s to overcome at least some of these problems, with the development of what was called X-bar theory. I won't discuss it; I assume you know what it is. But X-bar theory did have a number of consequences which, interestingly, were not really understood very clearly at the time; Tim will remember this. For one thing, X-bar theory keeps the structure but has no order, so you have the same X-bar theory, in effect, for say English and Japanese, which are close to mirror images. The significance of that wasn't really entirely grasped. What it tells you in the first place is that you have to have a principles-and-parameters approach: there can't be a rule system, at least for compositionality. It took some years for that to kind of settle in, but it's immediate once you look at X-bar theory.
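Returning to the point above about impossible rules: nothing in the phrase-structure formalism itself blocks them. The machinery below accepts a linguistically impossible rule just as readily as the natural ones. A minimal sketch; the toy grammar and the helper function are invented for the example:

```python
grammar = {
    "S":  [["NP", "VP"],
           ["P", "VP"]],   # "sentence -> preposition verb-phrase": impossible
                           # in any language, but the formalism doesn't object
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
}

def expand(symbol, s_rule=0):
    """Expand a symbol top-down, using rule `s_rule` for S and the first
    rule everywhere else; symbols with no rules are terminals."""
    if symbol not in grammar:
        return [symbol]
    rule = grammar[symbol][s_rule if symbol == "S" else 0]
    out = []
    for child in rule:
        out.extend(expand(child, s_rule))
    return out

print(expand("S", 0))   # the natural expansion
print(expand("S", 1))   # the crazy one, generated just as happily
```

The constraint against such rules has to come from somewhere outside the formalism, which is one way of seeing that phrase structure grammar is the wrong level of description.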
So take, say, English and Japanese: essentially the same X-bar theory, but there has to be something distinguishing them, something that says the order goes one way in one language and the other way in the other language; but that's the principles-and-parameters approach. Furthermore, if you look at that parameter, you see that it doesn't affect the meaning of the sentence: whether you have a verb-object or an object-verb language, the meanings are exactly the same; the theta structure, the argument structure, is the same. That at once suggests that the parametric difference, the linear order, simply doesn't have anything to do with the core of language, namely the construction of an infinite number of thoughts. To put it in more technical terms, it doesn't feed the conceptual-intentional level, doesn't yield semantic interpretation. That's an observation which has many consequences if you think it through. Well, it's elaborated in later work in other ways, but it's already a bit of a hint that somehow things like linear order and other aspects of externalization don't, strictly speaking, belong to language; a lot of consequences, though, when you think it through; I'll return to it. Well, these are some of the consequences of looking at X-bar theory; they should have been recognized instantly, but gradually came to be realized later on. X-bar theory, however, does have problems; I'll mention these and then put the rest off till later. There's a fundamental inadequacy of X-bar theory which was not recognized: it still conflates projection and compositionality. It does separate order, but it still conflates those two, and that runs aground as soon as you look at exocentric constructions, which are unacceptable: X-bar theory rules out exocentric constructions. So you can't have, say, subject-predicate, and you can't have any movement, because all movement yields exocentric constructions: if you have wh-movement, it gives you a construction consisting of the wh-phrase and the CP, just two structures, neither one dominant.
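The earlier point about the order parameter can be sketched as a pure linearization choice: one hierarchical structure, two surface orders, the same head-argument relation. A toy sketch with an invented representation, not a formal proposal:

```python
def linearize(tree, head_initial=True):
    """tree is either a word or a (head, complement) pair; the hierarchy
    carries the argument structure, and the parameter only fixes which
    side of its complement the head is pronounced on."""
    if isinstance(tree, str):
        return [tree]
    head, comp = tree
    h = linearize(head, head_initial)
    c = linearize(comp, head_initial)
    return h + c if head_initial else c + h

vp = ("eat", "apples")   # [VP eat apples]: hierarchy only, no order yet
print(linearize(vp, head_initial=True))    # English-like: verb-object
print(linearize(vp, head_initial=False))   # Japanese-like: object-verb
```

Since interpretation reads the hierarchy and never the linear order, flipping the parameter changes nothing semantically, which is the hint that order belongs to externalization, not to the core system.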
Subject-predicate: if you accept, say, the predicate-internal subject hypothesis (Sportiche, Koopman), you have a nominal phrase and a verb phrase, but they're just two parallel phrases. Well, a lot of artificial devices were constructed within X-bar theory to try to get around this and to give you what you intuitively know is the thing you're after, but that's not allowed; that's trickery. So there was a fundamental problem with X-bar theory that was finally resolved only recently, a couple of years ago, by the development of labeling theory, which finally separates this problem of projection from compositionality; it separates all three. It tells you when some operation of dislocation must take place, when it may take place, when it need not take place. So that finally breaks up the conflation of the three notions that were mixed up in phrase structure grammar; a lot of interesting results, plenty of interesting problems. Well, that brings us up to about the 90s, so why don't I stop there and go on next time. [Applause] We're going to have a question period, but we want to have a two-minute break, just to give people who don't want to stay for the question period a chance to get up and leave, and then we'll move into questions. We have two microphones and two people holding the mics, so when you're ready to ask a question they'll come and bring you the mic. OK, I think we're ready for questions now; the floor is open. Question over there, on this side. Hi, Bardia Bashir, a UCLA alumnus. You mentioned the issue of interaction at a distance, or the lack of actual contact, which was a big issue in physics at some point in the past. Now, from my understanding, fields, like electric fields and magnetic fields, the concept of field, is being increasingly replaced by particles, in other words bosons; so everything is a
force interaction, particulate force interactions between matter; so there's matter particles and force particles. Does that change in any way this notion of action at a distance; does it clarify it in any way? So he referred to your mentioning of action at a distance, and he said that currently there's a trend in physics towards using particles rather than fields; does this affect it? I mean, a lot of contemporary physicists have argued that the absurdity that Newton perceived has been overcome: first by relativity, the reinterpretation of gravity in terms of curved space-time; second by approaches in terms of gravitons, which you're talking about. But this doesn't do any good, because all of these are notions that Newton, Leibniz, Huygens, Galileo would have regarded as exactly as absurd as action at a distance: they introduce other departures, radical departures, from the mechanical philosophy, which was the criterion of intelligibility. So yes, they do overcome action at a distance, but by presupposing other notions which are even further from intelligibility; so it doesn't do any good. Newton, Locke, and Hume were really on to something, I think. I mean, there's only sort of two ways you can look at this: you can either assume they were all just stupid and we're smarter now, but I think we can put that aside; or you can assume that there was really something to it; or you can assume that we've learned something that they didn't know, but I don't think that's correct. What we've learned is other ways of dealing with the problem, which don't deal with the mysteries but do solve the problems theoretically in a superior way. A question over here, fourth row. So how soon do you expect to see artificial intelligence, like machine learning, reaching a point where virtual assistants such as Siri and Alexa are capable of more rich language processes, that may involve sort of a natural construction, and even become more self-aware, per se, and form their own
speech community? So, will robots take over, basically. Could you go a little slower? Actually, you can eliminate the translator. All right: so how soon do you expect to see artificial intelligence, such as machine learning, reaching a point where virtual assistants such as Siri, Alexa, and Cortana are capable of much more rich language processes, that may involve more of a natural construction of language, maybe more self-awareness, and build their own speech community, so like interactions between AIs? Oh yeah, so the question is: given developments in artificial intelligence, how soon do you expect things like Alexa and Siri to reach a point where they can actually have realistic properties of human language, and maybe even develop spontaneous speech communities? Well, this is a question that is seriously discussed. One of the very few serious scholars of 17th-century science and philosophy, and one of the people who really understands what Descartes was doing, John Cottingham, has suggested that the current approaches to, say, the Turing test, in terms of deep learning, recurrent neural networks, other devices like the Siri-Alexa sort of thing, could actually reach the point where they could pass the Turing test. And he says, okay, that would solve Descartes's problem; but it wouldn't have solved it for Descartes, or I think for any 17th-century scientist, because they might have been amused, but they would not have been impressed, by simulation of behavior. They were trying to understand it; that's a crucial difference. To say, okay, I've got something that simulates what people do: kind of amusing, but it tells us nothing. We're interested in how it's actually done, not in something else that kind of looks like it. It's kind of as if you imagined a form of physics which said: let's forget all about physical theory, we don't need all that nonsense, too
complicated; let's just take billions of videotapes of, say, leaves blowing in the wind, and we'll find patterns there, maybe patterns that humans don't even see, because with these networks you can find all kinds of other patterns, and that'll be able to predict very well what the leaves will be doing next time we look at them. Well, how would a physicist react to that? Kind of amusing, but it tells us absolutely nothing; we're not interested. In fact, suppose the person who advocated this actually did come from Silicon Valley, and he said this is better than what physicists do, because this can actually predict what is happening next, where physicists can tell you absolutely nothing about how the leaves are going to blow in the wind; physics can't begin to deal with it, isn't interested in it. Well, that's the difference between simulation and understanding. For the scientist, what's important is to try to figure out the principles by which this is actually happening, not just to simulate somehow what's going on with lots and lots of statistical analyses of the phenomena of the world. For linguistics this is a very important issue, because a lot of the work on language these days is of the simulation variety. You read articles in cognitive science journals saying: we can do a really good job of matching the acceptable sentences in a corpus by, say, deep learning methods, looking at millions of examples. Maybe you can, maybe you can't, but it's of zero interest intellectually. It may be of some engineering utility: Alexa's helpful, the Google translator, you know, it's worth having; but it has nothing to do with science, which is interested in understanding, not simulating. In fact, take a look at things like corpus linguistics, or Silicon Valley linguistics, simulating the acceptable sentences in the Wall Street Journal corpus, and transfer that over to the context of scientific inquiry. So take
a matching of acceptable sentences. Suppose we have the Wall Street Journal corpus, and let's say those are acceptable sentences. You can think of each of these sentences as an experiment, a random experiment, saying: here's the result of an experiment, this sentence is acceptable. In fact you could run it as an experiment; you could show it to a group of subjects and they'd say it's acceptable. Well, suppose in physics you could match a huge number, tens of millions, of experiments that are randomly selected: of absolutely no interest. It doesn't make any difference. What you're interested in are those exotic experiments that yield something significant, maybe experiments you can't even carry out, like what would happen to a ball rolling down a frictionless plane; you can't do it, but that's the kind of experiment that matters. And that's what all science has been about for hundreds of years. I mean, the idea that we're kind of regressing to the period before the Scientific Revolution, when it might have been considered acceptable to just match a lot of the phenomena of the world, that's a very surprising, striking fact; it tells us something very odd about the current intellectual culture. I think it's really worth thinking about, and it's very significant for those of you who are students, because that's where the jobs are, unfortunately, the money and the jobs. So you mentioned the difference between simulation and explaining something; but with something as spontaneous and flexible as human language, which doesn't have a lot of the same restrictions that something like physics has, wouldn't you say that it would be impossible to really replicate the way that humans speak without in some way understanding it? Like, if you were able to program a machine to truly speak like a human, rather than simply repeat pre-programmed sentences, you would have to understand the processes that allow those multiple, infinitely recursive sentences to
be created; because with human language there's an infinite number of things you can say, in an infinite number of ways, and arguably, if you were going to have a machine be able to do that, you would need to understand it first. So attempting to program that is at the same time attempting to understand the processes that go on within the human brain that allow us to create that. Right, the question said: in order to do a really credible imitation, surely the person designing the imitating machine would have to fundamentally understand how it works with humans, in order to create something that is a plausible imitation. And, maybe I haven't got it exactly right, but it was addressing the fact that human language can be infinitely recursive and spontaneous; a machine that was able to use that aspect of language and create new thoughts and sentences and ideas the way that humans can, rather than simply repeat. So, in order to really simulate effectively, you would have to understand what's being done. I think so too, but that's the opposite of the guiding mentality in the study of deep learning and so on. Their approach is just: simulate it, and if you can simulate it, you're finished. In fact, some of the more extreme advocates, and you can read it in journals like Wired magazine if you're interested in that kind of thing, say that we can really get rid of the sciences: the sciences are just a rough approximation to reality, and reality is the description of phenomena. I basically agree with what you're saying. There is an interesting book coming out by Gary Marcus, who is a sympathetic but knowledgeable critic, which essentially makes your point: it argues that AI is going to hit a dead end, because its goal is simply simulation instead of understanding. Now, from an engineering point of view, that really doesn't matter: if it works, it works; if the Google translator enables you to understand the article in Greek, good. There's nothing wrong with bulldozers and
so on; everybody likes them. But it's just not science, and you're probably right that it will hit a dead end; in fact it's probably already hitting dead ends. There's some research underway which will probably show, when it's finished, that there are deep reasons why these systems can't capture basic properties of language. It's a little hard to prove, because if you look at the systems, say recurrent neural networks, they're very opaque; it's very hard to figure out what's going on inside them, so trying to prove anything about them is a hard task. Now, from the point of view of the advocates of the systems, that just doesn't matter, because they're interested in simulation anyway; but if you really want to answer the kinds of questions you're raising in a fundamental way, it'll be necessary to explore the internal workings of the systems and try to determine what properties they have and what properties they don't have. So to take a concrete example, take, say, the Google parser. If you read the propaganda from Google, the people who run the research programs, they tell you straight out, literally, that the problem of parsing has been solved and we can turn to other problems; the reason is that the Google parser works for 95% of the sentences in the Wall Street Journal corpus, let's say. From the point of view of a scientist, that tells you zero. You look at those five percent that it can't deal with: they're mostly the critical experiments. If you bring up other questions, like how do you handle parasitic gaps, the answer is: well, who cares, they never occur anyway. Which is true, they don't; but just as in physics, those are the things that tell you something. People should really look at the early history of science; it's instructive. Go back to the 17th century: there's a question that bothered Galileo and others. Suppose you have a sailboat going through the ocean, and you have, let's say, a small solid
mass at the top of the mast of the sailboat. When the sailboat is moving and you let the mass fall, is it going to fall to the base of the mast or behind the mast? Well, the belief at the time was that it'll fall behind the mast, because the sailboat is moving along. Now suppose you were to experiment with this, with sailboats and masses on the top of the mast: you'd find some chaos of points all over the place where the ball falls, and maybe if you did ten billion of these you could conclude, well, maybe it mostly falls at the base of the mast with a scattering around that. But that's not the way they solved the problem. They solved the problem basically with thought experiments, by assuming abstractions that you just can't find in the physical world, like a sailboat moving without any perturbation of wind and current or anything else; and then when you think through the problem, you see it's got to fall at the base, because the mass at the top is accelerating along with the boat. That's the kind of problem that yields results. That's been true of all of science up to the present: take, say, the double-slit experiment, or the famous experiment of Schrödinger's cat; nobody's ever carried it out, probably couldn't carry it out, but it's had an enormous impact on the development of quantum theory, right to the present. That's how things work in the sciences, not by simulation of phenomena. And you're probably right that you're not going to get really good simulation unless you really understand the deeper problems; but it won't matter from the engineering point of view: if you don't get parasitic gaps, who cares, they never happen anyway. Another question, from our first questioner. Do you think the one mutation, or series of mutations, that resulted in human beings acquiring the language capacity, do you think any of those mutations can ever be identified, with respect to other species, so that we can know exactly what happened in our brains? Do you think
the one or several genetic mutations that made it possible for humans to acquire language will ever be identified, presumably at the genetic level, and compared with other species, so that we can understand exactly how humans diverged? Are you by any chance a biologist? OK. If you look at just standard texts on developmental biology, they point out that the task of determining the genetic basis for a simple trait is what is called fiendishly difficult; for simple traits, like having blue eyes, let's say, it's just fiendishly difficult to try to figure out how the however-many genes that are involved interact to yield this property. When you get to something like language, it's extremely difficult. So yes, in principle it's a task that can be investigated, but you have to be very careful about it, and there are a lot of very misleading things in the literature, even in the general scientific literature, journals like Science and others. So for example, there was a fad for a while, still to some extent, claiming that a particular gene, FOXP2, is sort of the language gene, that it's somehow critical for language. That couldn't be the case; there isn't going to be a language gene. And it's by now been reasonably well established that FOXP2's effect is real, but it probably has to do with fine motor actions; so it does affect articulation, which probably has almost nothing to do with language, for reasons I mentioned. But yes, it is conceivable, and in principle there should be an answer to your question: what kind of genetic changes led to the mutation that yielded something like the Basic Property. But that's a pretty remote task. Any further questions? Yes, over here. I was wondering: you mentioned that one of the reasons for the shift away from phrase structure grammar was that it allowed crazy rules; I think you gave an example of a noun phrase consisting of whatever. But then you also
mentioned that one of the more recent reasons for moving away from X-bar theory was that it couldn't handle non-endocentric structures. So is that to say that there was something right about phrase structure grammar that we overshot by going to X-bar theory?

So, he said (I'll speak loud enough so that if I misstate what you're saying, you can correct me) that you criticized phrase structure grammars of the old variety because they were capable of producing crazy rules, but on the other hand you've argued that the endocentricity requirement of X-bar theory is too strong, because there are in fact exocentric constructions that don't adhere to it. And so Tim's question is: does that imply that there was something right about the old phrase structure grammar in the first place?

It implies that phrase structure grammars were correct in permitting exocentric constructions, but the trouble is they also permitted a huge mass of other junk which we don't want. So that means the task is to find an approach to compositionality which excludes all the crazy stuff but does include exocentric constructions. That's the task.

Further question? Yes, behind you. You appeal to learnability, but on linear order you say we can't really trust it, because English and Japanese are mirror images, as X-bar theory tells you. So my question is: is it really true that Japanese and English are mirror images, and maybe that's something we should move away from?

Well, English and Japanese aren't entirely mirror images, of course; it's just that fundamentally they're mirror images. There are other properties in which they're the same, some in which they're quite different, but it's pretty much the case that the constructions that are head-complement in English are complement-head in Japanese. But there are many other properties of linear order; English and Japanese is just a simple case. So if you go back, say, to a time
when Tim was a student, in the 1970s, it was pretty widely believed (one of the really most outstanding linguists of the modern period, Ken Hale, whom Tim studied with, and whose intuition is almost a criterion for what's true and what's false, did believe at the time) that there was a parameter that distinguished flat-structure languages from hierarchic languages. And flat structure was assumed not only for, you know, Warlpiri, but for Japanese, for German, and others. It did look as if there were just flat-structure languages: no hierarchical structure, and you could order the words freely. The extreme case was the one Ken was working on mainly, Warlpiri; it just looked as if the words could be anywhere. So that looked like a real difference among languages. But the more that was learned, the more it was realized that that's just superficial, that in fact the languages have essentially the same hierarchical structure at a deeper level. One of Ken's students, Julie Legate, who now teaches at Penn, discovered, working with Ken, that Warlpiri, the extreme case, actually has the same hierarchic structures as the languages we're familiar with, when you look at the deeper properties, like anaphora and things like that. So there are many different kinds of linear orders, but what seems increasingly to be the case is that these are all properties of how you externalize the internal core language. The language that captures the Galilean property of expressing an infinite number of thoughts may even be uniform; it certainly doesn't seem to vary much. The real variation is just in how you get it out to the sensorimotor system, and I'll talk about this next. But if you think about it for a minute, think about the sensorimotor systems from an evolutionary point of view. Human language apparently emerged roughly along with humans, so a couple hundred thousand years ago; the sensorimotor systems were around forever, you know, millions of years before
that. They have nothing to do with language, okay? So whatever is going on in the head, if you want to externalize it, you're sort of forced to use one of the sensorimotor systems, which have nothing to do with language. So that's a complicated problem: how do you take two systems which have nothing to do with one another and interrelate them? There are a lot of different ways of doing it; it should be a pretty complicated problem, and it should vary all over the place. And in fact, if we look generally at what we know about languages, it increasingly seems to be the case (I'll come back to this) that the variety of languages, the complexity of language, the task of learning language, and the mutability of language, the change from generation to generation, all seem to be localized in the externalization systems. Furthermore, the externalization systems are not, strictly speaking, part of language; they are part of an amalgam of language with some other system which has nothing to do with language. That has the interesting consequence that virtually all the work that's been done on language for the last 2,500 years is not about language; it's about the way in which language connects with a system that has nothing to do with language. Now, that system has its own principles; it's not just anything goes. So there's a great deal to say about the way in which externalization takes place, differently, incidentally, if you're using sound or sign, or if you're using touch, a third possibility. Whatever it is, that's a very important topic, but it's not, strictly speaking, the study of language. And when we go back to linear order, the point you raise, there are many different possibilities. English and Japanese are kind of interesting because they've both been studied intensively, and they do roughly look like mirror images, but there are many, many other options. So take a look, say, at Mark Baker's work on polysynthetic languages. Here again the order of the words
seems to vary all over the map, but the internal structure of these complex words that are at the core of the system, which just have features inside them, has the hierarchical structure. It's the kind of thing that seems to be coming to light over and over.

I just want to pick up on Hilda's question, because I think she might have had a particular idea in mind. I think Hilda might have been interested in getting your comments on Kayne's antisymmetry. Just so people know what we're talking about, Kayne's idea was that if you compare two languages like, say, Japanese, which has object-verb order, and English, which has verb-object order, they necessarily could not be structurally parallel: one has to have a more complex structure than the other, or they're both derived from some other structure.

He did. And Richard Kayne's work is extremely interesting. Actually, there are serious challenges at this time (I'll come back to them) to the idea that linear order is irrelevant to language, very serious challenges. One is Kayne's work; another is, say, Norvin Richards's work on contiguity theory. So there are plenty of challenges. Going back to Kayne's work: I think it's very important, but I just don't think it's credible in exactly the form in which he gave it. It may turn out, and in fact it looks not unlikely, that if you take the surface form of expressions generated by the syntax, then you can largely predict the linear order on the basis of the hierarchy. But he wants something much more fundamental than that, and that more fundamental part, I think, is very hard to establish. His analysis of Japanese is very intriguing. One of the things he showed is that, on the basis of his approach, certain gaps that you might expect to be filled don't appear. That's an interesting result. On the other hand, the way he gets it is by postulating a variety of rules which have no
motivation whatsoever other than that they give you the result you want, and we would like to have rules that have some significance. It's kind of interesting (you guys will fill in the details), but Richie Kayne argued that languages are basically subject-verb-object, and shortly after, a Japanese linguist (was it Tanaka? some Japanese linguist) came out with a study saying, yeah, the basic idea is right, but all languages are subject-object-verb. And it's very tricky to get any of these things to work without postulating a wide variety of rules which just have no motivation; then you can sort of make it work.

So Hilda's saying, I think, that she's not ready to give up yet. She's saying we'll get back to this later.

I don't think one should give up; I think it's a real issue. It seems to me (I'll come back to it again) that we have very strong evidence that linear order is not part of language. On the other hand, we have very significant challenges to that: Kayne's work is one, and I think Richards's work is another. So that's, you know, the kind of nice problem you like to have in the sciences.

Yeah, one last question, over at the side. So, I'm not a linguist at all, but do you mind commenting on perhaps the uniqueness of language as a human characteristic? How we have these language acquisition devices: is there a possibility that other animals or organisms may possess a communication acquisition device, or something of the sort?

So the questioner is interested in the extent to which you see a lack of parallels, or the existence of parallels, between humans and other species, and if other species don't have language of the form humans have, whether they perhaps have something like a communication acquisition device, or whatever it is they have.

Every organism we know, down to bacteria, has some kind of communication system. But I think language is not fundamentally a communication system. It's used for communication, but that doesn't tell you that its basic
structure and design, or its evolution, have anything to do with communication, and I think the answer is that it probably doesn't; I'll come back to it. Language just evolved some other way. Humans have lots of communication systems: gesture, for example, is a communication system; a style of clothes; everything we do is some kind of communication system. And language is a very effective communication system, but it just seems unrelated, structurally and from an evolutionary point of view, to the communication systems of other organisms. It's very striking. I probably won't have time to talk about this, but language is a computational system, and any computational system will have a system of rules and a set of atoms, the elementary elements that enter into the computation. The atoms for human language are sort of concepts, kind of word-like but not exactly words. And if you take a look at the elementary elements of human language, the simple words, they have properties that no animal system has; they're just totally different. That's another one of those mysteries, where they came from, but they're just radically different from animal systems. It just seems unique. There have been a lot of efforts to try to find analogous and even homologous systems, but the only place where you can really find anything is in externalization. So, say, songbirds have systems which have some interesting similarities with the sound systems of humans, which is probably convergent evolution, because they're, I don't know, 60 million years apart or something. But there's nothing you can find that suggests any kind of continuity. The belief that there must be continuity is very widely held, and it's part of a kind of misunderstanding of evolutionary theory, a version of neo-Darwinism that suggests that any properties of an organism must have developed step by step, with very small changes.
Actually, Darwin did suggest this, and the belief was at the core of the modern evolutionary synthesis, Fisher and others, but it's been disproven in the last couple of decades. There are many examples of radical sudden changes, and it seems that human language is just one of them. Actually, I should say that if there were any seriously analogous systems in other species, then the task of neurolinguistics would be much simpler. So, for example, we know a great deal about the human visual system, but we don't know it by experimentation with humans, which is not permitted; we know it by experimentation with cats and monkeys, which have about the same visual system. That enables you to find out lots of things about the way the human visual system works. And if there were any species with anything like the language faculty (you could argue about the ethics of it, but under current ethical assumptions), you would be able to do invasive and controlled experiments with them, like raising them in controlled environments or sticking electrodes into the brain, that sort of thing. But since there's just no other organism that has anything like this, you can't do it.

Could I just do a quick follow-up to his question? What about the possibility that there could be other species who have much of, say, the part of language that is specifically associated with the conceptual-intentional system, but utterly lacking any connection to sensorimotor systems, so no possibility of externalization and no possibility of communication, but perhaps parallels in terms of representation of knowledge of the world internally, or something like that?

If you think back to the study by Riny Huybregts that I mentioned earlier, what he's postulating, in fact, is that there was an organism like that, namely early modern humans, who had the faculty of language but hadn't yet developed the externalization system. That's
the logic of his very interesting paper, trying to explain the striking fact that the first group of humans to separate from the rest did develop a strikingly different externalization system, which does strongly suggest something like what Tim is describing. But there's no known organism that has anything like it, and it would be pretty hard to find out if there were. Suppose some apes had that property; how would we find out?

Okay, I think we've had a good two hours, so we should let you have a rest. [Applause]