Jonathan Berant   -   יהונתן ברנט

[ Contact | Teaching | Publications | Talks | Resources ]

I am a post-doc at the Stanford Natural Language Processing Group and a recipient of the 2012 Rothschild fellowship. I completed my PhD at the Blavatnik School of Computer Science at Tel-Aviv University, working at Bar-Ilan University's NLP lab. The title of my thesis is Global Learning of Textual Entailment Graphs. I am mainly interested in automatically extracting semantic structure for tasks such as paraphrasing, semantic parsing, textual entailment, relation extraction, and question answering.

I have released a new state-of-the-art resource containing millions of entailment rules between predicates. You can find explanations and download the resource here.

SEMPRE
Check out our open source project SEMPRE, and build your own semantic parser!
Semantic parsing workshop
We are organizing a semantic parsing workshop at ACL 2014. Check out the details on our website.
Contact me
my email
Teaching

Computational Models, spring 2010

Computational Models, fall 2009

Computational Models, spring 2009

Computer Science in the Community, spring 2009

Computational Models, fall 2008

Computational Models, spring 2008

Publications
New    Jonathan Berant and Percy Liang Semantic Parsing via Paraphrasing Long paper in ACL 2014.    PDF  Bib   project page
A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves state-of-the-art accuracies on two recently released question-answering datasets.
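The vector-space paraphrase model can be caricatured with a minimal bag-of-words cosine scorer. This is a toy sketch, not PARASEMPRE's trained model, and the questions and candidate realizations below are invented for illustration:

```python
from collections import Counter
import math

def cosine(u, v):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def best_paraphrase(utterance, candidates):
    """Return the candidate canonical realization closest to the utterance."""
    u = Counter(utterance.lower().split())
    return max(candidates, key=lambda c: cosine(u, Counter(c.lower().split())))

# Toy example: choose the realization that best paraphrases the input question.
cands = ["what city was Obama born in", "what is Obama's profession"]
print(best_paraphrase("where was Obama born", cands))
# -> what city was Obama born in
```

In the paper the paraphrase score is learned jointly with an association model from question-answer pairs; here the scoring function is fixed, which is the whole simplification.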

Jonathan Berant, Andrew Chou, Roy Frostig and Percy Liang Semantic Parsing on Freebase from Question-Answer Pairs Long paper in EMNLP 2013.    PDF  Bib   project page
In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which are especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs, on which our system improves over a natural baseline.

Aju Thalappillil Scaria*, Jonathan Berant*, Mengqiu Wang, Peter Clark, Justin Lewis, Brittany Harding and Christopher D. Manning Learning Biological Processes with Global Constraints Long paper in EMNLP 2013.    PDF  Bib   project page
* equal contribution
Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference, such as question answering (QA) -- specifically "How?" and "Why?" questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement over baselines that disregard process structure.

Oren Melamud, Jonathan Berant, Ido Dagan, Jacob Goldberger and Idan Szpektor A Two Level Model for Context Sensitive Inference Rules Long paper in ACL 2013. (best paper runner-up)    PDF  Bib  
Automatic acquisition of inference rules for predicates has been commonly addressed by computing distributional similarity between vectors of argument words, operating at the word space level. A recent line of work, which addresses context sensitivity of rules, represented contexts in a latent topic space and computed similarity over topic vectors. We propose a novel two-level model, which computes similarities between word-level vectors that are biased by topic-level context representations. Evaluations on a naturally-distributed dataset show that our model significantly outperforms prior word-level and topic-level models. We also release a first context-sensitive inference rule set.

Hila Weisman, Jonathan Berant, Idan Szpektor and Ido Dagan Learning Verb Inference Rules from Linguistically-Motivated Evidence Long paper in EMNLP 2012.    PDF  Bib  
Learning inference relations between verbs is at the heart of many semantic applications. However, most prior work on learning such rules focused on a rather narrow set of information sources: mainly distributional similarity, and to a lesser extent manually constructed verb co-occurrence patterns. In this paper, we claim that it is imperative to utilize information from various textual scopes: verb co-occurrence within a sentence, verb co-occurrence within a document, as well as overall corpus statistics. To this end, we propose a much richer novel set of linguistically motivated cues for detecting entailment between verbs and combine them as features in a supervised classification framework. We empirically demonstrate that our model significantly outperforms previous methods and that information from each textual scope contributes to the verb entailment learning task.

Jonathan Berant, Ido Dagan, Meni Adler and Jacob Goldberger Efficient Tree-based Approximation for Entailment Graph Learning Long paper in ACL 2012.    PDF  Bib  
Learning entailment rules is fundamental in many semantic-inference applications and has been an active field of research in recent years. In this paper we address the problem of learning transitive graphs that describe entailment rules between predicates (termed entailment graphs). We first identify that entailment graphs exhibit a “tree-like” property and are very similar to a novel type of graph termed forest-reducible graph. We utilize this property to develop an iterative efficient approximation algorithm for learning the graph edges, where each iteration takes linear time. We compare our approximation algorithm to a recently-proposed state-of-the-art exact algorithm and show that it is more efficient and scalable both theoretically and empirically, while its output quality is close to that given by the optimal solution of the exact algorithm.

Naomi Zeichner, Jonathan Berant and Ido Dagan Crowdsourcing Inference-Rule Evaluation Short paper in ACL 2012.    PDF  Bib  
The importance of inference rules to semantic applications has long been recognized and extensive work has been carried out to automatically acquire inference-rule resources. However, evaluating such resources has turned out to be a non-trivial task, slowing progress in the field. In this paper, we suggest a framework for evaluating inference-rule resources. Our framework simplifies a previously proposed "instance-based evaluation" method that involved substantial annotator training, making it suitable for crowdsourcing. We show that our method produces a large number of annotations with high inter-annotator agreement, at low cost and within a short period of time, without requiring the training of expert annotators.

Meni Adler, Jonathan Berant and Ido Dagan Entailment-based Text Exploration with Application to the Health-care Domain Demo paper in ACL 2012.    PDF  Bib  
We present a novel text exploration model, which extends the scope of state-of-the-art technologies by moving from standard concept-based exploration to statement-based exploration. The proposed scheme utilizes the textual entailment relation between statements as the basis of the exploration process. A user of our system can explore the result space of a query by drilling down/up from one statement to another, according to entailment relations specified by an entailment graph and an optional concept taxonomy. As a prominent use case, we apply our exploration system and illustrate its benefit on the health-care domain. To the best of our knowledge this is the first implementation of an exploration system at the statement level that is based on the textual entailment relation.

Asher Stern, Amnon Lotan, Shachar Mirkin, Eyal Shnarch, Lili Kotlerman, Jonathan Berant and Ido Dagan Knowledge and Tree-Edits in Learnable Entailment Proofs. Proceedings of TAC 2011.    PDF  Bib  
This paper describes BIUTEE - the Bar-Ilan University Textual Entailment Engine. BIUTEE is a natural language inference system in which the hypothesis is proven from the text, based on linguistic- and world-knowledge resources, as well as syntactically motivated tree transformations. The main progress in BIUTEE in the last year is a new confidence model that estimates the validity of the proof found by BIUTEE.

Jonathan Berant, Ido Dagan and Jacob Goldberger Learning Entailment Relations by Global Graph Structure Optimization. Journal article in Computational Linguistics 38(1), pp. 73-111 (2012)    PDF  Bib  
Identifying entailment relations between predicates is an important part of applied semantic inference. In this article we propose a global inference algorithm that learns such entailment rules. First, we define a graph structure over predicates that represents entailment relations as directed edges. Then, we use a global transitivity constraint on the graph to learn the optimal set of edges, formulating the optimization problem as an Integer Linear Program. The algorithm is applied in a setting where, given a target concept, the algorithm learns on-the-fly all entailment rules between predicates that co-occur with this concept. Results show that our global algorithm improves performance over baseline algorithms by more than 10%.
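The global transitivity constraint in this line of work can be illustrated with a tiny brute-force sketch. This is not the paper's ILP formulation (which uses a solver and scales far beyond toy graphs), and the predicates and local scores below are made up:

```python
from itertools import product

def is_transitive(edges):
    """Check the global constraint: i -> j and j -> k imply i -> k."""
    es = set(edges)
    return all((i, k) in es
               for i, j in es for j2, k in es
               if j == j2 and i != k)

def best_transitive_graph(scores):
    """Brute-force analogue of the ILP: choose the edge set that maximizes
    the sum of local entailment scores subject to transitivity.
    Feasible only for toy graphs (exponential in the number of edges)."""
    candidates = list(scores)
    best, best_val = set(), 0.0
    for mask in product([0, 1], repeat=len(candidates)):
        edges = {e for e, m in zip(candidates, mask) if m}
        if not is_transitive(edges):
            continue
        val = sum(scores[e] for e in edges)
        if val > best_val:
            best, best_val = edges, val
    return best

# Hypothetical local scores: positive = local evidence for entailment.
scores = {("marry", "wed"): 0.9,
          ("wed", "be spouse of"): 0.8,
          ("marry", "be spouse of"): -0.3}
# Transitivity forces the third edge in despite its negative local score,
# because keeping the chain marry -> wed -> "be spouse of" outweighs it.
print(best_transitive_graph(scores))
```

The point of the example is that the globally optimal graph can disagree with the local classifier on individual edges, which is exactly what the transitivity constraint buys.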

Jonathan Berant, Ido Dagan and Jacob Goldberger Global Learning of Typed Entailment Rules. Long paper in the proceedings of ACL 2011 (best student paper)    PDF  Bib  
Extensive knowledge bases of entailment rules between predicates are crucial for applied semantic inference. In this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates. We model the task as a graph learning problem and suggest methods that scale the algorithm to larger graphs. We apply the algorithm over a large data set of extracted predicate instances, from which a resource of typed entailment rules has been recently released (Schoenmackers et al., 2010). Our results show that using global transitivity information substantially improves performance over this resource and several baselines, and that our scaling methods allow us to increase the scope of global learning of entailment-rule graphs.

Catherine L. Caldwell-Harris, Jonathan Berant and Shimon Edelman Measuring Mental Entrenchment of Phrases with Perceptual Identification, Familiarity Ratings, and Corpus Frequency Statistics. To appear in S. T. Gries and D. Divjak (eds.), Frequency effects in cognitive linguistics (Vol. 1): Statistical effects in learnability, processing and change, The Hague, The Netherlands: De Gruyter Mouton (2011).    PDF 

Asher Stern, Eyal Shnarch, Amnon Lotan, Shachar Mirkin, Lili Kotlerman, Naomi Zeichner, Jonathan Berant and Ido Dagan Rule Chaining and Approximate Match in Textual Inference. Text Analysis Conference 2010 (RTE-6)   PDF 
This paper describes the participation of Bar-Ilan University in the sixth RTE challenge. Our textual-entailment engine, BiuTee, was enhanced with new components that introduce chaining of lexical-entailment rules and tackle the problem of approximately matching the text and the hypothesis after all available knowledge of entailment rules has been utilized. We have also re-engineered our system, aiming at an open-source open architecture. BiuTee's performance is better than the median of all submissions, and it significantly outperforms an IR-oriented baseline.

Shachar Mirkin, Jonathan Berant, Ido Dagan and Eyal Shnarch Recognising Entailment within Discourse. Proceedings of COLING, 2010.   PDF  Bib  
Texts are commonly interpreted based on the entire discourse in which they are situated. Discourse processing has been shown useful for inference-based applications; yet, most systems for textual entailment - a popular paradigm for applied inference - have only addressed discourse considerations via off-the-shelf coreference resolvers. In this paper we explore various discourse aspects in entailment inference, suggest initial solutions for them and investigate their impact on entailment performance. Our experiments suggest that discourse provides useful information, which significantly improves entailment inference, and should be better addressed by future entailment systems.

Jonathan Berant, Ido Dagan and Jacob Goldberger Global Learning of Focused Entailment Graphs. Long paper in the proceedings of ACL, 2010.   PDF  Bib  
We propose a global algorithm for learning entailment relations between predicates. We define a graph structure over predicates that represents entailment relations as directed edges, and use a global transitivity constraint on the graph to learn the optimal set of edges, by formulating the optimization problem as an Integer Linear Program. We motivate this graph with an application that provides a hierarchical summary for a set of propositions that focus on a target concept, and show that our global algorithm improves performance by more than 10% over baseline algorithms.

Shachar Mirkin, Roy Bar-Haim, Jonathan Berant, Ido Dagan, Eyal Shnarch, Asher Stern and Idan Szpektor Addressing Discourse and Document Structure in the RTE Search Task. Proceedings of TAC, 2009. PDF  Bib  
This paper describes Bar-Ilan University's submissions to RTE-5. This year we focused on the Search pilot, enhancing our entailment system to address two main issues introduced by this new setting: scalability and, primarily, document-level discourse. Our system achieved the highest score on the Search task amongst participating groups, and proposes first steps towards addressing this challenging setting.

Roy Bar-Haim, Jonathan Berant and Ido Dagan A Compact Forest for Scalable Inference over Entailment and Paraphrase Rules. Proceedings of EMNLP, 2009. PDF  Bib  
A large body of recent research has been investigating the acquisition and application of applied inference knowledge. Such knowledge may be typically captured as entailment rules, applied over syntactic representations. Efficient inference with such knowledge then becomes a fundamental problem. Starting out from a formalism for entailment-rule application we present a novel packed data-structure and a corresponding algorithm for its scalable implementation. We proved the validity of the new algorithm and established its efficiency analytically and empirically.

Roy Bar-Haim, Jonathan Berant, Ido Dagan, Iddo Greental, Shachar Mirkin, Eyal Shnarch and Idan Szpektor Efficient Semantic Deduction and Approximate Matching over Compact Parse Forests. Proceedings of TAC, 2008. PDF  Bib  
Semantic inference is often modeled as application of entailment rules, which specify generation of entailed sentences from a source sentence. Efficient generation and representation of entailed consequents is a fundamental problem common to such inference methods. We present a new data structure, termed compact forest, which allows efficient generation and representation of entailed consequents, each represented as a parse tree. Rule-based inference is complemented with a new approximate matching measure inspired by tree kernels, which is computed efficiently over compact forests. Our system also makes use of novel large-scale entailment rule bases, derived from Wikipedia as well as from information about predicates and their argument mapping, gathered from available lexicons and complemented by unsupervised learning.

Jonathan Berant, Catherine Caldwell-Harris and Shimon Edelman Tracks in the Mind: Differential Entrenchment of Common and Rare Liturgical and Everyday Multiword Phrases in Religious and Secular Hebrew Speakers. Proceedings of CogSci, 2008. PDF  Bib  
We tested the hypothesis that more frequent exposure to multiword phrases results in deeper entrenchment of their representations, by examining the performance of subjects of different religiosity in the recognition of briefly presented liturgical and secular phrases drawn from several frequency classes. Three of the sources were prayer texts that religious Jews are required to recite on a daily, weekly, and annual basis, respectively; two others were common and rare expressions encountered in the general secular Israeli culture. As expected, linear dependence of recognition score on frequency was found for the religious subjects (being most pronounced for men, who are usually more observant than women); both religious and secular subjects performed better on common than on rare general culture items. Our results support the notion of graded entrenchment introduced by Langacker and shared by several cognitive linguistic theories of language comprehension and production.

Jonathan Berant, Yaron Gross, Matan Mussel, Ben Sandbank, Eytan Ruppin and Shimon Edelman Boosting Unsupervised Grammar Induction by Splitting Complex Sentences on Function Words. Proceedings of BUCLD, 2007. 
Talks

Semantic Parsing on Freebase from Question-Answer pairs, Google, October 2013. 

Global Learning of Textual Entailment Graphs, thesis public lecture, Tel-Aviv University, August 2012. PDF 

Global Learning of Entailment Graphs, NYU, Columbia, MIT and UIUC seminars, January 2011. PDF 

Global Learning of Focused Entailment Graphs, University of Washington AI seminar, Seattle, October 2010. PDF 

An Entailment-based Ontology for Domain-Specific Relations, ITCH workshop, Trento, September 2009. PDF 

Standard and Non-standard Parse Trees Equally Improve Grammar Induction, ISCOL, Ramat-Gan, September 2008. PDF 

Short presentation about the argument from the poverty of the stimulus. PDF 

Resources
  • A state-of-the-art resource of entailment rules between natural language predicates, containing millions of entailment rules. The resource contains a knowledge-base obtained by applying the global algorithm presented in the ACL 2012 paper "Efficient Tree-based Approximation for Entailment Graph Learning", and also a knowledge-base constructed using a local entailment classifier described in my thesis (to be published soon).  
  • All data required for performing the experiment described in Section 5.1 of the paper: "Global Learning of Typed Entailment Rules" RAR 
  • A resource of 30,000 entailment rules between typed predicates, as described in the paper: "Global Learning of Typed Entailment Rules" ZIP 
  • Gold standard healthcare graphs described in "Global Learning of Focused Entailment Graphs", ACL 2010, and "Learning Entailment Relations by Global Graph Structure Optimization", CL 38(1). RAR 
  • A Java implementation of Klein and Manning's (2002) unsupervised CCM parser. Note that this is my implementation and is not endorsed or associated with the authors. ZIP 
Academic Activities
  • JACM reviewer, 2011
  • JAIR reviewer, 2011
  • TIST reviewer, 2011
  • Semantics program committee, EMNLP 2011
  • Semantics program committee, EACL 2012
  • Program committee, *SEM 2012
  • Program committee, IWCS 2013
  • Program committee, NAACL 2013
  • Program committee, NAACL 2013 student workshop
  • Program committee, EMNLP 2013
  • Program committee, EACL 2013

Other

    Since September 2012, I have been a Rothschild fellow.

    Since September 2007, I have been a member of the Azrieli Fellows Program.

    Since September 2010, I have been an IBM PhD fellow.

    My labmates and I won the 2009 RTE-Search challenge.



    The format of this home page was copied from Shachar Mirkin