Artificial Intelligence: Retrospective/Prospective

Nachum Dershowitz
Tel-Aviv University

==================

The relationship between human and machine has been profoundly affected by the astounding growth of the Internet, engendered by the creation of the browser interface in the early nineties. The resultant web of interlinked files is already fulfilling one goal of artificial intelligence research, namely, providing the mind (for the time being, via computer screen or hand-held cellular instrument) with instantaneous access to enormous and rapidly growing repositories of data. Thus, the dream of artificially intelligent devices laboring for our corporeal convenience--household robots vacuuming our living rooms and industrial robots repairing our cars--has been superseded by a vision of an information-hungry society in which brave, new intelligent software dramatically enhances the cerebral and recreational aspects of human pursuits in an ever-shrinking wired world. These developments have had obvious practical ramifications for many traditional themes of artificial intelligence, such as natural language understanding and mechanical translation, and, at the same time, have greatly increased the immediacy of some nascent research topics, including speech comprehension and intelligent agents. The vast digital libraries of text in amorphous natural language, virtual museums with their bountiful graphical images, and stores of sound tracks, all being made available to increasingly discriminating search engines, pose two great challenges to computerized thought processes and machine epistemology: first, the search for, and extraction of, relevant information from masses of raw, undigested data--and what that means for knowledge representation; second, the processing of gleaned information to draw valid conclusions or respond to specific queries--and what that implies for knowledge derivation. It is the latter aspect I wish to address further.
Symbolic deduction was one of the earliest human endeavors to be subjected to the scrutiny of computer science research. It is perhaps fair to say that logic is the "mother tongue" of machine intelligence. The Davis-Putnam procedure and others were invented for propositional reasoning, and resolution was devised to facilitate inference in the predicate calculus. The past decade has seen the evolution of industrial-grade Boolean reasoning tools, equipped with novel search heuristics, thanks to which it is no longer unrealistic to formulate and solve large-scale real-world hardware and software verification problems in Boolean terms. Inductive provers have been applied to many varied and difficult problems, while a renewed emphasis on equational reasoning led to the well-publicized recent computer solution of the Robbins algebra conjecture. Concurrently, deep results based on progress in rewriting theory have been used to analyze and improve first-order theorem-proving strategies. Significant theoretical advances have also been made in the rigorous formalization of common-sense and nonstandard reasoning. Simplification of expressions, by repeatedly rewriting subexpressions to equal terms, has proved to be of critical importance to symbolic computation, in general, and to modern theorem proving, in particular. Rewriting is used in symbolic computation, program-manipulation systems, and algebraic programming languages. Alas, contemporary symbolic computation systems are actually unsound, and current theorem provers often employ circular tactics. The theory of rewriting, which centers on normal forms and conditions for their existence and uniqueness, can contribute to resolving these issues. Rewriting is also the mechanism by which deduction is performed when inference rules are used to describe a formal logic, theorem prover, or constraint solver.
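The flavor of Davis-Putnam-style propositional reasoning can be conveyed by a minimal sketch: propagate unit clauses, split on a literal, and backtrack on conflict. (The encoding of clauses as sets of signed integers, and all function names, are my own illustrative choices, not drawn from the text.)

```python
# A minimal DPLL-style satisfiability check, in the spirit of the
# Davis-Putnam procedure. A clause is a frozenset of nonzero integers;
# a negative literal denotes the negation of the positive one.

def assign(clauses, assignment, lit):
    """Set lit true: drop satisfied clauses, shrink the rest."""
    out = set()
    for c in clauses:
        if lit in c:
            continue               # clause satisfied, drop it
        reduced = c - {-lit}
        if not reduced:
            return None, None      # empty clause: conflict
        out.add(frozenset(reduced))
    return out, assignment | {lit}

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    while True:                    # unit propagation
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses, assignment = assign(clauses, assignment, unit)
        if clauses is None:
            return None
    if not clauses:
        return assignment          # every clause satisfied
    lit = next(iter(next(iter(clauses))))   # branch on some literal
    for choice in (lit, -lit):
        new_clauses, new_assignment = assign(clauses, assignment, choice)
        if new_clauses is not None:
            result = dpll(new_clauses, new_assignment)
            if result is not None:
                return result
    return None
```

Modern industrial-grade Boolean solvers add clause learning and clever branching heuristics on top of essentially this skeleton.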
For example, the Knuth-Bendix equational "completion" procedure and its derivatives are founded upon this concept and stand out as uniquely successful forward-reasoning engines, in which the lack of direction inherent in reasoning from axioms, "bottom-up", is balanced by the powerful space-saving simplification process, which preserves only the simplest known consequences. Experimental evidence indicates that this class of theorem prover devotes much more time and effort to simplification than to ordinary deduction. This suggests that research in simplification, with its inherent compatibility with concurrent and distributed computation, will continue to grow in impact. More generally, the notion of redundancy, with a concomitant theory to support the completeness of inference despite deletion of redundant consequences, shows promise. In the future, symbolic computation will need to take the well-foundedness of replacement operations and the equivalence of the result more to heart. The next decade will hopefully see deductive methods brought down to the level of day-to-day reasoning tasks. At the same time, the proliferation of earth-bound and space-borne software will increase the demand for deductive program-synthesis and modification tools. This leads me to another pivotal issue, that of programming languages. In this realm, the most noticeable event of the past ten years was the development of a pointer-free object-oriented language for programming web applications. But it is the further development of non-procedural paradigms and analytical tools that is essential to the long-term viability of large, modularly constructed, intelligent-acting systems. It comes as little surprise that artificial-intelligence research centers were the birthplace and nurseries of practical languages having a logical core.
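The simplification process at the heart of completion, rewriting subterms until no rule applies and a normal form is reached, can be sketched in a toy engine over terms-as-tuples. (The Peano-addition rules and every identifier below are illustrative assumptions, not taken from the essay.)

```python
# Toy term rewriting: repeatedly replace subterms matching a rule's
# left-hand side until a normal form is reached. A term is a nested
# tuple ('f', arg1, ...); strings starting with '?' are pattern variables.

def match(pattern, term, subst):
    """Extend subst so that pattern instantiates to term, or return None."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term) and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(term, subst):
    """Replace pattern variables in term according to subst."""
    if isinstance(term, str) and term.startswith('?'):
        return subst[term]
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(t, subst) for t in term[1:])
    return term

def rewrite_once(term, rules):
    """Apply one rule at the root or in some subterm; None if irreducible."""
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return substitute(rhs, s)
    if isinstance(term, tuple):
        for i, sub in enumerate(term[1:], 1):
            new = rewrite_once(sub, rules)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

def normal_form(term, rules):
    """Rewrite until no rule applies (assumes the system terminates)."""
    while True:
        new = rewrite_once(term, rules)
        if new is None:
            return term
        term = new

# Peano addition:  0 + y -> y,   s(x) + y -> s(x + y)
RULES = [(('+', '0', '?y'), '?y'),
         (('+', ('s', '?x'), '?y'), ('s', ('+', '?x', '?y')))]
```

Normal forms are unique here because the rule system is terminating and confluent; establishing those two properties for a given system is precisely what the theory of rewriting is about.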
The rule-based programming paradigm began with early expert systems and with logic programming, and is manifest today both in the rewriting of functional languages and in the nondeterministic search of logic languages (albeit with dramatic concessions to efficiency that seriously compromised their logical underpinnings). More futuristic language designs (Planner at MIT and Qlisp at SRI come to mind) fell by the wayside because of what were, at the time, astronomical costs attached to each evaluation step. The rule-based approach, naturally friendly towards concurrent execution of subprocesses, has been resurrected quite recently in languages and systems that upgrade rules to "first-class" objects, and in the rewriting calculi and logics that ensued. Related developments include several attempts at merging the equational and Horn-clause models. The most exciting direction is constraint-inference programming. Of course, constraint-based reasoning and constraint solving are old AI methodologies. In their more modern guise, constraints provide a language in which to express problem solving and a logical means of limiting search. Theoretical work on the completeness of solvers will contribute to the projected success of this approach. Despite the profusion of computers within every walk of life, programming remains the esoteric pursuit of highly trained practitioners. Anyone can navigate the web or put a family photograph on-line, but one would be hard put to find, say, an amateur-built expert system. True, many people can drive a car, though few can design one. But it takes only limited skills to plan and build a toy truck, given a few hand tools and some scrap wood. Some day we will have analogous tools for home-brewed intelligent systems.
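The way constraints express a problem while simultaneously limiting search can be illustrated with a small sketch: backtracking over finite domains, where a consistency check prunes inconsistent choices before the search commits to them. (The map-coloring instance, the inequality constraints, and all names are hypothetical illustrations, not from the text.)

```python
# A small constraint-solving sketch: backtracking search over finite
# domains with binary "not-equal" constraints, applied to map coloring.

def solve(variables, domains, neighbors, assignment=None):
    """Return a consistent total assignment, or None if none exists."""
    assignment = dict(assignment or {})
    unassigned = [v for v in variables if v not in assignment]
    if not unassigned:
        return assignment
    var = unassigned[0]
    for value in domains[var]:
        # Constraint check: no already-colored neighbor shares the color.
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = solve(variables, domains, neighbors,
                           {**assignment, var: value})
            if result is not None:
                return result
    return None  # no consistent value for var: backtrack

# Mainland Australian regions, a textbook coloring instance.
REGIONS = ['WA', 'NT', 'SA', 'Q', 'NSW', 'V']
NEIGHBORS = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
             'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'],
             'V': ['SA', 'NSW']}
DOMAINS = {r: ['red', 'green', 'blue'] for r in REGIONS}
```

Full-fledged constraint-programming systems add propagation, variable-ordering heuristics, and richer constraint languages, but the interplay of declarative statement and pruned search is already visible here.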