Aside from simply examining the concepts that underlie the various constructs/features of programming languages, Sebesta also aims to evaluate those features with respect to how they impact the software development process, including maintenance.
So he sets forth a few evaluation criteria (namely readability, writability, reliability, and cost) and several characteristics of programming languages that should be considered when evaluating a language with respect to those criteria.
See Table 1.1 on page 8. Then, for each of the criteria, Sebesta discusses how each of the characteristics relates to it.
1.3.1 Readability: This refers to the ease with which programs (in the language under consideration) can be understood. This is especially important for software maintenance.
Example from assembly languages: In VAX assembler, the instruction for 32-bit integer addition has the form

    ADDL op1, op2

where each of op1 and op2 can refer to either a register or a memory location. This is nicely orthogonal.
In contrast, the assembly languages for IBM mainframes have two separate analogous ADD instructions,

    A   Reg1, memory_cell
    AR  Reg1, Reg2

the first of which requires op1 to refer to a register and op2 to a memory location, the second of which requires both to refer to registers. This is lacking in orthogonality.
Too much orthogonality? As with almost everything, one can go too far. Algol 68 was designed to be very orthogonal, and turned out to be too much so, perhaps. As B.T. Denvir wrote (see page 18 in "On Orthogonality in Programming Languages", ACM SIGPLAN Notices, July 1979, accessible via the ACM Digital Library):
Intuition leads one to ascribe certain advantages to orthogonality: the reduction in the number of special rules or exceptions to rules should make a language easier "to describe, to learn, and to implement" — in the words of the Algol 68 report. On the other hand, strict application of the orthogonality principle may lead to constructs which are conceptually obscure when a rule is applied to a context in an unusual combination. Likewise the application of orthogonality may extend the power and generality of a language beyond that required for its purpose, and thus may require increased conceptual ability on the part of those who need to learn and use it.
As an example of Algol 68's extreme orthogonality, it allows the left-hand side of an assignment statement to be any expression that evaluates to an address!
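C++ offers a faint echo of this that may make the idea concrete: a conditional expression whose branches are both variables is itself a legal assignment target. A minimal sketch (an analogy only, not Algol 68):

    #include <iostream>

    int main() {
        int a = 0, b = 0;
        bool pick_a = true;

        // The assignment target is an expression that evaluates to a
        // location: the conditional picks which variable receives 42.
        (pick_a ? a : b) = 42;

        std::cout << a << " " << b << "\n";  // prints: 42 0
        return 0;
    }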
Other criteria (not deserving separate sections in the textbook):
Portability: the ease with which programs that work on one platform can be modified to work on another. This is strongly influenced by the degree to which a language is standardized.
Generality: Applicability to a wide range of applications.
Well-definedness: Completeness and precision of the language's official definition.
The criteria listed here are neither precisely defined nor exactly measurable, but they are, nevertheless, useful in that they provide valuable insight when evaluating a language.
1.4.1 Computer Architecture: By 1950, the basic architecture of digital computers had been established (and described nicely in John von Neumann's EDVAC report). A computer's machine language is a reflection of its architecture, with its assembly language adding a thin layer of abstraction to make programming easier. When FORTRAN was being designed in the mid to late 1950's, one of the prime goals was for the compiler to generate code that was as fast as the equivalent assembly code that a programmer would produce "by hand". To achieve this goal, the designers —not surprisingly— simply put a layer of abstraction on top of assembly language, so that the resulting language still closely reflected the structure and operation of the underlying machine. To have designed a language that deviated greatly from that would have made the compiler more difficult to develop and less likely to produce fast-running machine code.
The style of programming exemplified by FORTRAN is referred to as imperative, because a program is basically a bunch of commands. (Recall that, in English, a command is referred to as an "imperative" statement, as opposed to, say, a question, which is an "interrogative" statement.)
This style of programming has dominated for the last fifty years! Granted, many refinements have occurred. In particular, OO languages put much more emphasis on designing a program based upon the data involved and less on the commands/processing. But the notion of having variables (corresponding to memory locations) and changing their values via assignment commands is still prominent.
Functional languages (in which the primary means of computing is to apply functions to arguments) have much to recommend them, but they've never gained wide popularity, in part because they tend to run slowly on machines with a von Neumann architecture. (The granddaddy of functional languages is Lisp, developed in about 1958 by McCarthy at MIT.)
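To give the flavor in C++ terms (a sketch only; a real functional language such as Lisp looks quite different syntactically): the computation below is expressed by applying and composing functions rather than by stepping a variable through repeated assignments.

    #include <iostream>
    #include <numeric>
    #include <vector>

    // Functional flavor: the result is produced by applying functions
    // to arguments, not by updating mutable state in a loop.
    int square(int x) { return x * x; }

    int sumOfSquares(const std::vector<int>& xs) {
        return std::accumulate(xs.begin(), xs.end(), 0,
                               [](int acc, int x) { return acc + square(x); });
    }

    int main() {
        std::cout << sumOfSquares({1, 2, 3, 4}) << "\n";  // prints: 30
        return 0;
    }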
The same could be said (both as to merit and as to limited popularity) for Prolog, the most prominent language in the logic programming paradigm.
Interestingly, as long ago as 1977 (specifically, in his Turing Award Lecture, with the corresponding paper appearing in the August 1978 issue of Communications of the ACM), John Backus (famous for leading the team who designed and implemented FORTRAN) harshly criticized imperative languages, asking "Can Programming be Liberated from the von Neumann Style?" He set forth the idea of an FP (functional programming) system, which he viewed as being a superior style of programming. He also challenged the field to develop an architecture well-suited to this style of programming.
Here is an interesting passage from the article:
Conventional programming languages are basically high level, complex versions of the von Neumann computer. Our thirty year old belief that there is only one kind of computer is the basis of our belief that there is only one kind of programming language, the conventional —von Neumann— language. The differences between Fortran and Algol 68, although considerable, are less significant than the fact that both are based on the programming style of the von Neumann computer. Von Neumann programming languages use variables to imitate the computer's storage cells; control statements elaborate its jump and test instructions; and assignment statements imitate its fetching, storing, and arithmetic. The assignment statement is the von Neumann bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer's bottleneck does.
1.4.2 Programming Method(ologie)s: Advances in methods of programming also have influenced language design, of course. Refinements in thinking about flow of control led to better language constructs for selection (i.e., if statements) and loops that force the programmer to be disciplined in the use of jumps/branching (by hiding them). This is called structured programming.
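A C++ sketch of the contrast (an invented example): both fragments sum the integers 0 through 9, but the first wires up the test-and-jump logic explicitly, while the second hides those jumps inside a loop construct.

    #include <iostream>

    int main() {
        int i = 0, sum = 0, sum2 = 0;

        // Unstructured style: flow of control built from explicit jumps.
    top:
        if (i >= 10) goto done;
        sum += i;
        ++i;
        goto top;
    done:
        // Structured style: the same loop, with the jumps hidden inside
        // the for construct.
        for (int j = 0; j < 10; ++j) sum2 += j;

        std::cout << sum << " " << sum2 << "\n";  // prints: 45 45
        return 0;
    }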
An increased emphasis on data (as compared to process) led to better language support for data abstraction. This continued to the point where now the notions of abstract data type and module have been fused into the concept of a class in object-oriented programming.
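Here is a small C++ sketch of that fusion (Counter is an invented example, not from the textbook): the class hides its representation and exposes operations, like an ADT, and it bundles everything into one named unit, like a module.

    #include <iostream>

    // A class fuses the ADT idea (hidden representation, public
    // operations) with the module idea (one named unit of packaging).
    class Counter {
    public:
        void increment() { ++count_; }
        int  value() const { return count_; }
    private:
        int count_ = 0;   // representation is inaccessible to clients
    };

    int main() {
        Counter c;
        c.increment();
        c.increment();
        std::cout << c.value() << "\n";  // prints: 2
        return 0;
    }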
1.5 Language Categories: The four categories usually recognized are imperative, object-oriented, functional, and logic. Sebesta seems to doubt that OO deserves a separate category, because one need not add all that much to an imperative language, for example, to make it support the OO style. (Indeed, C++, Java, and Ada 95 are all quite imperative. And even functional and logic languages have had OO constructs added to them.)
1.7 Implementation Methods: Computers execute machine code. Hence, to run code written in any other language, that code must first be translated into machine code. Software that does this is called a translator. If you have a translator that allows you to execute programs written in language X, then, in effect, you have a virtual X machine. (See Figure 1.2.)
There are three general translation methods: compilation, interpretation, and a hybrid of the two.
1.7.1 Compilation: See Figure 1.3 for a depiction of the various phases that occur in compilation. The first two phases, lexical and syntax analysis, are covered in Chapter 4. The job of a lexical analyzer, or scanner, is to transform the text comprising a program unit (e.g., class, module, file) into a sequence of tokens corresponding to the logical units occurring in the program. (For example, the substring while is recognized as being one unit, as is each occurrence of an identifier, each operator symbol, etc.) The job of the syntax analyzer is to take the sequence of tokens yielded by the scanner and to "figure out" the program's structure, i.e., how those tokens relate to each other.
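A toy C++ sketch of the scanner's job (the token rules and the input string are invented for illustration): it turns the character string "while (x1 > 0)" into one token per logical unit, which the syntax analyzer would then consume.

    #include <cctype>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Toy scanner: splits the input into identifiers/keywords, integer
    // literals, and single-character operators/punctuation.
    std::vector<std::string> scan(const std::string& src) {
        std::vector<std::string> tokens;
        std::size_t i = 0;
        while (i < src.size()) {
            unsigned char c = src[i];
            if (std::isspace(c)) { ++i; continue; }
            std::size_t start = i;
            if (std::isalpha(c)) {
                while (i < src.size() && std::isalnum(static_cast<unsigned char>(src[i]))) ++i;
            } else if (std::isdigit(c)) {
                while (i < src.size() && std::isdigit(static_cast<unsigned char>(src[i]))) ++i;
            } else {
                ++i;  // operator or punctuation: one character per token
            }
            tokens.push_back(src.substr(start, i - start));
        }
        return tokens;
    }

    int main() {
        for (const std::string& t : scan("while (x1 > 0)"))
            std::cout << "[" << t << "] ";
        std::cout << "\n";  // prints: [while] [(] [x1] [>] [0] [)]
        return 0;
    }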
To draw an analogy with analyzing sentences in English, lexical analysis identifies the words (and possibly their parts of speech) and punctuation, which the syntax analyzer uses to determine the boundaries between sentences and to form a diagram of each sentence. Example sentence: The gorn killed Kirk with a big boulder.
       S          V          D.O.
      gorn  |  killed  |  Kirk
    --------+----------+-------
       \              \
        The            with
       (adj)             \----boulder----
                              \a   \big
                           (prep. phrase)
1.7.2 Pure Interpretation: Let X be a programming language. An X interpreter is a program that simulates a computer whose "native language" is X. That is, the interpreter repeatedly fetches the "next" instruction (from the X program being interpreted), decodes it, and executes it. A computer is itself an interpreter of its own machine language, except that it is implemented in hardware rather than software.
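A minimal C++ sketch of that fetch-decode-execute cycle, for a hypothetical three-instruction stack machine (the instruction set is invented for illustration):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // A hypothetical three-instruction set for a tiny stack machine.
    enum class Op { Push, Add, Print };
    struct Instr { Op op; int arg; };

    void interpret(const std::vector<Instr>& program) {
        std::vector<int> stack;
        std::size_t pc = 0;                    // index of the "next" instruction
        while (pc < program.size()) {
            const Instr& in = program[pc++];   // fetch
            switch (in.op) {                   // decode, then execute:
            case Op::Push:
                stack.push_back(in.arg);
                break;
            case Op::Add: {                    // pop two, push their sum
                int b = stack.back(); stack.pop_back();
                stack.back() += b;
                break;
            }
            case Op::Print:
                std::cout << stack.back() << "\n";
                break;
            }
        }
    }

    int main() {
        // Computes and prints 2 + 3.
        interpret({{Op::Push, 2}, {Op::Push, 3}, {Op::Add, 0}, {Op::Print, 0}});
        return 0;
    }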
1.7.3 Hybrid: Here, a program is translated (by the same means as a compiler) not into machine code but rather into some intermediate language, typically one that is at a level of abstraction strictly between language X and machine code. Then the resulting intermediate code is interpreted. This is the usual way that Java programs are processed, with the intermediate language being Java bytecode (as found in .class files) and the Java Virtual Machine (JVM) acting as the interpreter.
Alternatively, the intermediate code produced by the compiler can itself be compiled into machine code and saved for later use. In a Just-in-Time (JIT) scenario, this latter compilation step is done on a piecemeal basis on each program unit the first time it is needed during execution. (Subsequent uses of that unit result in directly accessing its machine code rather than re-translating it.)
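A schematic C++ sketch of that caching behavior (the names and the compile function are stand-ins; a real JIT generates machine code): each program unit is translated the first time it is called, and the cached translation is reused on later calls.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <unordered_map>

    using Compiled = std::function<void()>;

    // Stand-in for real code generation: "translates" a unit, observably.
    Compiled compile(const std::string& unit) {
        std::cout << "compiling " << unit << "\n";
        return [unit] { std::cout << "running " << unit << "\n"; };
    }

    class Jit {
    public:
        void call(const std::string& unit) {
            auto it = cache_.find(unit);
            if (it == cache_.end())                              // first use:
                it = cache_.emplace(unit, compile(unit)).first;  // translate
            it->second();                                        // reuse thereafter
        }
    private:
        std::unordered_map<std::string, Compiled> cache_;
    };

    int main() {
        Jit jit;
        jit.call("f");  // prints: compiling f / running f
        jit.call("f");  // prints: running f   (cached, not re-translated)
        return 0;
    }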