Optimizing Object Invocation Using Optimistic Incremental Specialization
Computer Science Faculty Publications and Presentations
  • Jon Inouye, Oregon Graduate Institute of Science & Technology
  • Andrew P. Black, Oregon Graduate Institute of Science & Technology
  • Charles Consel, Oregon Graduate Institute of Science & Technology
  • Calton Pu, Oregon Graduate Institute of Science & Technology
  • Jonathan Walpole, Oregon Graduate Institute of Science & Technology
Document Type
Technical Report
Publication Date
1-1-1995
Subjects
  • DCOM (Computer architecture)
  • Object-oriented methods (Computer science)
Abstract

To make object invocation efficient, it is important to minimize overhead. In general, overhead is incurred in order to maintain transparency; with the advent of mobile computer systems, persistence, and growing security and privacy concerns, transparency becomes more expensive and overhead increases. Invocation mechanisms maintain transparency by finding objects, choosing communication media, translating data into common formats (e.g., XDR), marshalling arguments, encrypting confidential data, and so on. Performing all of these operations on every invocation would lead to unacceptable performance, so designers often avoid some of them by specializing object invocation for more restricted environments. For example, the Emerald compiler performs several optimizations when an object is known to be always local: the object is referenced with a location-dependent pointer that saves both space and access time, and the invocation code performs no residency checks. Additionally, if the concrete type of the implementation is known, operations on the object can be in-lined. Unfortunately, if the object cannot be guaranteed to be local at compile time, the Emerald compiler cannot perform any of these optimizations.
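
The trade-off described above can be illustrated with a minimal C++ sketch (hypothetical names; Emerald itself is not C++). The general path pays for a residency check on every call, while the specialized path assumes the object is local and calls it directly, so the check can be eliminated and the call in-lined.

    #include <cstdio>

    struct Object {
        bool resident = true;          // is the object in this address space?
        void localOp() { std::puts("local operation"); }
    };

    // Hypothetical stand-in for forwarding a call to a remote node.
    void remoteInvoke(Object&) { std::puts("forwarded to a remote node"); }

    // General invocation: a residency check on every call.
    void invokeGeneral(Object& o) {
        if (o.resident)
            o.localOp();
        else
            remoteInvoke(o);
    }

    // Specialized invocation: valid only if the object is known to be local,
    // so the check disappears and, with a known concrete type, the call
    // itself could be in-lined by the compiler.
    void invokeLocal(Object& o) {
        o.localOp();
    }

    int main() {
        Object o;
        invokeGeneral(o);   // pays for the check
        invokeLocal(o);     // assumes locality, no check
    }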

Contemporary distributed object systems reduce overhead by building invocation mechanisms out of multiple modules, each of which provides the functionality needed in a specific situation. Run-time checks are inserted into the invocation path to interpret the situation and select the appropriate module. COOL optimizes local invocation by using the C++ virtual function mechanism to convert remote calls into direct calls and vice versa. On every invocation, COOL implicitly checks the server's location. When the client and server are located in the same address space, the private virtual pointer of the interface object is modified to point directly to the virtual table of the server's class. The problem with this approach is that the invocation interface has to interpret the caller's context in order to choose the appropriate specialization.
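
The following is a simplified C++ sketch of the idea, not COOL's actual implementation: COOL rewrites the interface object's virtual-table pointer in place, which this sketch approximates with an explicit indirection that is rebound when the server becomes co-located. The client's calling code never changes; only the dispatch target does.

    #include <cstdio>

    struct Server {                       // abstract invocation interface
        virtual void op() = 0;
        virtual ~Server() = default;
    };

    struct LocalServer : Server {         // server in the same address space
        void op() override { std::puts("direct call"); }
    };

    struct RemoteStub : Server {          // marshals and ships the call
        void op() override { std::puts("marshalled remote call"); }
    };

    struct Interface {                    // what the client invokes through
        Server* target;
        void op() { target->op(); }       // the implicit check is this indirection
    };

    int main() {
        RemoteStub stub;
        LocalServer local;

        Interface iface{&stub};
        iface.op();                       // goes through the remote path

        iface.target = &local;            // server now co-located: rebinding
        iface.op();                       // converts it into a direct call
    }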

This paper advocates a general technique, called optimistic incremental specialization, that addresses the two limitations mentioned above. First, can we optimize on "invariants" that are not guaranteed? Second, can we use specialized implementations while avoiding run-time checks in the invocation path? Section 2 describes optimistic incremental specialization, and section 3 discusses our current status and open issues. We review related research in section 4 and summarize in section 5.

Description

An Oregon Graduate Institute of Science & Technology position paper. No date appears on the resource itself, but it appears to have been published in 1995.

Persistent Identifier
http://archives.pdx.edu/ds/psu/10567
Citation Information
Jon Inouye, Andrew Black, Charles Consel, Calton Pu and Jonathan Walpole, "Optimizing Object Invocation Using Optimistic Incremental Specialization," Oregon Graduate Institute of Science & Technology. [1995]