   DIMACS Workshop on Trust Management in Networks
   September 30 - October 2, 1996
   South Plainfield, NJ

   Abstracts of Papers. Distributed at the Conference.

   _________________________________________________________

   Presented September 30, 1996
   _________________________________________________________

   Let A Thousand (Ten Thousand?) CAs Reign

   Stephen Kent, BBN Corporation


   Abstract: Early models of formal (e.g., as opposed to the
   informal PGP-style) certification systems often embodied
   a notion that a single certificate could be issued to a
   user to represent that user in a multitude of
   interactions with many different services. However,
   establishing certification authorities (CAs) that try to
   satisfy many different requirements has proven difficult.
   A company operating a generic CA service must balance
   liability concerns, acceptable cost models, levels of
   authentication assurance, and name space issues.

   Another approach to certification, motivated by the
   observation that individuals have many existing
   relationships with various organizations, is gaining
   popularity. This approach leverages existing databases
   maintained by organizations to track employees, customers,
   members, etc. The identities that form the keys to these
   databases are typically account numbers, name forms that
   have only local significance. Certificates issued by
   organizations not for general use, but focused on a
   specific application, avoid many of the problems facing
   generic CAs. For example, liability can be well
   understood because the certificate is bounded in its use.
   The level of assurance for authentication is determined
   solely by the issuer, in the context of the application,
   and the issuer's database provides data associated with
   the subject that may be used to support online
   registration with fairly high levels of assurance. Naming
   problems disappear because the subjects are already
   assigned names (of only local significance) in the
   issuer's database.

   This model of certification is not a panacea; it would
   not be ideal for applications such as global e-mail and
   it certainly is not designed for distributed system
   environments (e.g., DCE or CORBA). It is best suited to
   certificates issued to individuals for user-organization
   interactions, as opposed to certificates issued to
   organizations for inter-organization interactions.
   However, many of the relationships (affiliations) that
   characterize everyday life do fit this model nicely.
   Moreover, with simple web browser features, the problems
   of selecting the right certificate for a specific
   interaction can be made automatic, so that users are not
   burdened by the plethora of certificates that would
   result from this model.

   X.509 version 3 certificates are well suited to implement
   this model of certification, making use of the "standard"
   extensions, e.g., the General Name forms supported by the
   Issuer and Subject Alternative Name extensions. Moreover,
   the flexibility provided by many other X.509 v3
   extensions facilitates controlled cross-certification, in
   those instances where it is appropriate.
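A minimal sketch of the bounded-use certificate the abstract describes, in Python. This is a toy data model, not an ASN.1 encoding; the issuer, account number, and application names are invented for illustration. The point is that the subject is named by an identifier meaningful only in the issuer's database (as the Subject Alternative Name extension permits), and the certificate is honored only by the one application it was issued for, which is what bounds liability.

```python
from dataclasses import dataclass

@dataclass
class AppCertificate:
    issuer: str            # the organization operating the service
    subject_alt_name: str  # local identifier, e.g. an account number
    application: str       # the single application the cert is bound to

    def valid_for(self, application: str) -> bool:
        # A bounded-use certificate is honored only by the
        # application it was issued for.
        return self.application == application

cert = AppCertificate(issuer="ExampleBank",
                      subject_alt_name="acct-1842",
                      application="home-banking")
print(cert.valid_for("home-banking"))  # True
print(cert.valid_for("global-email"))  # False
```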

   For more information, contact kent@bbn.com

   _________________________________________________________

   The PolicyMaker Approach to Trust Management

   Matt Blaze, Joan Feigenbaum, and Jack Lacy, AT&T
   Laboratories


   Abstract: In a recent paper [BFL], we argue that the
   "trust management problem" is a distinct and important
   component in the design of network services. For example,
   the use of public-key cryptography on a mass-market scale
   requires sophisticated mechanisms for managing trust. Any
   application that receives a signed request for action is
   forced to answer the central question "Is the key used to
   sign this request authorized to take this action?" In
   certain applications, this question reduces to "Does this
   key belong to this person?" In others, the authorization
   question is considerably more complicated, and resolving
   it requires techniques for formulating security policies
   and security credentials, determining whether particular
   sets of credentials satisfy the relevant policies, and
   explicitly placing trust in third parties that must issue
   credentials and author policies.

   In this talk, we will explain our general approach to the
   problem and our "trust management system," called
   PolicyMaker.

   Key ideas that inform our approach include:

   Unified mechanism: Policies, credentials, and trust
   relationships are expressed as programs in a simple
   programming language. Existing systems are forced to
   treat these concepts separately. By providing a common
   language for policies, credentials, and relationships, we
   make it possible for diverse network applications to
   handle trust management in a comprehensive and largely
   transparent manner.

   Separation of mechanism from policy: The mechanism for
   verifying credentials does not depend on the credentials
   themselves or the semantics of the applications that use
   them. This allows many different applications with widely
   varying policy requirements to share a single certificate
   verification infrastructure.

   Flexibility: Our system is expressively rich enough to
   support the complex trust relationships that can occur in
   the very large-scale network applications currently being
   developed. At the same time, simple and standard
   policies, credentials, and relationships can be expressed
   succinctly and comprehensibly. In particular, PGP and
   X.509 "certificates" need only trivial modifications to
   be usable in our framework.

   Locality of control: Each party in the network can decide
   in each circumstance whether to accept the credentials
   presented by a second party or, alternatively, on which
   third party it should rely for the appropriate
   "certificate."
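The unified-mechanism idea above can be sketched in a few lines of Python. Here both the local policy and the credential are small programs (predicates over a request); the "engine" simply asks whether a credential from a trusted issuer approves the request. All field names and the trusted-issuer set are invented for illustration; real PolicyMaker expresses these predicates in a safe programming language.

```python
def policy(request, credentials):
    # Local policy: accept if any credential issued by a key we
    # directly trust approves this request.
    trusted_issuers = {"key-alice"}
    return any(c["issuer"] in trusted_issuers and c["approves"](request)
               for c in credentials)

# A credential is a program too: it decides which requests it approves.
cred = {"issuer": "key-alice",
        "approves": lambda req: req["action"] == "sign-purchase-order"
                                and req["amount"] <= 500}

print(policy({"action": "sign-purchase-order", "amount": 200}, [cred]))  # True
print(policy({"action": "sign-purchase-order", "amount": 900}, [cred]))  # False
```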

   PolicyMaker is now being used to manage trust in several
   applications, including e-mail, electronic licensing, and
   Internet content-labelling.

   [BFL] M. Blaze, J. Feigenbaum, and J. Lacy,
   "Decentralized Trust Management," IEEE Symposium on
   Security and Privacy, Oakland, CA, May 1996.

   More information: mab@research.att.com,
   jg@research.att.com, lacy@research.att.com

   _________________________________________________________

   SDSI -- A Simple Distributed Security Infrastructure

   Butler Lampson and Ron Rivest, Microsoft and MIT


   Abstract: We propose a new, distributed security
   infrastructure called SDSI (pronounced "sudsy"). SDSI
   combines a simple public-key infrastructure design with a
   means of defining groups and issuing group membership
   certificates. SDSI's groups provide simple, clear
   terminology for defining access-control lists and
   security policies. SDSI's design emphasizes linked local
   name spaces rather than a hierarchical global name space,
   though it gracefully accommodates common roots such as
   DNS.

   A key can delegate to a group the authority to sign
   certificates on behalf of the key. The delegation can be
   limited to certificates that match a template.
   Certificates can time out, and they can be reconfirmed by
   an on-line agent acting for the issuer.

   SDSI is optimized for an on-line environment in which
   clients can interact with servers to learn what
   credentials are needed to satisfy a request, and can
   retrieve the needed credentials from other servers. In
   this environment the system is auto-configuring: there is
   no need to preload either clients or servers with
   anything other than their private keys and the
   definitions of their local name spaces.
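The linked-local-name-space idea can be sketched as follows: every principal defines names only in its own namespace, and a compound name is resolved by chaining through those definitions. The namespaces and key identifiers below are invented for illustration.

```python
# Each key's local namespace maps a local name to another key.
namespaces = {
    "key-self": {"mit": "key-mit"},
    "key-mit":  {"lcs": "key-lcs"},
    "key-lcs":  {"rivest": "key-rivest"},
}

def resolve(start_key, names):
    """Resolve a chain of local names, e.g. self's mit's lcs's rivest."""
    key = start_key
    for name in names:
        key = namespaces[key][name]
    return key

print(resolve("key-self", ["mit", "lcs", "rivest"]))  # key-rivest
```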

   For more information, see:
   http://theory.lcs.mit.edu/~rivest/sdsi.ps
   version 1.0 frozen
   version 1.1 under development

   _________________________________________________________

   SPKI Certificates

   Carl Ellison, Cybercash


   Abstract: The Simple Public Key Infrastructure [SPKI]
   group has come up with a proposed certificate structure
   to fit the charter of the group. From the message
   launching the SPKI group on 22 Feb 1996...

   According to the proposed charter, the SPKI group will:

   "Develop Internet standards for an IETF sponsored public
   key certificate format, associated signature and other
   formats, and key acquisition protocols. The key
   certificate format and associated protocols are to be
   simple to understand, implement, and use."

   An SPKI Certificate addresses the primary needs of
   developers with as few options and as little overhead as
   possible. Those needs are, in order as far as the SPKI
   group has been able to determine:

   1) to know with assurance if a public key is authorized
   to perform some action, e.g.: a) access a system, b)
   spend money from a given account, c) write a purchase
   order, d) sign a contract binding a company, e) pay
   taxes, f) vote, etc.

   2) to grant or revoke authorizations for named groups of
   keyholders

   3) to delegate authority to temporary keys [allowing for
   the possibility that long-lived public keys are not made
   generally available]

   4) to associate a public key with a name -- most
   especially, for individual mail privacy and standard ACL
   uses, using a local name meaningful to the person sending
   the e-mail or writing the ACL. These names should be
   thought of as nicknames or SDSI names, rather than global
   names. [We do not believe there is, can or should be any
   such thing as a global name.]

   5) to disseminate useful information, secure from
   tampering (e.g., one's own preferred e-mail address or
   name)

   An SPKI certificate does not directly address the
   specification of complex policies as in PolicyMaker or
   the distribution of keys and certificates as in DNSSEC.
   We intend to rely on that work if possible rather than
   re-create it.

   The SPKI group sees items (1) through (3) as the
   predominant need for public key trust certification and
   therefore we label SPKI certificates "Authorization
   Certificates" to distinguish them from Identity
   Certificates (such as X.509). However, one form of
   certificate, (4), includes identity certification as a
   subset. [As Rivest and Lampson point out, a commercial CA
   is one owner of a local namespace and can issue SDSI
   names just as anyone else can. It can therefore issue
   SPKI certificates as well.]

   An SPKI certificate is limited to the following fields:

   Subject:, Issuer:,

   and a signature on that body granting a specific
   authorization -- and it is indefinitely extensible to
   include any authorization (or name specification,
   PolicyMaker statement, or S-expression) of interest to a
   user of certificates. Extensions do not require
   interaction with any standards body.

   The current draft specification is available at
   http://www.clark.net/pub/cme/spki.txt or
   ftp://ftp.clark.net/pub/cme/spki.txt and that draft goes
   into significant detail on kinds of authorization as well
   as a process for reducing chains or meshes of
   certificates (possibly of mixed format (X.509, SDSI,
   PolicyMaker, SPKI)) into a signed certificate result
   which is, itself, an SPKI certificate. That draft also
   specifies the simple binary encoding for SPKI
   certificates and a short summary of the reasoning behind
   the rejection of ASN.1 and X.509.
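The chain-reduction process mentioned above can be sketched as follows: a certificate from A to B and one from B to C reduce to a single certificate from A to C whose authorization is the intersection of the two grants. The tuple layout and authorization sets here are an illustrative simplification of the draft's format.

```python
def reduce_pair(c1, c2):
    issuer1, subject1, auth1 = c1
    issuer2, subject2, auth2 = c2
    assert subject1 == issuer2, "certificates do not chain"
    # The combined grant carries no more authority than either link.
    return (issuer1, subject2, auth1 & auth2)

a_to_b = ("key-A", "key-B", {"read", "write", "spend"})
b_to_c = ("key-B", "key-C", {"read", "spend"})

issuer, subject, auth = reduce_pair(a_to_b, b_to_c)
print(issuer, subject, sorted(auth))  # key-A key-C ['read', 'spend']
```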

   For more information, contact cme@cybercash.com

   SPKI mailing list: SPKI at majordomo@c2.org

   _________________________________________________________

   Scheduled for October 1, 1996
   _________________________________________________________

   Using PICS Labels for Trust Management

   Rohit Khare


   Abstract: As Web and Internet usage expands into new
   application domains, users need automatable mechanisms to
   establish trust for information they use. The Platform
   for Internet Content Selection (PICS) is a scheme for
   rating and labeling resources that is machine-readable
   and can accommodate a wide variety of rating schemes.
   When combined with digital signatures to establish
   cryptographic authentication, PICS labels could form the
   basis for user-definable trust policies on the Internet.

   PICS allows rating systems to define scales for
   describing content, and for many rating services to label
   resources with their evaluations. This allows labels to
   be provided by authors or by third parties and to be
   presented with the content or from separate label
   bureaus. User agents can dynamically construct user
   interfaces to represent labels and constraints on
   acceptable ratings. When the resulting decisions are
   broadened from "show/don't show this page to the user",
   one can imagine:

      "execute any code from SoftwarePublisher, Inc."

       "execute any code above 3/5 on the InfoWeek quality
      scale"

      "trust any identity certificate above Class 2 from
      VeriCert"

      "highlight documents labelled 'true' by their signers"
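The rules above can be sketched as predicates over (signed) PICS labels. The label fields and rating-service names below are invented for illustration; real PICS labels use a richer machine-readable syntax.

```python
def accept(label, rules):
    # A label is accepted if any rule in the user's policy admits it.
    return any(rule(label) for rule in rules)

rules = [
    # "execute any code above 3/5 on the InfoWeek quality scale"
    lambda l: l["service"] == "InfoWeek" and l["quality"] > 3,
    # "trust any identity certificate above Class 2 from VeriCert"
    lambda l: l["service"] == "VeriCert" and l["cert_class"] > 2,
]

print(accept({"service": "InfoWeek", "quality": 4}, rules))     # True
print(accept({"service": "VeriCert", "cert_class": 1}, rules))  # False
```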

   We present this system in the context of several
   near-term industrial scenarios: evaluating and executing
   programs ("applets"), configuring acceptable
   certification authorities, and distributing signed
   documents. In each case, PICS offers a flexible,
   user-configurable mechanism for specific trust management
   applications.

   Open issues to be discussed include:

      Interaction with Public Key Infrastructures

      Cryptographic formats and capabilities

      Evolution of PICS rating syntax (currently rational
      numbers)

      Embedding PICS labels within certificates (X.509,
      SDSI)

   This talk is based on work done at the World Wide Web
   Consortium with its Digital Signature Initiative Group
   and Security Editorial Review Board.

   For more information, contact khare@w3.org.

   _________________________________________________________

   Managing Trust in an Information-Labeling System

   M. Blaze (1), J. Feigenbaum (1), P. Resnick (1), M.
   Strauss (2),
   1. AT&T Laboratories
   2. AT&T Laboratories and Iowa State University


   Abstract: Rapid growth in the Internet has focused
   attention on a problem common to every medium that serves
   large and diverse audiences: Not all material is suitable
   for all audience members. Traditionally, broadcast media
   such as television and radio have been subject to more
   restrictions than print media, for exactly this reason.
   The PICS information-labeling system [RM] provides a
   flexible approach to filtering information at the point
   of reception rather than at the point of distribution,
   thus holding out the possibility of avoiding government
   censorship in the process of controlling access to
   information on the Internet. The success of PICS
   (Platform for Internet Content Selection) as an approach
   to access control requires a mechanism for trust
   management.

   The PICS approach stipulates that documents will have
   labels, formatted in a uniform way specified in the PICS
   standard, that describe relevant aspects of their
   contents. For example, the RSAC (Recreational Software
   Advisory Council) scheme assigns four numbers to a
   document, in an attempt to indicate how much sex, nudity,
   violence, or potentially offensive language the document
   contains. Other schemes may label documents according to
   entirely different criteria, e.g., whether they contain
   material in specific topical areas. PICS-compliant client
   software will examine the label(s) on a document and
   decide whether the document satisfies all of the
   requirements specified in the local "PICS profile" or
   access policy. For example, a profile might state
   "Viewing is allowed if the document is labeled two or
   less on the violence scale, and the label is certified as
   accurate by Good Housekeeping." Crucial trust management
   questions thus include "how are access policies
   expressed?" and "whom does a given recipient trust to
   label documents?"
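The example profile in the paragraph above can be sketched directly: viewing is allowed if some label rates the document two or less on the violence scale and that label comes from a trusted certifier. The label format is invented for illustration.

```python
def allowed(labels, trusted_labeler="Good Housekeeping"):
    # Viewing is allowed if a trusted labeler rates violence <= 2.
    return any(l["labeler"] == trusted_labeler and l["violence"] <= 2
               for l in labels)

doc_labels = [{"labeler": "Good Housekeeping", "violence": 1}]
print(allowed(doc_labels))                                       # True
print(allowed([{"labeler": "Unknown Ratings", "violence": 0}]))  # False
```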

   This paper shows how to use the PolicyMaker system [BFL]
   to solve the trust management problem in PICS. Although
   PolicyMaker was originally designed to address trust
   management problems in network services that process
   signed requests for action and use public-key
   cryptography, it is applicable mutatis mutandis to the
   trust management problem for information-labeling. The
   question "is the key used to sign this request authorized
   to take this action?" corresponds to "do the labels on
   this document satisfy the viewing requirements of this
   viewer?" Similarly, the local policy that "I trust this
   certifying authority to authorize keys for this category
   of actions" is the analog of "I trust this rating
   authority to label documents in this category." This
   unforeseen use of the PolicyMaker framework is evidence
   of the framework's power and adaptability.

   [BFL] M. Blaze, J. Feigenbaum, and J. Lacy,
   "Decentralized Trust Management," IEEE Symposium on
   Security and Privacy, Oakland CA, May 1996.

   [RM] P. Resnick and J. Miller, "PICS: Internet Access
   Controls Without Censorship," Communications of the ACM,
   October 1996.

   For more information: mstrauss@cs.iastate.edu

   _________________________________________________________

   Trust Management In Web Browsers, Present and Future

   Drew Dean, Edward W. Felten, and Dan Wallach, Princeton
   University


   Abstract: This talk will discuss how trust relationships
   are modeled and managed in the two most popular Web
   browsers, Netscape Navigator and Microsoft Internet
   Explorer. There are two competing trust models, the
   shrink-wrap model and the Java model, each of which is
   supported by both browsers. We will compare the strengths
   and weaknesses of the models, and describe how each must
   evolve in order to survive. Finally, we will predict what
   kind of trust model will be supported by future browsers.

   For more information, contact felten@cs.princeton.edu.

   _________________________________________________________

   IBM Cryptolopes, SuperDistribution and Digital Rights
   Management

   Marc A. Kaplan, IBM


   Abstract: We will present and discuss the concept and
   implementation of a method permitting the
   superdistribution, sale, and controlled access of
   digital documents using secure cryptographic envelopes,
   which we call the Cryptolope architecture. The Cryptolope
   architecture integrates encryption, authentication, and
   key management for digital documents ("content") together
   with digital fingerprinting and watermarking and
   rights/royalties management to comprise a secure
   distributed system for managing digital content.

   Cryptolope technology comprises a method for the
   controlled access to broadcast information using
   cryptographic techniques including public and secret key
   cryptography, cryptographic hashes, and digital
   fingerprints.

   Trust Relationships, Roles

   Multiple, non-symmetric trust relationships exist among
   the entities playing several "roles" within a
   superdistribution system based on Cryptolopes.

   The major roles are those of the publisher, the consumer,
   and the royalty clearing center (RCC). Ancillary roles
   are those of the local distributor, and the local DFWM
   (decryption, fingerprinting and watermarking) agent. Of
   course, there may be many entities each playing one or
   more roles.

   A publisher packages and distributes Cryptolopes which
   comprise (encrypted) "content" along with licensing terms
   and conditions (T&Cs) and fingerprinting/watermarking
   instructions. The encryption keys are logically
   "escrowed" to one or more royalty clearing centers.

   A consumer who wishes to "open" a Cryptolope engages in a
   transaction with a royalty clearing center (RCC) which
   collects licensing or usage fees from the consumer
   according to the T&Cs specified within the Cryptolope,
   and in exchange the RCC releases the decryption keys to a
   DFWM. For purposes of scalability and accessibility a
   consumer and an RCC may interact via a local distributor.
   The DFWM in turn, releases decrypted but fingerprinted
   and/or watermarked content to the consumer.

   Some of the "trust relationships" in this system are:

   The RCC, consumer, distributor, and DFWM must trust the
   integrity and authenticity of a Cryptolope, as attested
   to by the presence of a trusted Publisher's (digital)
   signature on the (table of contents of the) Cryptolope.

   The publisher must trust the RCC and its agents (DFWMs
   and distributors) to enforce the T&Cs, including
   collection and payment of royalties to the proper
   intellectual property rights holders, and to never
   purposely divulge the decryption keys nor unmarked
   content to a consumer.

   The consumer must trust the RCC (and its agents) to
   reveal the (decrypted) contents of a Cryptolope, in
   exchange for the royalty payments.

   During each transaction, each party must trust the
   identity of the other. For example, the RCC and the
   consumer must "authenticate" each other - so that: the
   consumer is assured she is dealing with an RCC that is
   authorized to collect royalties on behalf of the
   publisher of the Cryptolope; the RCC is assured that the
   consumer has access rights to, and/or sufficient funds
   for, the contents of the Cryptolope.

   Certificates and Credentials

   We use conventional (RSA) public key signature technology
   to establish the integrity and authenticity of
   Cryptolopes. Public key certificates are used to
   "guarantee" signatures. Certificates may be carried by or
   pointed to (via URLs) by the Cryptolope.

   A consumer may hold "digital credentials," which are
   signed digital records attesting to her memberships,
   affiliations or subscriptions. The RCC, when enforcing
   the T&Cs of a Cryptolope, takes the credentials of the
   consumer into account to check whether the consumer is
   allowed to read the contents, and if so, what is the
   correct royalty rate. E.g., a member or subscriber may be
   offered a special discount or access privileges.

   Key Management with a Lattice of Trust

   The many cryptographic keys used for both encrypting the
   actual contents of Cryptolopes and for encrypting
   (escrowing) encryption keys can be viewed as existing in
   a "lattice" of keys.

   A "lowest" level-0 or "leaf" key in the lattice can
   decrypt only a single document or page of content. A
   level-1 key may control access to all the contents of a
   single cryptolope. A level-2 key may control access to a
   set of cryptolopes. In general, a higher level key can be
   used to decrypt several (escrowed) lower level keys and
   thus gain access to many documents.
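The lattice described above can be sketched with a toy key-wrapping scheme: a higher-level key "wraps" the escrowed keys beneath it, so holding one level-2 key yields many level-1 and level-0 keys. The XOR-with-hash-pad wrap below is a stand-in for real encryption, and all key values are invented for illustration.

```python
import hashlib

def wrap(kek: bytes, key: bytes) -> bytes:
    # Toy wrap: XOR the key with a pad derived from the wrapping key.
    pad = hashlib.sha256(kek).digest()[:len(key)]
    return bytes(a ^ b for a, b in zip(pad, key))

unwrap = wrap  # XOR wrapping is its own inverse

level2 = b"publisher-master"   # held by the RCC
level1 = b"cryptolope-key-7"   # unlocks one Cryptolope
level0 = b"page-3-key------"   # unlocks one document/page

escrow = {
    "level1": wrap(level2, level1),  # only level-2 holders recover this
    "level0": wrap(level1, level0),  # only level-1 holders recover this
}

k1 = unwrap(level2, escrow["level1"])
k0 = unwrap(k1, escrow["level0"])
print(k0 == level0)  # True
```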

   Typically, a DFWM is only trusted with access to level-0
   and level-1 keys: a distributor is trusted only with
   access to keys that decrypt the Cryptolopes with which it
   deals; an RCC is trusted with access to keys that can be
   used to decrypt all the documents of all the publishers
   for which it is an authorized royalty collector.

   Key-encrypting-keys can be either public (RSA) keys or
   symmetric secret (e.g., 3-DES) keys. Using the public key
   of an RCC, a publisher can effectively escrow its
   Cryptolope key-encrypting-keys with an RCC without
   exchanging "secrets" - indeed, a publisher need not
   engage in any communication with an RCC prior to
   issuing a Cryptolope.

   A publisher can escrow its Cryptolope keys with multiple
   RCCs, each RCC using a different public key. It is this
   scenario which leads to a "broad" lattice of keys that
   has no single highest level key.

   For more information, contact kaplan@watson.ibm.com

   _________________________________________________________

   Requirements and Approaches for Electronic Licenses

   David Maher, AT&T Laboratories


   Abstract: Electronic Licensing schemes have been around
   for several years. They have mostly been restricted to
   administration of software licenses over corporate local
   area networks. As a much wider variety of intellectual
   property ("IP" or "content") is distributed over public
   wide area networks such as the Internet, it appears that
   new schemes are needed to allow maintenance and control
   of IP rights and to support the economic value of the
   content. Current schemes for content distribution from
   IBM, EPR, AT&T and others all use a secure container
   paradigm whereby content is cryptographically placed in a
   secure container that can be freely distributed, while
   the access key is independently provided to those wishing
   to use the IP, often after they pay. There remains the
   question of how the access key is distributed, and how to
   control use of the IP by those who have access to the
   key.

   Whereas E-cash has been said to involve a marriage
   between Cryptography and Economics, Electronic Licensing
   involves a love-triangle comprising Cryptography,
   Marketing, and Economics. This is due to the fact that
   distribution of goods and services in a modern economy
   involves concepts of product bundling, multi-channel and
   multi-tier distribution, leasing, subscriptions,
   amalgamation, discounting, sampling, promotional offers,
   etc. Electronic distribution over networks will allow
   even more creative schemes to be devised, and therefore
   licensing approaches that overly constrain the freedom of
   marketing and the economic power of open markets will
   destroy the balance of this threesome.

   Recognizing this, we examine various licensing schemes
   and show why we favor schemes that require a highly
   distributed trust model. We show that a licensing scheme
   with such a trust model can be implemented using the
   PolicyMaker system of Blaze, Feigenbaum, and Lacy. We
   also discuss related issues of license enforcement and
   payment systems.

   _________________________________________________________

   PathServer

   Michael Reiter and Stuart Stubblebine, AT&T Laboratories


   Abstract: Authenticating the source of a message in a
   large distributed system can be difficult due to the lack
   of a single authority that can tell for whom a channel
   speaks. This has led many to propose the use of a path of
   authorities, each able to authenticate the next, such
   that the first authority in the path can be authenticated
   by the message recipient and the last authority in the
   path can authenticate the message source. In this talk we
   suggest the use of multiple such paths to bolster
   assurance in the authentication of the message source,
   and explore properties of those paths that strengthen
   authentication. We demonstrate this approach with
   PathServer, a web-based service for locating paths from a
   trusted key to a query key in the PGP framework. We
   describe the challenges in building PathServer,
   experience with its usage, and ongoing work.
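The multiple-path idea can be sketched as a search over a graph of "key A has signed key B" edges: independent paths from a trusted key to the query key give stronger assurance than any single path. The signature graph below is invented for illustration.

```python
def find_paths(graph, start, goal, path=None):
    """Enumerate all simple certification paths from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # keep paths simple (no repeated keys)
            paths.extend(find_paths(graph, nxt, goal, path))
    return paths

signatures = {
    "me":    ["alice", "bob"],
    "alice": ["target"],
    "bob":   ["carol"],
    "carol": ["target"],
}

# Two paths through disjoint intermediaries bolster authentication.
for p in find_paths(signatures, "me", "target"):
    print(" -> ".join(p))
```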

   For more information, see
   http://www.research.att.com/~reiter/PathServer.

   _________________________________________________________

   Inferno Security

   David Presotto, Bell Labs, Lucent Technologies


   Abstract: As telecommunications, entertainment, and
   computing networks merge, a wide variety of services will
   be offered on a diverse array of hardware, software, and
   networks. Inferno provides a uniform execution
   environment for applications and services in this chaotic
   world. Inferno comprises a networked operating system
   that can run native or above a commercial operating
   system, a virtual machine, a programming language,
   protocols, and standard interfaces for networks,
   graphics, and other system services. This talk describes
   both the security features currently in Inferno and those
   we intend to move to.

   Inferno currently uses public key cryptography only for
   authentication. The Station to Station protocol (STS)
   using ElGamal certificates provides mutual authentication
   between parties. Authentication also yields a mutually
   held secret that can be used to encrypt the conversation
   or to add a cryptographic hash to each message sent.
   Rather than reinvent the wheel, we use the same line
   format as SSL.
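A toy Diffie-Hellman exchange of the kind STS builds on: both sides derive the same secret, which can then encrypt or MAC the conversation. STS additionally signs the exchanged values to authenticate the parties; that step is omitted here, and the small prime is for illustration only, offering no real security.

```python
import secrets

p = 2**61 - 1  # a Mersenne prime; far too small for real use
g = 2          # generator (illustrative choice)

a = secrets.randbelow(p - 2) + 1  # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 1  # Bob's ephemeral secret

A = pow(g, a, p)  # Alice sends A to Bob
B = pow(g, b, p)  # Bob sends B to Alice

# Each side combines its own secret with the other's public value.
k_alice = pow(B, a, p)
k_bob   = pow(A, b, p)
print(k_alice == k_bob)  # True: a mutually held secret
```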

   Two methods are used for certificate creation: a one time
   registration procedure and a login procedure. The
   registration procedure requires a conversation between
   the CA and user during each registration. The login
   procedure requires one only when a password is assigned.
   Login uses a Bellovin-like encrypted key exchange.

   Our trust relations are currently too simplistic;
   communicating parties must have keys signed by the same
   certifying authority. There are no attributes attached to
   certificates. This is sufficient for authentication but
   not for anything more advanced such as signing code,
   passing trust to third parties, etc. We are currently
   trying to build extensible certificates in the same vein
   as PolicyMaker and SDSI so that we can embed more
   semantics into them and reason on it.

   For more information, see http://inferno.lucent.com/

   _________________________________________________________

   Transparent Internet E-mail Security

   Raph Levien, Lewis McCarthy, and Matt Blaze, AT&T
   Laboratories


   Abstract: This paper describes the design and prototype
   implementation of a comprehensive system for securing
   Internet e-mail transparently, so that the only user
   intervention required is the initial setup and
   specification of a trust policy. Our system uses the
   PolicyMaker trust management engine [BFL] for evaluating
   the trustworthiness of keys, in particular whether the
   given binding between key and name is valid. In this
   approach, user policies and credentials are written as
   predicates in a safe programming language. These
   predicates can examine the graph of trust relationships
   among all the credentials presented. Thus, credentials
   can express higher-order policies that depend upon global
   properties of the trust graph or that impose specific
   conditions under which keys are considered trusted.
   "Standard" certificates, such as PGP and X.509, are
   automatically translated into simple PolicyMaker
   credentials that indicate that the certifier trusts a
   binding between a key and a name and address, and
   certifiers can also issue more sophisticated credentials
   written directly in the PolicyMaker language.
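One such higher-order policy over the trust graph can be sketched as follows: a key/name binding is accepted only when a quorum of distinct, trusted certifiers vouch for it. The credential fields and certifier names are invented for illustration.

```python
def binding_trusted(key, name, credentials, trusted_certifiers, quorum=2):
    # Count distinct trusted certifiers vouching for this binding.
    vouchers = {c["certifier"] for c in credentials
                if c["key"] == key and c["name"] == name
                and c["certifier"] in trusted_certifiers}
    return len(vouchers) >= quorum

creds = [
    {"certifier": "ca-1", "key": "K", "name": "alice@example.com"},
    {"certifier": "ca-2", "key": "K", "name": "alice@example.com"},
]
trusted = {"ca-1", "ca-2"}
print(binding_trusted("K", "alice@example.com", creds, trusted))      # True
print(binding_trusted("K", "alice@example.com", creds[:1], trusted))  # False
```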

   Our system does not assume any particular public key,
   certificate, or message format. Our prototype
   implementation, which runs under most versions of Unix,
   accepts PGP key certificates as well as our own
   credentials, and uses standard PGP message formats. Thus,
   our system interoperates with the existing infrastructure
   of secure e-mail applications while providing additional
   flexibility at those sites where the system is used. We
   plan also to support SMIME and other message formats,
   X.509 certificates, and Win32-based platforms.

   [BFL] M. Blaze, J. Feigenbaum, and J. Lacy,
   "Decentralized Trust Management," IEEE Symposium on
   Security and Privacy, Oakland CA, May 1996.

   For more information, contact raph@cs.berkeley.edu,
   lmccarth@cs.umass.edu, or mab@research.att.com

   _________________________________________________________

   Cryptographically Secure Digital Time-Stamping to Support
   Trust Management

   Stuart Haber and Scott Stornetta, Bellcore and Surety
   Technologies (respectively)


   Abstract: A good algorithm was recently proposed for the
   problem of cryptographically secure digital time-stamping
   [reference below]. Users of this scheme can certify their
   digital documents, computing for any particular document
   a concise time-stamp certificate. Later, any user of the
   system can validate a document-certificate pair,
   verifying that the document existed in exactly its
   current form at the time asserted in the certificate. The
   scheme depends for its security on the use of one-way
   hash functions and on the reliable availability of
   certain hash values. Significantly, there is no
   requirement that an agent be trusted or that a
   cryptographic key be kept secret.
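
   The certify/validate interface can be sketched as follows.
   Note this linear chaining is a simplification of our own:
   the actual scheme [reference below] uses tree-based linking
   and wide publication of witness hash values, which is what
   removes the need for a trusted agent.

   ```python
   import hashlib

   def h(data: bytes) -> bytes:
       return hashlib.sha256(data).digest()

   def timestamp(document: bytes, prev_link: bytes, time: str):
       """Issue a concise certificate chaining the document's hash into
       a hash chain whose head values are assumed widely witnessed."""
       link = h(h(document) + prev_link + time.encode())
       return {"time": time, "prev": prev_link, "link": link}

   def validate(document: bytes, cert) -> bool:
       """Verify the document existed in exactly this form at cert time."""
       expected = h(h(document) + cert["prev"] + cert["time"].encode())
       return expected == cert["link"]

   cert = timestamp(b"contract text", b"\x00" * 32, "1996-09-30")
   ok = validate(b"contract text", cert)   # True: document unchanged
   ```

   Security rests only on the one-way hash function and on the
   availability of the linking values; no key stays secret.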

   Most digital signature systems include, as part of the
   procedure for validating a document and its signature, a
   mechanism for verifying some properties of the signer's
   public key. Typically, this involves the validation of
   another digital signature on an assertion that these
   properties hold during a specified period of validity.
   Therefore, the validator needs to be able to check that
   the signature was computed during this period. We propose
   that the easiest way to do this, especially for long-
   lived documents, is to accompany the document and its
   signature by a time-stamp certificate for the
   document-signature pair, computed immediately after the
   signature is computed, and to include the validation of
   this certificate as part of the validation of the
   signature. This would allow, for example, the continued
   attribution of trustworthiness to a particular RSA
   digital signature, even if a significant later advance in
   factoring algorithms made the signer's choice of
   key-length completely insecure for the computation of new
   signatures.

   But what about advances in attacking one-way hash
   functions? In fact, time-stamp certificates can be
   renewed so as to remain valid indefinitely -- as long as
   the maintainers of a secure digital time-stamping service
   keep abreast of the state of the art in constructing and
   in attacking cryptographic hash functions. The renewing
   process works as follows. Suppose that c is a valid
   time-stamp certificate, in the current system, for a
   document x. Further suppose that a new time-stamping
   system is implemented, for example by replacing the hash
   function used in the old system. Now let c' be the
   new-system time-stamp certificate for the compound
   time-stamp request (x, c). Even if the old system is
   compromised at a definite later date, the new certificate
   c' provides trustworthy evidence that x existed at the
   time stated in the original certificate.
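
   The renewal step can be sketched concretely. The helper
   functions and hash choices below are our illustration only
   (MD5 standing in for a weakening "old system", SHA-256 for
   the new one); the essential move is re-certifying the
   compound request (x, c) before the old system is broken.

   ```python
   import hashlib, json

   def stamp(data: bytes, hashfn, time: str):
       """Toy certificate: a (time, digest) pair under a hash function."""
       return {"time": time,
               "digest": hashfn(data + time.encode()).hexdigest()}

   def check(data: bytes, cert, hashfn) -> bool:
       return hashfn(data + cert["time"].encode()).hexdigest() == cert["digest"]

   old_hash, new_hash = hashlib.md5, hashlib.sha256  # old vs. new system

   x = b"document"
   c = stamp(x, old_hash, "1993-01-01")              # original certificate

   # Renew: certify the compound request (x, c) under the new system
   # while the old system is still trustworthy.
   compound = x + json.dumps(c, sort_keys=True).encode()
   c_prime = stamp(compound, new_hash, "1996-09-30")
   ```

   Even after the old hash function falls, c' is trustworthy
   evidence that the pair (x, c), and hence x, existed at the
   time c asserts.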

   This digital time-stamping scheme can also be adapted so
   as to assign a succinct, meaningful and cryptographically
   verifiable name or "serial number" to any digital
   document.

   The time-stamping scheme was described in: D. Bayer, S.
   Haber, and W.S. Stornetta, "Improving the efficiency and
   reliability of digital time-stamping." In Sequences II:
   Methods in Communication, Security, and Computer Science,
   ed. R.M. Capocelli, A. De Santis, U. Vaccaro, pp.
   329-334, Springer-Verlag (New York, 1993).

   Commercial implementation from Surety Technologies, a
   Bellcore spinoff.

   More information: stuart@bellcore.com,
   http://www.surety.com

   _________________________________________________________

   Untrusted Third Parties: Key Management for the Prudent

   Mark Lomas and Bruno Crispo, Cambridge University


   Abstract: The "flavour of the month" in
   distributed-system security appears to be TTPs (Trusted
   Third Parties). Bob Morris has described TTPs as "parties
   who can violate your security policy without detection".
   Instead, I prefer to think of parties who may be
   privileged, in the sense that they can perform acts that
   you and I can't do, but whose actions may be audited. I
   call these "Untrusted Third Parties".

   I should perhaps make it clear that a party may be
   trusted but untrustworthy or trustworthy but not trusted.
   The fatal mistake in security system design is to assume
   that the terms "trusted" and "trustworthy" are
   synonymous.

   We have been building untrusted key certification and
   revocation services and an explicit audit policy that
   allows us to determine whether these services have
   misbehaved. Interestingly, such distrust may be of
   benefit not just to the customer, but also to the service
   provider.

   For more information, contact mark.lomas@cl.cam.ac.uk.

   _________________________________________________________

   Distributed Commerce Transactions: Structuring
   Multi-Party Exchanges into Pair-wise Exchanges

   Steven Ketchpel and Hector Garcia-Molina, Stanford
   University


   Abstract: One of the benefits of network commerce is the
   ability to interact with geographically remote parties,
   including those that have no previous history or even
   certainty about the other's identity. In such an
   uncertain world with weak enforcement procedures,
   electronic buyers and sellers may be well-advised to take
   security precautions. In the simplest case, this might
   just be conducting the exchange through an intermediary
   that is trusted by both parties.

   However, as the transaction becomes more complex,
   involving multiple sources and information brokers, it is
   not clear that there will be a single intermediary that
   all will trust. Instead, we assume only that pairs of
   agents who wish to make an exchange can find an
   intermediary which the two of them can trust.
   Consequently, a single multi-party exchange will be
   broken into several two-party exchanges, each using a
   (potentially) different trusted intermediary. The
   ordering in which the pair-wise exchanges are executed is
   critical, since some situations place control of the
   successful completion of the transaction in the hands of
   an untrusted agent. For example, a customer who wishes to
   obtain two documents and finds only their conjunction
   useful may be disappointed to spend half of his money to
   obtain one document, with the second being unavailable.
   Similarly, a broker may purchase a document in order to
   re-sell it to a customer, only to find the customer is no
   longer willing to buy it. Indeed some transactions simply
   cannot be broken down to a sequence of pairwise
   transactions that protects all of the parties involved.

   In [kgm96], we introduce the notion of a distributed
   transaction, which describes the agents involved in a
   transaction, their connectivity, resources, and available
   actions. The goal of performing a riskless distributed
   transaction is to locate a sequence of actions that makes
   use of only pair-wise exchanges through locally trusted
   intermediaries. The action sequence should achieve some
   desired outcome while never causing an agent to run the
   risk of ending in an undesirable state, even if other
   agents deviate from their expected actions.
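
   The broker example above can be rendered as a prefix-safety
   check. The model below is our simplification of [kgm96],
   not its formalism: each pair-wise exchange is an atomic swap
   (via a trusted intermediary), and a sequence is riskless
   only if every prefix leaves every agent in an acceptable
   state, since later exchanges may never occur.

   ```python
   # Toy prefix-safety check for sequences of atomic pair-wise swaps.

   def apply_swap(holdings, swap):
       (a, item_a), (b, item_b) = swap      # a gives item_a, receives item_b
       holdings[a].remove(item_a); holdings[a].add(item_b)
       holdings[b].remove(item_b); holdings[b].add(item_a)

   def riskless(swaps, initial, acceptable):
       """True iff every prefix of `swaps` leaves all agents acceptable."""
       holdings = {agent: set(items) for agent, items in initial.items()}
       for swap in swaps:
           apply_swap(holdings, swap)
           if not all(acceptable(a, holdings[a]) for a in holdings):
               return False
       return True

   def acceptable(agent, held):
       # The broker must never be left holding a document it may
       # fail to resell; others are acceptable in any state here.
       if agent == "broker":
           return "$5" in held or "$8" in held
       return True

   initial = {"source": {"doc"}, "broker": {"$5"}, "customer": {"$8"}}
   swaps = [(("broker", "$5"), ("source", "doc")),    # broker buys doc
            (("customer", "$8"), ("broker", "doc"))]  # broker resells it
   safe = riskless(swaps, initial, acceptable)        # False: broker at risk
   ```

   After the first swap the broker holds the document with no
   guarantee the customer will still buy, so no prefix-safe
   ordering exists -- exactly the situation where a multi-party
   exchange cannot be safely decomposed.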

   A tech report in preparation [kgm96b] gives a distributed
   algorithm for finding riskless action sequences. The
   algorithm is proven sound and complete, so that it will
   never generate an unsafe sequence, and if a sequence
   exists, the algorithm will find it. Extensions for cases
   where one agent does directly trust another and the
   presence of hard, real-time deadlines are also shown
   sound and complete. Future work will develop these
   notions in a decision theoretic framework, permitting
   agents to take risky actions if they have a positive
   expected utility. Soft deadlines (decreasing utility over
   time) will also be modeled.

   [kgm96] Ketchpel, Steven P. and Hector Garcia-Molina.
   "Making Trust Explicit in Distributed Commerce
   Transactions". In International Conference on Distributed
   Computing Systems (DCS '96). Available at:
   http://db.stanford.edu/~ketchpel/papers/DCS96/
   distributed-transactions.ps

   [kgm96b] Ketchpel, Steven P. and Hector Garcia-Molina. "A
   Sound and Complete Distributed Algorithm for Distributed
   Commerce Transactions." Stanford Digital Library Working
   Paper SIDL-1996-0040. Preliminary draft available at:
   http://www-diglib.stanford.edu/cgi-bin/WP/get/
   SIDL-WP-1996-0040

   For more information, contact
   ketchpel@hotspur.stanford.edu

   _________________________________________________________

   Scheduled for October 2, 1996
   _________________________________________________________

   Policy-Controlled Cryptographic Key Release

   Dennis K. Branstad and David A. McGrew, Trusted
   Information Systems, Inc.


   Abstract: In early 1995, Trusted Information Systems, in
   conjunction with four individuals from academia and
   industry, proposed to design and develop a language and
   system for use by individuals or their organizations to
   specify policies for controlling the release (temporary
   use or permanent transfer) of their cryptographic keys.
   Work has recently begun on a DARPA Policy-Controlled
   Cryptographic Key Release research contract for
   specifying and enforcing certain aspects of information
   protection and use in a range of commercial and
   unclassified military applications.

   The Key-Release Policy (KRP) project is
   developing a dynamic, policy-based automated system for
   cryptographic key release (distribution or use). Real and
   hypothetical requirements for specifying the conditions
   (i.e., events) under which a cryptographic key shall be
   released are currently being collected. A language is
   being defined to specify these conditions, with inputs
   from users, managers, and others having responsibility
   for the protection of information. The language will
   allow for the establishment, dis-establishment, and
   delegation of access permissions that can be assured by
   controlling an encryption or signature key.
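
   A release decision of this kind might look like the
   following. The policy structure, key names, and event names
   are hypothetical illustrations of ours, not the KRP language
   itself: a key is released only when the specified conditions
   hold over the observed events.

   ```python
   # Hypothetical illustration of policy-controlled key release:
   # each key's policy names the events that must all have occurred.

   policies = {
       "backup-key-1": {"all_of": {"court_order", "owner_notified"}},
   }

   def may_release(key_id, observed_events):
       """Release the key only if its policy's conditions are met."""
       policy = policies.get(key_id)
       if policy is None:
           return False                  # no policy on file: never release
       return policy["all_of"] <= set(observed_events)

   allowed = may_release("backup-key-1", {"court_order", "owner_notified"})
   ```

   A real language would also need negation, delegation, and
   temporal conditions, which is where analysis for
   completeness and consistency becomes non-trivial.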

   Systems are being designed for automatically and
   accountably enforcing the policies, and analyzing the
   policies for completeness, consistency, and correctness.
   Formal methods will be used to the greatest feasible
   extent. Automated verification will be used where
   possible with requests for human resolution made when
   ambiguous, inconsistent, or mutually exclusive policy
   statements are made for the release of a key. Emphasis is
   being placed on simple user-system interfaces, with
   translations from human understandable language to
   machine enforceable language.

   Burt Kaliski, Warwick Ford, Russ Housley, and Dorothy
   Denning are participating in the project. The works of
   Ford and Wiener on A Key Distribution Method for
   Object-Based Protection; Rivest and Lampson on SDSI;
   Blaze, Feigenbaum, and Lacy on Decentralized Trust
   Management; and Boneh and Lipton on A Revocable Backup
   System are being reviewed as relevant to the project.
   Other references, collaborators, and interested parties
   are being sought.

   An experimental system is now being constructed in Secure
   Tcl, and PVS is being evaluated for analyzing key-release
   policies. Interfaces are being designed for use by
   individuals, managers, system administrators, and others
   who have responsibility or authority for information
   protection.

   The presentation will outline the objectives of the
   project, current directions and status, and the
   anticipated schedule for the project.

   For more information, see
   http://www.tis.com/docs/research/crypto/ckrpolicy.html

   _________________________________________________________

   An X.509v3 Based Public-key Infrastructure for the
   Federal Government

   William Burr, National Institute of Standards and
   Technology


   Abstract: It may well be that the Public Key
   Infrastructure (PKI) that supports commerce in the United
   States will come into being as a diverse collection of
   Certificate Authorities (CAs), with little organization
   beyond ad hoc cross-certification between some CAs. The
   executive branch of the Federal Government is an
   organization with many different agencies with very
   different missions, needs and concerns. Nevertheless, the
   government attempts to manage itself as a whole in a
   reasonably organized fashion, and provide some measure of
   centralized control, and we expect that some level of
   central control will be demanded of a Federal PKI. A
   Federal PKI Steering Committee has been organized to
   coordinate efforts to use public key digital signature
   technology. It has set up a Technical Working Group (TWG)
   to consider the technical issues associated with a
   Federal PKI.

   An assumption in this effort has been the use of standard
   X.509 certificates. The first attempts to design a large
   PKI that used earlier versions of X.509 certificates
   (i.e., Privacy Enhanced Mail, a design study done for the
   Federal Government by Mitre, and the initial version of
   the NSA Multilevel Information Systems Security
   Initiative) featured a strongly hierarchical structure,
   where the hierarchy was to be aligned with security
   policies as a vehicle for managing trust. This has proved
   confining and no truly large strictly hierarchical X.509
   based PKI has yet been implemented. The new feature of
   the X.509 v3 certificate is a number of optional
   extensions, intended to allow explicit management of
   trust and policies through the extensions contained in
   certificates.

   Does the X.509 v3 certificate give large organizations
   the tools they need? We will describe an architecture for
   a Federal PKI that the TWG has proposed, which attempts
   to use the X.509 extensions to provide an organized
   scheme for the management of trust in a Federal PKI, yet
   allows a good deal of autonomy to individual agencies and
   their CAs. This architecture does preserve some
   hierarchical elements, but also allows broad cross
   certification of Federal CAs. The architecture supports
   clients that base their trust in the public key of a
   single "root" CA, as well as those that base their trust
   in the local CA that issues their certificate.

   There are a number of assumptions in this effort that may
   not hold up, which we will consider. We assume that the
   standard is sufficiently well defined that a market for a
   number of broadly interoperable commercial products will
   develop and hope that our efforts can help to make this
   happen. We expect that the X.509 certificate will be the
   predominant vehicle for digital signatures in general
   electronic commerce.

   Perhaps more problematic, our approach assumes a
   pervasive directory service, and general acceptance of
   X.500 distinguished names. The extensions to X.509 may
   make it possible to consider more approaches centered on
   the World Wide Web as the foundation of the PKI. If this
   becomes the predominant commercial model, can we adjust
   our architecture?

   More information: http://csrc.ncsl.nist.gov/pki/

   _________________________________________________________

   The ICE-TEL Public-Key Infrastructure and Trust Model

   David W. Chadwick [1], University of Salford


   Abstract: ICE-TEL [2] is a two year project funded by the
   European Commission, to establish a public key
   certification infrastructure in Europe. The project is
   primarily driven by the needs of academic and research
   users, and several applications, including MIME, WWW and
   X.500, will use the infrastructure once it is
   established. Most EC countries are represented in the
   consortium, with the project partners being drawn from
   universities and research organisations in 13 countries.
   The project started in December 1995, and to date (Aug
   96) has produced documents that describe the ICE-TEL
   Trust Model [3], the ICE-TEL basic security policy [4],
   functional specifications for the use of X.509 V3
   certificate and V2 CRL extensions, proposed some new
   certificate management protocols, and various other
   security related documents e.g. review of Internet
   Firewalls [5] and European National Security Policies.
   The work is aligned with that of the Internet PKIX group,
   with one of the consortium members (S Farrell) being an
   editor of one of the PKIX IDs.

   This paper describes the ICE-TEL Trust Model, which is a
   merging of the PGP [6] web of trust and the X.509[7]/PEM
   [8] hierarchy of trust models. Each user has a Personal
   Security Environment (PSE) in which he stores the public
   keys that he trusts. This will always contain his own
   public key, and if he is part of a certification
   hierarchy, the public key of the CA at the top of his
   hierarchy and the public key of the CA that certified him
   (these two CAs may be the same or different CAs,
   depending upon the depth of the hierarchy). In addition,
   the user may add to his PSE the public keys of remote
   users and remote CAs that he trusts. It is a local issue
   how the PSE is protected, but self signed certificates
   are one way of securing the public keys and related
   information. It is a local issue how the public keys are
   obtained, but out of band means are recommended. All CAs
   and users within a given CA hierarchy are governed by the
   same security policy, and hence form a security domain.
   If the user operates to different levels of security i.e.
   is a member of different security domains, it is a local
   issue whether he has one PSE for each domain, or a
   combined PSE that stores the security domain/policy with
   each key (V3 certificates support the latter). Similarly,
   if a CA operates to different levels of security, it is a
   local issue whether the CA produces separate certificates
   in accordance with each policy, or one certificate
   validated to the highest security level, but also
   containing the policy OIDs of the lower security levels.
   (Issue for discussion at the workshop: is this equally
   secure? If not, or if it introduces other problems, then
   we can mandate that they are kept separate.)

   The term "trusted point" is used to refer to the CA at
   the top of a CA hierarchy and also to an individual user
   that is not part of a certification domain. CAs may cross
   certify other trusted points, provided that the security
   policy of a remote domain fulfills its criteria for
   trust, as detailed in its cross certification policy.
   Cross certification may be one-way or mutual (cf.
   authentication).

   Each trusted point must keep a local cache of (or pointer
   to) the list of cross certificates that it has issued.
   Each user must keep a local cache of (or pointer to) the
   certification path from its trusted point to its own
   public key certificate. (If a user is a member of
   multiple security domains then he will keep one path for
   each domain.) This aids the creation of complete
   certification paths from one user to another both within
   and between security domains.
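
   The two caches the model requires can be sketched directly.
   The CA and user names below are invented for illustration;
   the point is that a complete path is assembled from the
   verifier's trusted point, its cross-certificates, and the
   subject's cached path.

   ```python
   # Sketch of path assembly from the model's two caches.

   cross_certs = {            # trusted point -> trusted points it certifies
       "CA-TopA": ["CA-TopB"],
       "CA-TopB": [],         # cross certification here is one-way
   }
   user_paths = {             # user -> path from its trusted point to itself
       "alice": ["CA-TopA", "CA-A1", "alice"],
       "bob":   ["CA-TopB", "CA-B1", "bob"],
   }

   def certification_path(verifier, subject):
       """Path from the verifier's trusted point to the subject's
       certificate, via at most one cross-certificate hop."""
       start = user_paths[verifier][0]
       target_path = user_paths[subject]
       if start == target_path[0]:              # same security domain
           return target_path
       if target_path[0] in cross_certs.get(start, []):
           return [start] + target_path         # cross-certified domain
       return None                              # no trust path exists

   path = certification_path("alice", "bob")
   # -> ['CA-TopA', 'CA-TopB', 'CA-B1', 'bob']
   ```

   Because the cross certification above is one-way, a path
   exists from alice to bob but not from bob to alice, which is
   exactly the one-way versus mutual distinction in the model.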

   References

   [1] Details about the author can be found at
   http://www.salford.ac.uk/its024/chadwick.htm

   [2] Details about the ICE-TEL project can be found at
   http://www.darmstadt.gmd.de/ice-tel/

   [3] The draft ICE-TEL trust model can be found at
   http://fw4.iti.salford.ac.uk/ice-tel/trust/trust.doc

   [4] The ICE-TEL basic security policy can be found at
   http://www.darmstadt.gmd.de/ice-tel/euroca/policy.html

   [5] The Internet Firewall's report can be found at
   http://fw4.iti.salford.ac.uk/ice-tel/firewall/

   [6] Stallings, W. "Protect Your Privacy: the PGP User's
   Guide". Englewood Cliffs, NJ: Prentice-Hall, 1995. ISBN
   0-13-185596-4

   [7] "Information Technology - Open Systems
   Interconnection - The Directory - Authentication
   Framework" ISO-IEC STANDARD 9594: 1993-8 | ITU-T X.509,
   1993

   [8] Kent, S. "Privacy Enhancement for Internet Electronic
   Mail: Part II: Certificate Based Key Management", RFC
   1422, February 1993

   _________________________________________________________

   A Distributed Trust Model

   Alfarez Abdul-Rahman and Stephen Hailes, University
   College, London


   Abstract: The internet is gradually becoming a highly
   unpredictable system, with properties that may be highly
   dynamic, complex and intractable. This problem will be
   accentuated as ad hoc service providers, software agents
   and mobile hosts become commonplace. In such a network,
   unknown entities,
   sometimes seeking a particular service from a server,
   will be frequently encountered. Therefore an effective
   method for individually ascertaining their
   trustworthiness in such a complex environment is
   essential. Here, we propose a trust model for such a
   system.

   The environment which forms the basis of our model is one
   which consists of communities of trust. Each community
   consists of 'strongly' connected components where the
   mean trust chain length between any two entities is low
   compared to the mean chain length between two components
   which lie in different communities. Each individual entity may
   have its own trust policies and makes its own decision on
   which entity, algorithm or protocol it trusts.

   The proposed model is independent from any specific
   cryptographic algorithm. This allows the model to be
   separated from the underlying implementation specific
   details, and allows entities to choose the algorithm or
   protocol it trusts most. It is also distributed in
   nature, i.e. no central certifying authority is imposed
   upon any entity. Understandably, an 'anarchical' trust
   model may rank poorly in terms of trust management, but
   this is a problem to which the proposed model seeks to
   provide a solution.

   Firstly, in order to clarify the notion of trust in our
   model, we provide a trust taxonomy which parameterises
   each trust relationship. Two basic components were
   incorporated, trust categories and trust levels. The
   former specifies what aspect of trust the trust
   relationship pertains to, eg. "trust with respect to
   generating good keys". The latter specifies how much
   trust an entity places in the target trusted entity, with
   respect to a trust category. Categories and levels allow
   each trust relationship to be more precisely defined and
   provide a step towards effectively reasoning about an
   entity's trustworthiness.
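
   The taxonomy can be encoded minimally as follows. The field
   names, level scale, and example categories are our
   illustration, not the authors' syntax: the essential point
   is that trust is parameterised per category, so a high level
   in one category implies nothing about another.

   ```python
   from dataclasses import dataclass

   @dataclass(frozen=True)
   class TrustRelationship:
       truster: str
       trustee: str
       category: str   # what aspect of trust, e.g. key generation
       level: int      # how much trust, e.g. 0 (none) .. 4 (complete)

   rels = {
       TrustRelationship("alice", "bob", "generating good keys", 4),
       TrustRelationship("alice", "bob", "recommending others", 1),
   }

   def trust_level(rels, truster, trustee, category):
       """Trust is category-specific; unknown pairs start untrusted."""
       for r in rels:
           if (r.truster, r.trustee, r.category) == (truster, trustee,
                                                     category):
               return r.level
       return 0

   kl = trust_level(rels, "alice", "bob", "generating good keys")  # 4
   rl = trust_level(rels, "alice", "bob", "recommending others")   # 1
   ```

   Here alice trusts bob highly for key generation but barely
   at all as a recommender, which a single scalar "trust value"
   could not express.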

   Next we propose a recommendation protocol for the
   exchange of trust related information. The
   recommendations exchanged between entities will form the
   information upon which trust towards the recommended
   entity will be evaluated if it is a previously unknown
   entity. This protocol will not be dependent on any
   predefined trust hierarchy or path, but more towards one
   entity asking another for recommendations about the
   entity whose trustworthiness is in question.

   A trust language is created to allow trust related
   information like trust categories, policies and levels to
   be effectively communicated. Because a trust notion may
   be interpreted differently depending on the entity
   translating a piece of trust information, a
   hierarchical language structure is defined. This is
   essential to allow entities within different domains to
   exchange trust related information which may potentially
   contain notions which might exist in one domain but not
   the other. As the hierarchy is traversed downwards,
   further specialisation of each concept can be made.

   Specific methods for evaluating trust are not covered,
   but possible approaches are discussed.

   Further work in this research will consist of simulating
   an environment implementing this trust model, and its
   behavior will be analysed under different circumstances.
   Further possible extensions to the trust model may
   include provisions and protocols for trust monitoring and
   trust revision.

   For more information, contact
   {F.AbdulRahman,S.Hailes}@cs.ucl.ac.uk

   _________________________________________________________

   On Multiple Statements from Trusted Sources

   Raphael Yahalom, Hebrew University and MIT


   Abstract: The design of a distributed cryptographic
   protocol reflects assumptions about the characteristics
   of the environment in which the protocol will execute, as
   well as the participating entities' perspectives
   regarding these characteristics. One important such type
   of perspective is that of trust. It reflects how an
   entity perceives the characteristics of another entity
   and, in particular, the actions it expects that the other
   will perform under different circumstances.

   Trust can be classified into different primitive trust
   classes. Each trust aspect is associated with different
   characteristics of the trusted entity. Consequently,
   trust of one entity in another in one aspect does not
   necessarily imply trust of the first entity in the second
   in any other aspect.

   In protocols in realistic wide-area-network settings, an
   entity may receive multiple statements from multiple
   sources. Each source may be associated with different
   trust characteristics. The recipient needs to draw
   consistent conclusions given these multiple statements. A
   reasoning framework that guarantees valid deductions in
   the face of multiple statements is presented. We
   demonstrate aspects of that framework in the context of
   example protocols.

   For more information, contact
   yahalom@pluto.mscc.huji.ac.il.

   _________________________________________________________

   Off-line Delegation in a Distributed File Repository

   Arne Helme and Tage Stabell-Kulo, University of Twente
   and University of Tromsø


   Abstract: We are developing a minimal syntax with
   semantics for delegation tokens in a distributed file
   repository, and show how this can be exploited in the
   implementation of one-time access rights. Delegations can
   occur off-line, and we investigate the practical
   implications of this.

   When small, personal computers are integrated into
   systems, users will store secrets in them, in particular,
   users will store encryption keys that they can use to
   access remote services. We investigate how to delegate
   authority using those secrets when the user does not have
   access to a computer network. For example, how can one
   issue a delegation token while speaking on the phone? It
   is highly impractical to dictate several hundred
   hexadecimal digits.

   The syntax should be compact, and the semantics should be
   unambiguously reconstructible from the compact form,
   without compromising security. These properties are
   essential to off-line delegation. In other words, how much
   can the syntax of delegation tokens be relaxed in order to
   minimize message size, and how can the contents be
   encoded, without compromising security?

   Furthermore, there is a tradeoff between the contents of
   messages (size) and the knowledge the receiver can deduce
   from the contents, that is, which messages the receiver
   can construct based on a compact message. The ability to
   see the contents of a message is not sufficient, since a
   compact message does not have meaning without knowledge
   about its semantics.

   The term delegation as we used it above covers any
   setting where there is a transaction between two parties,
   where one trusts the other, and a third party possessing
   the assets in question. For example, Alice meets Bob and
   issues a (compact) token to him. He builds a certificate
   that he presents to the file server (his bank) to obtain
   a file (money). Bob trusts Alice to have access to the
   file (money in the bank), and the file server trusts
   Alice's signature, but not Bob. In other words: we have a
   non-trusted third party.

   We are using a distributed file repository (with trusted
   servers) as our research vehicle. In our setting, we
   issue and forward delegation tokens (without using a
   computer network) to other users, enabling once-only
   access to the repository for a specific file. The token,
   together with information supplied by the user, forms a
   delegation certificate. This certificate will
   authorize the intended recipient to a one-time access of
   the file in question (even though the owner may hold a
   lock on it). Since the certificate will be created in a
   compact form, it is necessary that the recipient of the
   delegation token have some a~priori knowledge.

   The secret context can, for example, be a password and/or
   a timestamp. This type of information is much easier to
   convey between humans than nonces and digital
   signatures. The token resembles a capability since it can
   not be verified or used without context.
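
   One way such a dictatable token might be derived is sketched
   below. This is our construction for illustration, not the
   paper's token syntax: the short token is bound to the shared
   secret context (say, a password and date agreed over the
   phone), so it cannot be verified or abused without it.

   ```python
   import hashlib, hmac

   def make_token(secret: bytes, file_id: str, recipient: str,
                  nbytes: int = 4):
       """Derive a short token from delegation details and the shared
       secret context; 8 hex digits are dictatable over the phone."""
       msg = f"{file_id}:{recipient}".encode()
       return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:2 * nbytes]

   def verify(secret, file_id, recipient, token):
       return hmac.compare_digest(make_token(secret, file_id, recipient),
                                  token)

   secret = b"password+1996-09-30"      # the out-of-band secret context
   token = make_token(secret, "report.ps", "bob")
   # token is 8 hex digits rather than several hundred
   ```

   So short a token trades cryptographic strength for
   dictatability, which is why it must be coupled with the
   repository's once-only semantics rather than stand alone.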

   We believe that blending once-only semantics with compact
   certificates will give us an environment well suited for
   the coming generation of infrastructure, built around
   portable, personal machines.

   A prototype of the file repository is available and
   provides distributed file storage. We are experimenting
   with different ways to implement once-only semantics in
   such a distributed system, and how to make the compact
   tokens.

   For more information, contact {arne,tage}@acm.org

   _________________________________________________________

   Operational Tradeoffs of Aggregating Attributes in
   Digital Certificates

   Ian Simpson, Carnegie Mellon University


   Abstract: There are many circumstances in which it is
   necessary to vouch for a collection of attributes about a
   subscriber to a certificate service. One well-used
   example is the driver's license, which conveys a
   purpose-built "bundle" of bearer attributes. But there
   are circumstances in which using such a pre-designated
   "bundle" may have disadvantages. In some cases,
   representing attributes independently may be more
   appropriate.

   One may address the need for grouping attributes with a
   coarse grained approach, in which several attributes are
   aggregated into a single certificate, or a fine grained
   approach, in which only a single attribute is contained
   in each certificate. Assuming aggregation is chosen,
   there are two points at which it can be conducted: at the
   time of the transaction (and in response to the needs of
   the recipient), or at some earlier time (given that
   commonly required groupings can be anticipated).

   The specific options that are chosen can significantly
   affect the operation of the CA infrastructure. There are
   tradeoffs involved in choosing one approach or the other.
   Under what conditions does it make sense to aggregate
   attributes into a single certificate, and under what
   conditions does it make sense to keep them separate? This
   talk will discuss the effects of these choices on a
   number of operational issues:

   - Efficiency

      - What is the effect on computational and networking
      requirements?

   - Security and reliability

      - What is the rate at which the invalidation of
      attributes will result in invalid certificates?

      - What is the malefactor's "payoff" in the case of
      certificate compromise?

   - Privacy and information disclosure

      - Under what circumstances might additional
      information be inadvertently "leaked" as a result of
      using the certificates?

   - Administration

      - How complex is the task of maintaining subscribers'
      certificates?

      - How often must certificates be re-issued under each
      scheme?

   - Liability and incentives for participation

      - In the case of aggregation, what's the liability for
      the aggregator?

      - Why would anyone want to act as aggregator, anyway?

   More information: is2a+@andrew.cmu.edu

   _________________________________________________________

   Trust Management for Mobile Agents

   William M. Farmer, Joshua D. Guttman, and Vipin Swarup,
   MITRE


   Abstract: Currently, distributed systems employ models in
   which processes are statically attached to hosts.
   Threats, vulnerabilities, and countermeasures for these
   systems have been studied extensively and sophisticated
   security architectures have been designed. Mobile agent
   technology extends this model by including mobile
   processes, i.e., processes which can autonomously migrate
   to new hosts. Although numerous benefits are expected,
   this extension results in new security threats from
   malicious agents and hosts [1]. A primary added
   complication is this: As an agent traverses multiple
   machines that are trusted to different degrees, its state
   can change in ways that adversely impact its
   functionality.

   We are developing a mobile agent security architecture
   [2] that extends an existing distributed system security
   architecture with special mechanisms that provide
   security in the presence of migrating stateful agents.
   The basic principals of this architecture are authors of
   programs, the programs themselves, senders of agents, the
   agents themselves, and interpreters that execute agents.
   Crucial events in an agent's life are the creation of the
   underlying program, creation of the agent, migration of
   the agent to a new execution site, remote procedure
   calls, and termination of the agent. These events cause
   complex trust relationships between principals, e.g., the
   trust placed by authors and senders in agents, the trust
   placed by an agent in the interpreters that execute it,
   and the trust placed by an interpreter in the agents it
   is executing. When an agent requests an operation on a
   resource, the interpreter uses its access rules and these
   trust relationships to derive authorization for the
   request.

   We have used the theory of authentication of Lampson et
   al [3] to formalize the trust relationships in a generic
   mobile agent system and are designing our security
   architecture based on this work. For instance, a
   fundamental invariant in our system is that an
   interpreter "speaks for" the agents it is executing. Thus
   an agent must trust the interpreters that execute it.
   Trust is managed by controlling the principals under
   which the agent executes as it migrates between
   interpreters. Agent creation and migration can use either
   handoff or delegation semantics and the protocols ensure
   that the above invariant is maintained.
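The "speaks for" invariant can be sketched as a small relation computation (a minimal sketch, not MITRE's implementation; the principal names and delegation chain below are assumed for illustration). Because "speaks for" is transitive in the Lampson et al. theory, statements made by an interpreter can be attributed up the chain to the agent and its sender:

```python
from itertools import product

def speaks_for_closure(pairs):
    """Reflexive-transitive closure of (a, b) meaning 'a speaks for b'."""
    closure = set(pairs) | {(p, p) for pair in pairs for p in pair}
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

# The invariant: an interpreter speaks for the agent it executes; the
# agent in turn speaks for its sender (delegation semantics assumed).
rel = speaks_for_closure({("interpreter", "agent"), ("agent", "sender")})
assert ("interpreter", "sender") in rel  # statements chain through
```

An interpreter's requests can thus be authorized as if made by the agent (and, transitively, its sender), which is why the agent must trust the interpreters that execute it.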

   A novel aspect of our architecture is a "state appraisal"
   mechanism that protects against attacks via agent state
   modification and that enables an agent's privilege to be
   dependent on its current state. Checking the integrity of
   an agent's state is difficult since the state can change
   during execution and hence cannot be signed. Our agents
   carry a state appraisal function that checks whether the
   agent's state meets expected state invariants; the
   function returns a set of permits based on the agent's
   current state.
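The state-appraisal idea might be sketched as follows (a hypothetical example; the invariants, state fields, and permit names are invented, since the abstract does not specify them). The author ships a function with the agent that inspects the mutable state and returns only the permits that state justifies:

```python
def appraise(state: dict) -> set:
    """Hypothetical state appraisal function carried by an agent."""
    permits = set()
    # Invariant: the visited-host itinerary must never repeat a host.
    hosts = state.get("itinerary", [])
    if len(hosts) == len(set(hosts)):
        permits.add("run")
    # A privilege that depends on current state: spending within budget.
    if state.get("spent", 0) <= state.get("budget", 0):
        permits.add("purchase")
    return permits

ok = appraise({"itinerary": ["h1", "h2"], "spent": 5, "budget": 10})
assert ok == {"run", "purchase"}
tampered = appraise({"itinerary": ["h1"], "spent": 99, "budget": 10})
assert "purchase" not in tampered  # state modification revokes privilege
```

Unlike a signature over the (changing) state, the signed appraisal function travels with the agent, so each interpreter can re-derive the agent's current privileges.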

   Our emphasis is on agents written by known software
   developers and our architecture seeks to protect mobile
   agent applications, their users, and the hosts that
   support them. As a concrete application of our
   techniques, we are securing an intrusion protection
   system that we are implementing using mobile agents
   ("cybercops").

   [1] "Security for Mobile Agents: Issues and
   Requirements", William M. Farmer, Joshua D. Guttman, and
   Vipin Swarup; To appear in the Proceedings of the
   National Information Systems Security Conference (NISSC),
   October 1996.

   [2] "Security for Mobile Agents: Authentication and State
   Appraisal", William M. Farmer, Joshua D. Guttman, and
   Vipin Swarup; To appear in the Proceedings of the
   European Symposium on Research in Computer Security
   (ESORICS), September 1996.

   [3] "Authentication in Distributed Systems: Theory and
   Practice", Butler Lampson, Martin Abadi, Michael Burrows,
   and Edward Wobber; ACM Transactions on Computer Systems,
   10(4), pp 265-310, Nov 1992.

   For more information, contact swarup@mitre.org.

   _________________________________________________________

   Trust Management in ERLink

   Samuel I. Schaen, Mitre


   Abstract: Mitre supports the National Communications
   System (NCS) on a project called Emergency Response Link
   (ERLink). In emergency response and disaster relief
   situations, the NCS is primarily responsible for ensuring
   reliable communications. NCS is fielding a pilot to
   demonstrate the usefulness of web technology and data
   communications during emergencies and preparation for
   emergencies.

   The project intends to use the public key and secure
   sockets layer technologies to help provide
   confidentiality, authentication, and access control. The
   potential is for use by all levels of government from the
   Federal government down to state and local disaster
   relief organizations, including local police, fire and
   rescue organizations. Thus there is potential to
   eventually have as many as 3 million users and a need for
   a modicum of security on a shoestring budget.

   This talk will describe the trust management challenges
   presented by the ERLink requirements.

   For more information, contact: schaen@mitre.org

   _________________________________________________________

   Linking trust with network reliability

   Yvo Desmedt and Mike Burmester, University of Wisconsin
   at Milwaukee and Royal Holloway College


   Abstract: The goal of this paper is to link at an
   abstract level the rather new problem of trust with the
   older problem of network reliability. This link will show
   that some of the alternatives to BAN logic are related
   (not identical) to well known approaches in network
   reliability.

   In our analogy, we view the entities involved as nodes
   (vertices) in a (possibly directed) graph and a secure
   link between two entities is expressed as a (possibly
   labeled) edge (arc) between them. A secure link may
   correspond to a physically secure link or to a shared key
   obtained without the help of a third party. We call the
   graph the "security graph".

   As in network reliability we will discuss two models for
   trust, a deterministic one and a probabilistic one. To
   start discussing our viewpoint about trust we start with
   an example. Assume that the entities are {A,B,C,D} and
   that all pairs of entities, but (A,D), have received each
   other's public key without the help of a third party.
   This means that D has full trust in B RELATIVE to the
   authenticity of B's public key and a similar relative
   trust exists between D and C. However, when D wants to
   obtain a certificate for A's public key, the relative
   trust in B and C does not imply that B and C can be
   trusted relative to this certificate. In our
   deterministic model, we define trust relative to two
   entities as being a Boolean expression induced from the
   security graph, i.e. B OR C. This means that if D wants
   to obtain A's public key, D needs to trust that B and C
   will never both be dishonest.

   To define, in general, relative trust for our
   deterministic model we say that the trust in a path is 1
   if the path has length one, else it is the logical AND of
   all the intermediate nodes. Trust between n_i and n_j is
   the logical OR of all the trust in simple paths between
   n_i and n_j.
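This definition translates directly into code (a sketch assuming an adjacency-list graph representation; the path enumeration is brute force and only suitable for small security graphs):

```python
def simple_paths(graph, src, dst, seen=None):
    """Enumerate all simple paths from src to dst."""
    seen = (seen or []) + [src]
    if src == dst:
        yield seen
        return
    for nxt in graph.get(src, ()):
        if nxt not in seen:
            yield from simple_paths(graph, nxt, dst, seen)

def trust(graph, src, dst, honest):
    """Deterministic relative trust: OR over simple paths of the AND of
    their intermediate nodes.  A length-one path has no intermediaries,
    so a direct edge always yields trust 1."""
    return any(all(n in honest for n in path[1:-1])
               for path in simple_paths(graph, src, dst))

# The example from the text: every pair but (A, D) shares keys directly.
g = {"A": ["B", "C"], "B": ["A", "C", "D"],
     "C": ["A", "B", "D"], "D": ["B", "C"]}
assert trust(g, "D", "A", honest={"C"})      # path D-C-A suffices (B OR C)
assert not trust(g, "D", "A", honest=set())  # both B and C dishonest
```

For D to obtain A's public key in the example, the Boolean expression is B OR C, exactly as evaluated above.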

   We now discuss some of the results this model implies.
   First, the BAN principle of transitivity of trust implies
   that a single path between sender and receiver is used to
   obtain the public key (of the sender or of the receiver).
   Our trust model says that if one of the nodes in this
   single path becomes corrupt, the security vanishes, as
   has been observed before by several researchers.
   Secondly, if multiple paths exist between entities it is
   better to use these simultaneously. For authenticity this
   means sending the same information in parallel. These
   facts can easily be formalized using our probabilistic
   model.

   In our probabilistic model we give to each vertex in the
   security graph a probability of being trustworthy. If
   these probabilities are all independent, the Boolean
   expression transforms easily into a probability. Note
   that in this model this probability will always be 1 if
   (n_i,n_j) is an edge. Note that in Beth-Borcherding-Klein
   (ESORICS '94) it has been observed that trust between two
   entities should decrease when these are farther apart
   (see also PGP for a more primitive variant of this
   approach). This goal is naturally achieved in our model.
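Under the independence assumption, the Boolean expression can be evaluated exactly by summing over all honest/dishonest assignments of the intermediate nodes (an exhaustive sketch, exponential in the number of intermediaries, so illustrative only; the graph and probabilities are invented):

```python
from itertools import product

def trust_probability(graph, src, dst, p_honest):
    """Probability that some simple path from src to dst has all of its
    intermediate nodes honest, assuming independent trustworthiness."""
    def simple_paths(a, seen):
        seen = seen + [a]
        if a == dst:
            yield seen
            return
        for b in graph.get(a, ()):
            if b not in seen:
                yield from simple_paths(b, seen)

    paths = [set(p[1:-1]) for p in simple_paths(src, [])]
    nodes = sorted(set().union(*paths)) if paths else []
    total = 0.0
    for bits in product([True, False], repeat=len(nodes)):
        honest = {n for n, b in zip(nodes, bits) if b}
        prob = 1.0
        for n, b in zip(nodes, bits):
            prob *= p_honest[n] if b else 1 - p_honest[n]
        if any(path <= honest for path in paths):  # some path survives
            total += prob
    return total

g = {"D": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
# D trusts A via B OR C: 1 - (1 - 0.9)(1 - 0.8) = 0.98
assert abs(trust_probability(g, "D", "A", {"B": 0.9, "C": 0.8}) - 0.98) < 1e-9
```

Note that a direct edge yields an empty intermediary set, so the probability is 1, as stated above; and longer chains multiply in more factors below 1, which is how the distance-decay of trust falls out of the model.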

   If the model is not probabilistic but a threshold one
   (i.e., there are at most u-1 dishonest entities), then it
   only makes sense to use disjoint paths simultaneously,
   and u-connected graphs are the solution.

   An alternative probabilistic model in which we allocate
   the probability to an edge and not to a vertex is also
   well known in network reliability and its
   advantages/disadvantages in the context of trust will be
   briefly discussed.

   For PRIVACY, simultaneous use of paths implies using a
   one-time pad for each OR in the Boolean expression (more
   generally, using threshold schemes).
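One way to read this (a sketch under that reading; the n-out-of-n XOR construction below is the standard one-time-pad-style sharing, not code from the paper) is to split the message into XOR shares, one per disjoint path, so that the intermediaries of every path must collude to learn anything:

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(message: bytes, n: int):
    """n-out-of-n XOR sharing: one share per disjoint path."""
    shares = [os.urandom(len(message)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, message))  # last share completes the pad
    return shares

def combine(shares):
    """XOR of all shares recovers the message; any subset reveals nothing."""
    return reduce(xor, shares)

shares = split(b"public key of A", 3)
assert combine(shares) == b"public key of A"
```

A k-out-of-n threshold scheme (e.g. Shamir's) generalizes this, tolerating path failures as well as eavesdropping.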

   For more information, contact desmedt@cs.uwm.edu.

   _________________________________________________________

   Trust Management Under Law-Governed Interaction

   Naftaly H. Minsky and Victoria Ungureanu, Rutgers
   University


   Abstract: Modern distributed systems tend to be
   conglomerates of heterogeneous subsystems which have been
   designed separately, by different people, with little, if
   any knowledge of each other --- and which may be subject
   to different security policies. A single software agent
   operating within such a system may find itself
   interacting with, or even belonging to, several
   subsystems, and thus be subject to several disparate
   security policies. For example, an agent may be
   classified at a certain military-like security level,
   which affects the kind of documents it can get; it may
   carry certain "capabilities" meant to provide it with
   certain access rights; and, while accessing certain
   financial information, it may be subject to the "Chinese
   Wall" security policy, under which one's access right
   depends on access history. If every such policy is
   expressed by means of a different formalism and enforced
   with a different mechanism, the situation can easily get
   out of hand.
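To make the "Chinese Wall" example concrete (a minimal sketch; the conflict-of-interest classes and company names here are invented, and the real policy is richer), access is denied exactly when the agent's history already contains a competitor in the same conflict class:

```python
# Map each dataset to its conflict-of-interest class (invented example).
CONFLICT_CLASSES = {"BankA": "banks", "BankB": "banks", "OilCo": "oil"}

def chinese_wall_allows(history, company):
    """Allow access unless a different company in the same conflict
    class has already been accessed -- the right depends on history."""
    cls = CONFLICT_CLASSES[company]
    return all(CONFLICT_CLASSES[seen] != cls or seen == company
               for seen in history)

history = []
for company in ["BankA", "OilCo", "BankB"]:
    if chinese_wall_allows(history, company):
        history.append(company)

assert history == ["BankA", "OilCo"]  # BankB blocked: conflicts with BankA
```

Note the contrast with capabilities or classification levels: here no static attribute of the agent decides the outcome, only the record of its past accesses.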

   We propose to deal with this problem by means of a
   recently developed security mechanism for distributed
   systems called Law-Governed Interaction (LGI). LGI can
   associate a singular mode of interaction with any given
   group of distributed agents, subjecting all such
   interactions to an explicitly specified "law," that
   defines the security policy regarding this mode of
   interaction. An agent operating under a given law L can
   be trusted implicitly to satisfy the policy defined by
   this law, without having to validate each operation with
   some trusted server. This makes LGI scalable to a
   significant extent, and it contributes to the fault
   tolerance of this mechanism.

   LGI can thus support a wide range of security models and
   policies, including: conventional discretionary models
   that use capabilities and access-control lists, mandatory
   lattice-based access control models, and the more
   sophisticated models and policies required for commercial
   applications. Moreover, under LGI, a single agent may be
   involved in several different modes of interactions, and
   thus be subject to several disparate security policies.
   All such policies would be defined by laws expressed in a
   single formalism, and be enforced in a unified manner.

   Another advantage of the proposed security mechanism is
   that it completely hides from the users all aspects of
   key management. The trust between the interacting agents
   under LGI is the result of constraints being imposed on
   the exchange of messages between them.

   For more information, see
   http://athos.rutgers.edu/~minsky.

   _________________________________________________________

   Tools for Security Policy Definition and Implementation

   P. Humenn, BlackWatch Technology, Inc.


   Abstract: Assurance in complex distributed systems goes
   beyond the encryption and signature verification
   paradigm. Comprehensive tools are needed to define and
   manage a diverse set of security policies.

   The current solutions for security consist of single
   sign-on technology, file and directory permissions,
   user/group based access control lists (ACLs), and
   encryption/verification systems, such as Kerberos.
   Security policies, on the other hand, are more
   complicated.

   In an effort to implement and enforce complicated
   security policies currently, the system or network
   administrator attempts to configure the primitives
   comprehensively. Then, the administrator must maintain
   the configuration and its integrity. Experience has shown
   that policies that depend on manual administration of
   these primitives are prone to error, resulting either in
   the system grinding to a halt or in vulnerability to
   attack.

   Complicated distributed applications need to describe
   security policy beyond the notion of file permissions and
   encryption. Such a policy might be, "An XYZ Trader is
   only allowed to make transactions under $10K and only
   between 9 a.m. and 3 p.m." Another might be, "Only clear
   messages should flow through the local network. Messages
   from the outside gateway should be encrypted/decrypted at
   that gateway." Such policies raise the issues of security
   policy definition and enforcement to the level of the
   application.
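The trader policy quoted above, for instance, might be expressed directly in application code rather than approximated with file permissions (a sketch: the limits come from the example in the text, but the function shape and role name encoding are assumptions):

```python
from datetime import time

def xyz_trader_policy(role: str, amount: int, at: time) -> bool:
    """'An XYZ Trader is only allowed to make transactions under $10K
    and only between 9 a.m. and 3 p.m.'"""
    return (role == "XYZ Trader"
            and amount < 10_000
            and time(9, 0) <= at < time(15, 0))

assert xyz_trader_policy("XYZ Trader", 5_000, time(10, 30))
assert not xyz_trader_policy("XYZ Trader", 12_000, time(10, 30))  # over limit
assert not xyz_trader_policy("XYZ Trader", 5_000, time(16, 0))    # after hours
```

No combination of directory permissions or ACL entries captures the dollar threshold or the time window; the policy lives at the application level.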

   Tools are needed to give not only administrators a way to
   define and manage security policy, but also for
   application developers to define flexible security
   policies directly into their applications. This takes an
   active stance on security rather than the familiar
   patchwork reactive approach. We present methods and tool
   interfaces for describing policy and administering it
   within an object based client/server paradigm.

   For more information, contact polar@blackwatch.com.

   _________________________________________________________

   [End of DIMACS abstracts]
