Note: for index of full report see: http://jya.com/nrcindex.htm

---------

[Head note all pages: May 30, 1996, Prepublication Copy
Subject to Further Editorial Correction]


                              2

                        Cryptography:
              Roles, Market, and Infrastructure


   Cryptography is a technology that can play important roles
in addressing certain types of information vulnerability,
although it is not sufficient to deal with all threats to
information security. As a technology, cryptography is
embedded into products that are purchased by a large number of
users; thus, it is important to examine various aspects of the
market for cryptography. Chapter 2 describes cryptography as
a technology used in products, as a product within a larger
market context, and with reference to the infrastructure
needed to support its large-scale use.


                 2.1 CRYPTOGRAPHY IN CONTEXT

   Computer-system security, and its extension network
security, are intended to achieve many purposes. Among them
are safeguarding physical assets from damage or destruction
and ensuring that resources such as computer time, network
connections, and access to databases are available only to
individuals -- or to other systems or even software processes
-- authorized to have them.(1) Overall information security is
dependent on many factors, including various technical
safeguards, trustworthy and capable personnel, high degrees of
physical security, competent administrative oversight, and
good operational procedures. Of the available technical
safeguards, cryptography has been one of the least utilized to
date.(2)

   In general, the many security safeguards in a system or
network not only fulfill their principal task but also act
collectively to mutually protect one another. In particular,
the protection or operational functionality that can be
afforded by the various cryptographic safeguards treated in
this report will inevitably require that the hardware or
software in question be embedded in a secure environment. To
do otherwise is to risk that the cryptography might be
circumvented, subverted, or misused -- hence leading to a
weakening or collapse of its intended protection.

   As individual stand-alone computer systems have been
incorporated into ever larger networks (e.g., local-area
networks, wide-area networks, the Internet), the requirements
for cryptographic safeguards have also increased. For example,
users of the earliest computer systems were almost always
clustered in one place and could be personally recognized as
authorized individuals, and communications associated with a
computer system usually were contained within a single
building. Today, users of computer systems can be connected
with one another worldwide, through the public switched
telecommunications network, a local area network, satellites,
microwave towers, and radio transmitters. Operationally, an
individual or a software process in one place can request
service from a system or a software process in a far distant
place. Connectivity among systems is impromptu and occurs on
demand; the Internet has demonstrated how to achieve it. Thus,
it is now imperative for users and systems to identify
themselves to one another with a high degree of certainty and
for distant systems to know with certainty what privileges for
accessing databases or software processes a remote request
brings. Protection that could once be obtained by geographic
propinquity and personal recognition of users must now be
provided electronically and with extremely high levels of
certainty.

----------

   (1)  The terms "information security" or shortened versions
such as INFOSEC, COMPSEC, and NETSEC are also in use.

   (2)  Other safeguards, in particular software safeguards,
are addressed in various standard texts and reports. See, for
example, National Institute of Standards and Technology, *An
Introduction to Computer Security*, NIST Special Publication
800-12, Department of Commerce, October 1995; *Trusted
Computer System Evaluation Criteria*, Department of Defense,
August 15, 1983; Computer Science and Telecommunications Board
(CSTB), National Research Council, *Computers at Risk: Safe
Computing in the Information Age*, National Academy Press,
Washington, D.C., 1991.

____________________________________________________________


        2.2 WHAT IS CRYPTOGRAPHY AND WHAT CAN IT DO?

   The word "cryptography" is derived from Greek words that
mean secret writing. Historically, cryptography has been used
to hide information from access by unauthorized parties,
especially during communications when it would be most
vulnerable to interception. By preserving the secrecy, or
confidentiality, of information, cryptography has played a
very important role over the centuries in military and
national affairs.(3)

   In the traditional application of cryptography for
confidentiality, an originator (the first party) creates a
message intended for a recipient (the second party), protects
(encrypts) it by a cryptographic process, and transmits it as
ciphertext. The receiving party decrypts the received
ciphertext message to reveal its true content, the plaintext.
Anyone else (the third party) who wishes undetected and
unauthorized access to the message must penetrate (by
cryptanalysis) the protection afforded by the cryptographic
process.

   In the classical use of cryptography to protect
communications, it is necessary that both the originator and
recipient(s) have common knowledge of the cryptographic
process (the algorithm or cryptographic algorithm) and that
both share a secret common element -- typically, the key or
cryptographic key, which is a piece of information, not a
material object. In the encryption process, the algorithm
transforms the plaintext into the ciphertext, using a
particular key; the use of a different key results in a
different ciphertext. In the decryption process, the algorithm
transforms the ciphertext into the plaintext, using the key
that was used to encrypt(4) the original plaintext. Such a
scheme, in which both communicating parties must have a common
key, is now called *symmetric cryptography* or *secret-key
cryptography*; it is the kind that has been used for centuries
and written about widely.(5) It has the property, usually an
operational disadvantage, of requiring a safe method of
distributing keys to relevant parties (*key distribution* or
*key management*).
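The shared-key property described above can be sketched in a few lines of code. The XOR stream cipher below is a toy stand-in for a real symmetric algorithm and is not secure; it illustrates only that originator and recipient must hold the same secret key.

```python
# Toy sketch of symmetric (secret-key) cryptography: one shared key
# both encrypts and decrypts. NOT secure; for illustration only.
import random

def keystream(key: str, length: int) -> bytes:
    rng = random.Random(key)           # the key seeds the generator
    return bytes(rng.randrange(256) for _ in range(length))

def encrypt(plaintext: bytes, key: str) -> bytes:
    return bytes(p ^ k for p, k in
                 zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt   # XOR is its own inverse under the same key

msg = b"attack at dawn"
ct = encrypt(msg, "shared secret")
assert decrypt(ct, "shared secret") == msg   # same key recovers plaintext
assert decrypt(ct, "wrong key") != msg       # a different key fails
```

The last line is the key-distribution problem in miniature: anyone without the shared secret recovers only gibberish, so the secret must somehow be conveyed safely to every legitimate party.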

   It can be awkward to arrange for secret symmetric keys to
be available to all parties with whom one might wish to
communicate, especially when the list of parties is large.
However, a scheme called *asymmetric cryptography* (or,
equivalently, *public-key cryptography*), developed in the
mid-1970s, helps to mitigate many of these difficulties
through the use of different keys for encryption and
decryption.(6) Each participant actually has two keys. The
public key is published, is freely available to anyone, and is
used for encryption; the private key is held in secrecy by the
user and is used for decryption.(7) Because the two keys are
inverses, knowledge of the public key in principle enables
derivation of the private key. In a well-designed public-key
system, however, it is computationally infeasible to derive
the private key from knowledge of the public key in any
reasonable length of time.
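The two-key arrangement described above can be sketched numerically with textbook RSA. The tiny primes below are purely illustrative assumptions; real public-key systems use numbers hundreds of digits long, and this sketch offers no security.

```python
# Toy RSA sketch of asymmetric (public-key) cryptography.
p, q = 61, 53
n = p * q                      # modulus, part of both keys: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent (published)
d = pow(e, -1, phi)            # private exponent (kept secret): 2753

m = 65                         # a message encoded as a number < n
c = pow(m, e, n)               # anyone can encrypt with the public key
assert pow(c, d, n) == m       # only the private-key holder can decrypt
```

Deriving d from e requires knowing phi, which in turn requires factoring n; with tiny primes this is trivial, and the entire security of a real system rests on making that factoring computationally infeasible.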

   A significant operational difference between symmetric and
asymmetric cryptography is that with asymmetric cryptography
anyone who knows a given person's public key can send a secure
message to that person. With symmetric cryptography, only a
selected set of people (those who know the secret key) can
communicate. While it has not been mathematically proven that
this must be so, all known asymmetric cryptographic systems
are slower than their symmetric counterparts, and the more
public nature of asymmetric systems lends credence to the
belief that this will always be true. Generally, symmetric
cryptography is
used when a large amount of data needs to be encrypted or when
the encryption must be done within a given time period;
asymmetric cryptography is used for short messages, for
example, to protect key distribution for a symmetric
cryptographic system.
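The division of labor just described, with asymmetric cryptography protecting the key distribution for a symmetric system, can be sketched as follows. Both the toy RSA parameters and the XOR cipher are illustrative assumptions, not secure algorithms.

```python
# Sketch of the hybrid pattern: slow asymmetric cryptography wraps a
# short session key; a fast symmetric cipher handles the bulk data.
import random

n, e, d = 3233, 17, 2753            # toy RSA key pair

def xor_cipher(data: bytes, key: int) -> bytes:
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

# Sender: pick a random session key, wrap it with the recipient's
# public key, and encrypt the bulk message symmetrically.
session_key = random.randrange(2, n)
wrapped_key = pow(session_key, e, n)        # asymmetric, short input
bulk_ct = xor_cipher(b"a long message", session_key)

# Recipient: unwrap the session key with the private key, then
# decrypt the bulk message symmetrically.
recovered_key = pow(wrapped_key, d, n)
assert recovered_key == session_key
assert xor_cipher(bulk_ct, recovered_key) == b"a long message"
```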

   Regardless of the particular approach taken, the
applications of cryptography have gone beyond its historical
roots as secret writing; today, cryptography serves as a
powerful tool in support of system security. Cryptography can
provide many useful capabilities:

   +    *Confidentiality* -- the characteristic that
information is protected from being viewed in transit during
communications and/or when stored in an information system.
With cryptographically provided confidentiality, encrypted
information can fall into the hands of someone not authorized
to view it without its content being compromised. It is almost
entirely the confidentiality aspect of cryptography that has
posed public policy dilemmas.

   The other capabilities, described below, can be considered
collectively as nonconfidentiality or collateral uses of
cryptography:

   +    *Authentication* -- cryptographically based assurance
that an asserted identity is valid for a given person (or
computer system). With such assurance, it is difficult for an
unauthorized party to impersonate an authorized one.

   +    *Integrity check* -- cryptographically based assurance
that a message or computer file has not been tampered with or
altered.(8) With such assurance, it is difficult for an
unauthorized party to alter data.

   +    *Digital signature* -- cryptographically based
assurance that a message or file was sent or created by a
given person. A digital signature cryptographically binds the
identity of a person with the contents of the message or file,
thus providing nonrepudiation -- the inability to deny the
authenticity of the message or file. The capability for
nonrepudiation results from encrypting the digest (or the
message or file itself) with the private key of the signer.
Anyone can verify the signature of the message or file by
decrypting the signature using the public key of the sender.
Since only the sender should know his or her own private key,
assurance is provided that the signature is valid and the
sender cannot later repudiate the message. If a person
divulges his or her private key to any other party, that party
can impersonate the person in all electronic transactions.

   +    *Digital date/time stamp* -- cryptographically based
assurance that a message or file was sent or created at a
given date and time. Generally, such assurance is provided by
an authoritative organization that appends a date/time stamp
and digitally signs the message or file.

   These cryptographic capabilities can be used in
complementary ways. For example, authentication is basic to
controlling access to system or network resources: a person
may use a password to authenticate his own identity, and only
when the proper password has been entered will the system
allow the user to "log on" and obtain access to files, e-mail,
and so on.(9) But passwords have many
limitations as an access control measure (e.g., people tell
others their passwords or a password is learned via
eavesdropping), and cryptographic authentication techniques
can provide much better and more effective mechanisms for
limiting system or resource access to authorized parties.
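One such cryptographic technique, sketched below under the assumption of a shared long-term secret, is challenge-response authentication: the secret itself is never transmitted, so an eavesdropper who observes one exchange learns nothing reusable against a fresh challenge.

```python
# Sketch of challenge-response authentication with a keyed hash
# (HMAC), in contrast to sending a reusable password over the wire.
import hmac, hashlib, secrets

shared_secret = b"user's long-term secret"   # assumed pre-established

# Server side: issue a fresh, unpredictable challenge.
challenge = secrets.token_bytes(16)

# User side: respond with HMAC(secret, challenge), not the secret.
response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# Server side: recompute and compare in constant time.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)   # user authenticated
```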

   Access controls can be applied at many different points
within a system. For example, the use of a dial-in port on an
information system or network can require the use of
cryptographic access controls to ensure that only the proper
parties can use the system or network at all. Many systems and
networks accord privileges or access to resources depending on
the specific identity of a user; thus, a hospital information
system may grant physicians access that allows entering orders
for patient treatment, whereas laboratory technicians may not
have such access. Authentication mechanisms can also be used
to generate an audit trail identifying those who have accessed
particular data, thus facilitating a search for those known to
have compromised confidential data.

   In the event that access controls are successfully
bypassed, the use of encryption on data stored and
communicated in a system provides an extra layer of
protection. Specifically, if an intruder is denied easy access
to stored files and communications, he may well find it much
more difficult to understand the internal workings of the
system and thus be less capable of causing damage or reading
the contents of encrypted inactive data files that may hold
sensitive information. Of course, when an application opens a
data file for processing, that data is necessarily unencrypted
and is vulnerable to any intruder who might be present at that
time.

   Authentication and access control can also help to protect
the privacy of data stored on a system or network. For
example, a particular database application storing data files
in a specific format could allow its users to view those
files. If the access control mechanisms are set up in such a
way that only certain parties can access that particular
database application, access to the database files in question
can be limited, and thus the privacy of data stored in those
databases protected. On the other hand, an unauthorized user
may be able to obtain access to those files through a
different, uncontrolled application, or even through the
operating system itself. Thus, encryption of those files is
necessary to protect them against such "back-door" access.(10)

   The various cryptographic capabilities described above may
be used within a system in order to accomplish a set of tasks.
For example, a banking system may require confidentiality and
integrity assurances on its communications links,
authentication assurances for all major processing functions,
and integrity and authentication assurances for high-value
transactions. On the other hand, merchants may need only
digital signatures and date/time stamps when dealing with
external customers, or with cooperating banks when
establishing contracts. Furthermore, depending on the type of
capability to
be provided, the underlying cryptographic algorithms may or
may not be different.

   Finally, when considering what cryptography can do, it is
worth making two practical observations. First, the initial
deployment of any technology often brings out unanticipated
problems, simply because the products and artifacts embodying
that technology have not had the benefit of successive cycles
of failure and repair. Similarly, human procedures and
practices have not been tested against the demands of
real-life experience. Cryptography is unlikely to be any
different, and so it is probable that early large-scale
deployments of cryptography will exhibit exploitable
vulnerabilities.(11)

   The second point is that against a determined opponent that
is highly motivated to gain unauthorized access to data, the
use of cryptography may well simply lead that opponent to
exploit some other vulnerability in the system or network on
which the relevant data is communicated or stored, and such an
exploitation may well be successful. But the use of
cryptography can help to raise the cost of gaining improper
access to data and may prevent a resource-poor opponent from
being successful at all.

   More discussion of cryptography can be found in Appendix C.

----------

   (3)  The classic work on the history of cryptography is
David Kahn, *The Codebreakers*, Macmillan, New York, 1967.

   (4)  This report uses the term "encrypt" to describe the
act of using an encryption algorithm with a given key to
transform one block of data, usually plaintext, into another
block, usually ciphertext.

   (5)  Historical perspective is provided in David Kahn,
*Kahn on Codes*, Macmillan, New York, 1983; F.W. Winterbotham,
*The Ultra Secret*, Harper & Row, New York, 1974; and Ronald
Lewin, *Ultra Goes to War*, Hutchinson & Co., London, 1978. A
classic reference on the fundamentals of cryptography is
Dorothy Denning, *Cryptography and Data Security*,
Addison-Wesley, Reading, Mass., 1982.

   (6)  Gustavus J. Simmons (ed.), *Contemporary Cryptology:
The Science of Information Integrity*, IEEE Press, Piscataway,
New Jersey, 1992; Whitfield Diffie, "The First Ten Years of
Public-Key Cryptography," *Proceedings of the IEEE*, Vol. 76,
1988, pp. 560-577.

   (7)  The seminal paper on public-key cryptography is
Whitfield Diffie and Martin Hellman, "New Directions in
Cryptography," *IEEE Transactions on Information Theory*,
Volume IT-22, 1976, pp. 644-654.

   (8)  Digital signatures and integrity checks use a
condensed form of a message or file -- called a digest --
which is created by passing the message or file through a
one-way hash function. The digest is of fixed length and is
independent of the size of the message or file. The hash
function is designed to make it highly unlikely that different
messages (or files) will yield the same digest, and to make it
computationally very difficult to modify a message (or file)
but retain the same digest.

   (9)  An example more familiar to many is that the entry of
an appropriate personal identification number into an
automatic teller machine (ATM) gives the ATM user access to
account balances or cash.

   (10) The measure-countermeasure game can continue
indefinitely. In response to file encryption, an intruder can
insert into an operating system a Trojan horse program that
waits for an authorized user to access the encrypted database.
Since the user is authorized, the database will allow the
decryption of the relevant file and the intruder can simply
"piggy-back" on that decryption. Thus, those responsible for
system security must provide a way to check for Trojan horses,
and so the battle goes round.

   (11) For a discussion of this point, see Ross Anderson,
"Why Cryptosystems Fail," *Communications of the ACM*, Volume
37(11), November 1994, pp. 32-40.

____________________________________________________________


   2.3 HOW CRYPTOGRAPHY FITS INTO THE BIG SECURITY PICTURE

   In the context of confidentiality, the essence of
information security is a battle between information
protectors and information interceptors. Protectors -- who may
be motivated by "good" reasons (if they are legitimate
businesses) or "bad" reasons (if they are criminals) -- wish
to restrict access to information to a group that they select.
Interceptors -- who may also be motivated by "bad" reasons (if
they are unethical business competitors) or "good" reasons (if
they are law enforcement agents investigating serious crimes)
-- wish to obtain access to the information being protected
whether or not they have the permission of the information
protectors. It is this dilemma that is at the heart of the
public policy controversy and is addressed in greater detail
in Chapter 3.

   From the perspective of the information interceptor,
encryption is only one of the problems to be faced. In
general, the complexity of today's information systems poses
many technical barriers (Section 2.3.1). On the other hand,
the information interceptor may be able to exploit product
features or specialized techniques to gain access (Section
2.3.2).


             2.3.1 Technical Factors Inhibiting
                 Access to Information (12)

   Compared to the task of tapping an analog telephone line,
obtaining access to the content of a digital information
stream can be quite difficult. With analog "listening"
(traditional telephony or radio interception), the technical
challenge is obtaining access to the communications channel.
When communications are digitized, gaining access to the
channel is only the first step: one must then unravel the
digital format, a task that can be computationally very
complex. Furthermore, the complexity of the digital format
tends to increase over time, because more advanced
information technology generally implies increased
functionality and a need for more efficient use of available
communications capacity.

   Increased complexity is reflected in particular in the
interpretation of the digital stream that two systems might
use to communicate with each other or the format of a file
that a system might use to store data. Consider, for example,
one particular sequence of actions used to communicate
information. The original application in the sending system
might have started with a plaintext message, and then
compressed it (to make it smaller); encrypted it (to conceal
its meaning); and appended error-control bits to the
compressed, encrypted message (to prevent errors from creeping
in during transmission).(13) Thus, a party attempting to
intercept a communication between the sender and the receiver
could be faced with a data stream that would represent the
combined output of many different operations that transform
the data stream in some way. The interceptor would have to
know the error-control scheme and the decompression algorithms
as well as the key and the algorithm used to encrypt the
message.
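The layering just described can be sketched as a pipeline: compress, then encrypt, then append an error-control code. The XOR cipher is a toy stand-in for a real algorithm; the point is that an interceptor must undo every layer, in reverse order, to recover the plaintext.

```python
# Sketch of a layered sender/receiver pipeline:
# compress -> encrypt (toy XOR) -> append error-detection code.
import zlib, random

def xor_cipher(data: bytes, key: str) -> bytes:
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

def send(plaintext: bytes, key: str) -> bytes:
    body = xor_cipher(zlib.compress(plaintext), key)
    crc = zlib.crc32(body).to_bytes(4, "big")    # error-detection code
    return body + crc

def receive(frame: bytes, key: str) -> bytes:
    body, crc = frame[:-4], frame[-4:]
    assert zlib.crc32(body).to_bytes(4, "big") == crc  # check for errors
    return zlib.decompress(xor_cipher(body, key))

msg = b"the quick brown fox jumps over the lazy dog"
assert receive(send(msg, "shared key"), "shared key") == msg
```

An interceptor lacking any one piece of this stack -- the error-control scheme, the key, the algorithm, or the compression format -- sees only an opaque block of bits.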

   When an interceptor moves onto the lines that carry bulk
traffic, isolating the bits associated with a particular
communication of interest is itself quite difficult.(14) A
high-bandwidth line (e.g., a long-haul fiber-optic cable)
typically carries hundreds or thousands of different
communications; any given message may be broken into distinct
packets and intermingled with other packets from other
contemporaneously operating applications.(15) The traffic on
the line may be encrypted "in bulk" by the line provider, thus
providing an additional layer of protection against the
interceptor. Moreover, since a message traveling from point A
to point B may well be broken into packets that traverse
different physical paths en route, an interceptor at any given
point in between A and B may not even see all of the packets
pass by.

   Another factor inhibiting access to information is the use
of technologies that facilitate anonymous communications. For
the most part, intercepted communications are worthless if the
identity of the communicating parties is not known. In
telephony, call forwarding and pager callbacks from pay
telephones have sometimes frustrated the efforts of law
enforcement officials conducting wiretaps. In data
communications, so-called anonymous remailers can strip out
all identifying information from an Internet e-mail message
sent from person A to person B in such a way that person B
does not know the identity of person A. Some remailers even
support return communications from person B to person A
without the need for person B to know the identity of person
A.

   Access is made more difficult because an information
protector can switch communications from one medium to another
very easily without changing end-user equipment. Some forms of
media may be easily accessed by an interceptor (e.g.,
conventional radio), whereas other forms may be much more
challenging (e.g., fiber-optic cable, spread-spectrum radio).
The proliferation of different media that can interoperate
smoothly even at the device level will continue to complicate
the interceptor's attempts to gain access to communications.

   Finally, obtaining access also becomes more difficult as
the number of service providers increases (Box 2.1). In the
days when AT&T held a monopoly on voice communications and
criminal communications could generally be assumed to be
carried on AT&T-operated lines, law enforcement and national
security authorities needed only one point of contact with
whom to work. As the telecommunications industry becomes
increasingly heterogeneous, law enforcement authorities may
well be uncertain about what company to approach about
implementing a wiretap request.

----------

   (12) This section addresses technical factors that inhibit
access to information. But technical measures are only one
class of techniques that can be used to improve information
security. For example, statutory measures can help contribute
to information security. Laws that impose criminal penalties
for unauthorized access to computer systems have been used to
prosecute intruders. Such laws are intended to deter attacks
on information systems, and to the extent that individuals do
not exhibit such behavior, system security is enhanced.

   (13) Error control is a technique used both to detect
errors in transmission and sometimes to correct them as well.

   (14) This point is made independently in a report that came
to the attention of the committee as this report was going to
press. A staff study of the Permanent Select Committee on
Intelligence, House of Representatives, concluded that "the
ability to filter through the huge volumes of data and to
extract the information from the layers of formatting,
multiplexing, compression, and transmission protocols applied
to each message is the biggest challenge of the future,
[while] increasing amounts and sophistication of encryption
add another layer of complexity." *IC21: Intelligence Community
in the 21st Century*, p. 121.

   (15) Paul Haskell and David G. Messerschmitt, "In Favor of
an Enhanced Network Interface for Multimedia Services,"
submitted to *IEEE Multimedia Magazine*.

____________________________________________________________


      2.3.2 Factors Facilitating Access to Information


System or Product Design

   Unauthorized access to protected information can
inadvertently be facilitated by product or system features
that are intended to provide legitimate access but instead
create unintentional loopholes or weaknesses that can be
exploited by an interceptor. Such points of access that may be
deliberately incorporated into product or system designs
include the following:

   +    *Maintenance and monitoring ports*.(16) For example,
many telephone switches and computer systems have dial-in
ports that are intended to facilitate monitoring and remote
maintenance and repair by off-site technicians.

   +    *Master keys*. A product can have a single master key
that allows its possessor to decrypt all ciphertext produced
by the product.

   +    *Mechanisms for key escrow or key backup*. A third
party, for example, may store an extra copy of a private key
or a master key. Under appropriate circumstances, the third
party releases the key to the appropriate individual(s), who
is (are) then able to decrypt the ciphertext in question. This
subject is discussed at length in Chapter 5.

   +    *Weak encryption defaults*. A product capable of
providing very strong encryption may be designed in such a way
that users invoke those capabilities only infrequently. For
example, encryption on a secure telephone may be designed so
that the use of encryption depends on the user pressing a
button at the start of a telephone call. The requirement to
press a button to invoke encryption is an example of a weak
default, because the telephone could be designed so that
encryption is invoked automatically when a call is initiated;
when weak defaults are designed into systems, many users will
forget to press the button.

   Despite the good reasons for designing systems and products
with these various points of access (e.g., facilitating remote
access through maintenance ports to eliminate travel costs of
system engineers), any such point of access can be exploited
by unauthorized individuals as well.


Methods Facilitating Access to Information

   Surreptitious access to communications can also be gained
by methods such as the following:

   +    *Interception in the ether*. Many point-to-point
communications make use of a wireless (usually radio) link at
some point in the process. Since it is impossible to ensure
that a radio broadcast reaches only its intended receiver(s),
communications carried over wireless links -- such as those
involving cellular telephones and personal pagers -- are
vulnerable to interception by unauthorized parties.

   +    *Use of pen registers*. Telephone communications
involve both the content of a call and call-setup information
such as numbers called, originating number, time and length of
call, and so on. Setup information is often easily accessible,
some of it even to end users.(17)

   +    *Wiretapping*. To obtain the contents of a call
carried exclusively by nonwireless means, the information
carried on a circuit (actually, a replica of the information)
is sent to a monitoring station. A call can be wiretapped when
an eavesdropper picks up an extension on the same line, hooks
up a pair of alligator clips to the right set of terminals, or
obtains the cooperation of telephone company officials in
monitoring a given call at a chosen location.

   +    *Exploitation of related data*. A great deal of useful
information can be obtained by examining in detail a digital
stream that is associated with a given communication. For
example, people have developed communications protocol
analyzers that examine traffic as it flows by a given point
for passwords and other sensitive information.

   +    *Reverse engineering*. Decompilation or disassembly of
software can yield deep understanding of how that software
works. One implication is that any algorithm built into
software cannot be assumed to be secret for very long, since
disassembly of the software will inevitably reveal it to a
technically trained individual.

   +    *Cryptanalysis* (discussed in greater detail in
Appendix C). Cryptanalysis is the task of recovering the
plaintext corresponding to a given ciphertext without
knowledge of the decrypting key. Successful cryptanalysis can
be the result of:

        --  *Inadequately sized keys*. A product with
            encryption capabilities that implements a
            strong cryptographic algorithm with an
            inadequately sized key is vulnerable to a
            "brute-force" attack.(18) Box 2.2 provides
            more detail.

        --  *Weak encryption algorithms or poorly designed
            products*. Some encryption algorithms and
            products have weaknesses that, if known to an
            attacker, require the testing of only a small
            fraction of the keys that could in principle
            be the proper key.

   +    *Product penetration*. Like weak encryption, certain
design choices such as limits on the maximum size of a
password, the lack of a reasonable lower bound on the size of
a password, or use of a random-number generator that is not
truly random may lead to a product that presents a work factor
for an attacker that is much smaller than the theoretical
strength implied by the algorithm it uses.(19)

   +    *Monitoring of electronic emissions*. Most electronic
communications devices emit electromagnetic radiation that is
highly correlated with the information carried or displayed on
them. For example, the contents of an unshielded computer
display or terminal can in principle be read from a distance
(estimates range from tens of meters to hundreds of meters) by
equipment specially designed to do so. TEMPEST, a name coined
by a U.S. government program, refers to a class of techniques
to safeguard against monitoring of emissions.

   +    *Device penetration*. A software-controlled device can
be penetrated in a number of ways. For example, a virus may
infect it, making a clandestine change. A message or a file
can be sent to an unwary recipient who activates a hidden
program when the message is read or the file is opened; such
a program, once active, can record the keystrokes of the
person at the keyboard, scan the mass storage media for
sensitive data and transmit it, or make clandestine
alterations to stored data.

   +    *Infrastructure penetration*. The infrastructure used
to carry communications is often based on software-controlled
devices such as routers. Router software can be modified as
described above to copy and forward all (or selected) traffic
to an unauthorized interceptor.

   The last two techniques can be categorized as invasive,
because they alter the operating environment in order to
gather or modify information. In a network environment, the
most common mechanisms of invasive attacks are called viruses
and Trojan horses. A virus gains access to a system, hides
within that system, and replicates itself to infect other
systems. A Trojan horse exploits a weakness from within a
system. Either approach can result in intentional or
unintentional denial of services for the host system.(20)
Modern techniques that combine the two approaches to covertly
exfiltrate data from a system are becoming increasingly
powerful and difficult to detect.(21) Such attacks will gain
in popularity as networks become more highly interconnected.
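The brute-force key search mentioned above (and defined in note 18) can be sketched in Python against a toy cipher. The repeating-key XOR "cipher," the 16-bit key, and the known-plaintext check are illustrative assumptions only, not a real attack tool:

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR 'cipher' -- for illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def brute_force(ciphertext: bytes, known_prefix: bytes, key_bits: int) -> bytes:
    """Test every possible key until a trial decryption begins with
    known plaintext. The loop runs at most 2**key_bits times, so each
    added key bit doubles the worst-case work."""
    key_len = (key_bits + 7) // 8
    for k in range(2 ** key_bits):
        key = k.to_bytes(key_len, "big")
        if xor_encrypt(ciphertext, key).startswith(known_prefix):
            return key
    raise ValueError("key not found")

ciphertext = xor_encrypt(b"ATTACK AT DAWN", bytes([0x5A, 0x3C]))
recovered = brute_force(ciphertext, b"ATTACK", 16)  # at most 2**16 trials
assert recovered == bytes([0x5A, 0x3C])
```

A real cipher with a 56-bit or longer key makes the same loop infeasible; the point of the sketch is only the exponential growth of the search.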

----------

   (16) A port is a point of connection to a given information
system to which another party (another system, an individual)
can connect.

   (17) "Caller ID," a feature that identifies the number of
the calling party, makes use of call-setup information carried
on the circuit.

   (18) A brute-force attack against an encryption algorithm
is a computer-based test of all possible keys for that
algorithm undertaken in an effort to discover the key that
actually has been used. Hence, the difficulty and time to
complete such attacks increase markedly as the key length
grows (specifically, the time doubles for every bit added to
the key length).

   (19) Work factor is used in this report to mean a measure
of the difficulty of undertaking a brute-force test of all
possible keys against a given ciphertext (and known
algorithm). A 40-bit work factor means that a brute-force
attack must test at most 2^40 keys to be certain that the
corresponding plaintext message is retrieved. In the
literature, the term "work factor" is also used to mean the
ratio of work needed for brute-force cryptanalysis of an
encrypted message to the work needed to encrypt that message.
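The gap between a product's nominal key length and the work factor it actually presents can be computed directly. The character-set and password-length limits below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def effective_key_bits(charset_size: int, max_len: int) -> float:
    """log2 of the number of possible passwords of length 1..max_len
    drawn from a character set of the given size."""
    total = sum(charset_size ** n for n in range(1, max_len + 1))
    return math.log2(total)

# A product whose 56-bit key is derived from a password of at most
# 8 lowercase letters presents only about a 37.7-bit work factor.
print(round(effective_key_bits(26, 8), 1))
```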

   (20) On November 2, 1988, Robert T. Morris, Jr., released
a "worm" program that spread itself throughout the Internet
over the course of the next day. At trial, Morris maintained
that he had not intended to cause the effects that had
resulted, a belief held by many in the Internet community.
Morris was convicted on a felony count of unauthorized access.
See Peter G. Neumann, *Computer Related Risks*, Addison
Wesley, Reading, Mass., 1995, p. 133.

   (21) The popular World Wide Web provides an environment in
which an intruder can act to steal data. For example, an
industrial spy wishing to obtain data stored on the
information network of a large aerospace company can set up a
Web page containing information of interest to engineers at
the aerospace company (e.g., information on foreign aerospace
business contracts in the making), thereby making the page an
attractive site for those engineers to visit through the Web.
Once an engineer from the company has visited the spy's Web
page, a channel is set up by which the Web page could send
back a Trojan horse (TH) program for execution on the
workstation being used to look at the page. The TH could be
passed as part of any executable program (Java and Postscript
provide two such vehicles) that otherwise did useful things
but on the side collected data resident on that workstation
(and any other computers to which it might be connected). Once
the data was obtained, it could be sent back to the spy's Web
page during the same session, or e-mailed back, or sent during
the next session used to connect to that Web page.
Furthermore, because contacts with a Web page by design
provide the specific address from which the contact is coming,
the TH could be sent only to the aerospace company (and to no
one else), thus reducing the likelihood that anyone else would
stumble upon it. Furthermore, the Web page contact also
provides information about the workstation that is making the
contact, thus permitting a customized and specially debugged
TH to be sent to that workstation.

____________________________________________________________


               2.4 THE MARKET FOR CRYPTOGRAPHY


   Cryptography is a product as well as a technology. Products
offering cryptographic capabilities can be divided into two
general classes:

   +    *Security-specific or stand-alone* products that are
generally add-on items (often hardware, but sometimes
software) and often require that users perform an
operationally separate action to invoke the encryption
capabilities. Examples include an add-on hardware board that
encrypts messages or a program that accepts a plaintext file
as input and generates a ciphertext file as output.

   +    *Integrated* (often "general-purpose") products in
which cryptographic functions have been incorporated into some
software or hardware application package as part of its
overall functionality. An integrated product is designed to
provide a capability that is useful in its own right, as well
as encryption capabilities that a user may or may not use.
Examples include a modem with on-board encryption or a word
processor with an option for protecting (encrypting) files
with passwords.(22)

   In addition, an integrated product may provide sockets or
hooks to user-supplied modules or components that offer
additional cryptographic functionality. An example is a
software product that can call upon a user-supplied package
that performs certain types of file manipulation such as
encryption or file compression. Cryptographic sockets are
discussed in Chapter 7 as cryptographic applications
programming interfaces.

   A product with cryptographic capabilities can be designed
to provide data confidentiality, data integrity, and user
authentication in any combination; a given commercial
cryptographic product may implement functionality for any or
all of these capabilities. For example, a PC-Card may
integrate cryptographic functionality for secure
authentication and for encryption onto the same piece of
hardware, even though the user may choose to invoke these
functions independently. A groupware program for remote
collaboration may implement cryptography for confidentiality
(by encrypting messages sent between users) and cryptography
for data integrity and user authentication (by appending a
digital signature to all messages sent between users).
Further, this program may be implemented in a way that these
features can operate independently (either, both, or neither
may be operative at the same time).
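Such independently operable features can be sketched as follows. The keystream construction is a toy stand-in (an illustrative assumption, not a real cipher), used only to show how confidentiality and integrity can each be switched on or off:

```python
import hashlib
import hmac

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from iterated hashing -- illustration only."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def protect(message: bytes, conf_key=None, auth_key=None):
    """Apply confidentiality and/or integrity independently, mirroring
    a product whose two cryptographic features may each be invoked
    separately."""
    out = message
    if conf_key is not None:     # confidentiality: XOR with a keystream
        ks = keystream(conf_key, len(out))
        out = bytes(a ^ b for a, b in zip(out, ks))
    tag = None
    if auth_key is not None:     # integrity/authentication: HMAC tag
        tag = hmac.new(auth_key, out, hashlib.sha256).digest()
    return out, tag

# Either, both, or neither feature may be active on a given message.
ct, tag = protect(b"hello", conf_key=b"k1", auth_key=b"k2")
pt, _ = protect(ct, conf_key=b"k1")      # XOR keystream is self-inverse
assert pt == b"hello"
assert hmac.compare_digest(tag, hmac.new(b"k2", ct, hashlib.sha256).digest())
```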

   Because cryptography is usable only when it is incorporated
into a product, whether integrated or security-specific,
issues of supply and demand affect the use of cryptography.
The remainder of this section addresses both demand and supply
perspectives on the cryptography market.

----------

   (22) From a system design perspective, it is reasonable to
assert that word processing and database applications do not
have an intrinsic requirement for encryption capabilities and
that such capabilities could be better provided by the
operating system on which these applications operate. But as
a practical matter, operating systems often do not provide
such capabilities, and so vendors have significant incentives
to provide encryption capabilities that are useful to
customers who want better security.

____________________________________________________________


      2.4.1 The Demand Side of the Cryptography Market

   Chapter 1 discussed vulnerabilities that put the
information assets of businesses and individuals at risk. But
despite the presence of such risks, many organizations do not
undertake adequate information security efforts, whether those
efforts involve cryptography or any other tool. This section
explores some of the reasons for this behavior.


Lack of Security Awareness (and/or Need)

   Most people who use electronic communications behave as
though they regard their electronic communications as
confidential. Even though they may know in some sense that
their communications are vulnerable to compromise, they fail
to take precautions to prevent breaches in communications
security. Even criminals aware that they may be the subjects
of wiretaps have been overheard by law enforcement officials
to say, "This call is probably being wiretapped, but ... ,"
after which they go on to discuss incriminating topics.(23)

   The impetus for thinking seriously about security is
usually an event that is widely publicized and significant in
impact.(24) An example of responding to publicized problems is
the recent demand for encryption of cellular telephone
communications. In the past several years, the public has been
made aware of a number of instances in which traffic carried
over cellular telephones was monitored by unauthorized parties
(Appendix J). In addition, cellular telephone companies have
suffered enormous financial losses as the result of "cloning,"
an illegal practice in which the unencrypted ID numbers of
cellular telephones are recorded off the air and placed into
cloned units, thereby allowing the owner of the cloned unit to
masquerade as the legitimate user.(25) Even though many users
today are aware of such practices and have altered their
behavior somewhat (e.g., by avoiding discussion of sensitive
information over cellular telephone lines), more secure
systems such as GSM (the European standard for mobile
telephones) have gained only a minimal foothold in the U.S.
market.

   A second area in which people have become more sensitive to
the need for information security is in international
commerce. Many international business users are concerned that
their international business communications are being
monitored, and indeed such concerns motivate a considerable
amount of today's demand for secure communications.

   It is true that the content of the vast majority of
telephone communications in the United States (e.g., making a
dinner date, taking an ordinary business call) and data
communications (e.g., transferring a file from one computer to
another, sending an e-mail message) is simply not valuable
enough to attract the interest of most eavesdroppers.
Moreover, most communications links for point-to-point
communications in the United States are hard wired (e.g.,
fiber-optic cable) rather than wireless (e.g., microwave);
hardwired links are much more secure than wireless links.(26) In
some instances, compromises of information security do not
directly damage the interests of the persons involved. For
example, an individual whose credit card number is improperly
used by another party (who may have stolen his wallet or
eavesdropped on a conversation) is protected by a legal cap on
the liability for which he is responsible.

---------

   (23) A case in point is that the officers charged in the
Rodney King beating used their electronic communications
system as though it were a private telephone line, even though
they had been warned that all traffic over that system was
recorded. In 1992, Rodney King was beaten by members of the
Los Angeles Police Department. A number of transcripts of
police radio conversations describing the incident were
introduced as evidence at the trial. Had they been fully
cognizant at the moment of the fact that all conversations
were being recorded as a matter of department policy, the
police officers in question most likely would not have said
what they did. Personal communication, Sara Kiesler, Carnegie
Mellon University, 1993.

   (24) It is widely believed that only a few percent of
computer break-ins are detected. See for example, Jane Bird,
"Hunting Down the Hackers," *Management Today*, July, 1994, p.
64 (reports that 1% of attacks are detected); Bob Brewin,
"Info Warfare Goes on Attack," *Federal Computer Week*, Volume
9(31), October 23, 1995, p. 1 (reports 2% detection); and Gary
Anthes, "Hackers Try New Tacks", *ComputerWorld*, January 30,
1995, p. 12 (reports 5% detection).

   (25) See for example, Bryan Miller, "Web of Cellular Phone
Fraud Widens," *New York Times*, July 20, 1995, p. C-1; and
George James, "3 Men Accused of Stealing Cellular Phone ID
Numbers," *New York Times*, October 19, 1995, p. B-3.

____________________________________________________________


Other Barriers Influencing Demand for Cryptography

   Even when a user is aware that communications security is
threatened and wishes to take action to forestall the threat,
a number of practical considerations can affect the decision
to use cryptographic protection. These considerations include
the following:

   +    *Lack of critical mass*. A secure telephone is not of
much use if only one person has it. Ensuring that
communications are secure requires collective action -- some
critical mass of interoperable devices is necessary in order
to stimulate demand for secure communications. Such a
critical mass has not yet been achieved.

   +    *Uncertainties over government policy*. Policy often
has an impact on demand. A number of government policy
decisions on cryptography have introduced uncertainty, fear,
and doubt into the marketplace and have made it difficult for
potential users to plan for the future. Seeing the controversy
surrounding policy in this area, potential vendors are
reluctant to bring to market products that support security,
and potential users are reluctant to consider products for
security that may become obsolete in the future in an unstable
legal and regulatory environment.

   +    *Lack of a supporting infrastructure*. The mere
availability of devices is not necessarily sufficient. For
some applications such as secure interpersonal communications,
a national or international infrastructure for managing and
exchanging keys could be necessary. Without such an
infrastructure, encryption may remain a niche feature that is
usable only through ad hoc methods replicating some of the
functions that an infrastructure would provide and for which
demand would thus be limited. Section 2.5 describes some
infrastructure issues in greater detail.

   +    *High cost*. To date, hardware-based cryptographic
security has been relatively expensive, in part because of the
high cost of stand-alone products made in relatively small
numbers. A user that initially deploys a system without
security features and subsequently wants to add them can be
faced with a very high cost barrier, and consequently there is
a limited market for security add-on products.

   On the other hand, the marginal cost of implementing
cryptographic capabilities in software at the outset is
rapidly becoming a minor part of the overall cost, and so
cryptographic capabilities are likely to appear in all manner
and types of integrated software products where there might be
a need.

   +    *Reduced performance*. The implementation of
cryptographic functions often consumes computational resources
(e.g., time, memory). In some cases, excessive consumption of
resources makes encryption too slow or forces the user to
purchase additional memory. If encrypting the communications
link over which a conversation is carried delays that
conversation by more than a few tenths of a second, users may
well choose not to use the encryption capability.

   +    *A generally insecure environment*. A given network or
operating system may be so inherently insecure that the
addition of cryptographic capabilities would do little to
improve overall security. Moreover, retrofitting security
measures atop an inherently insecure system is generally
difficult.

   +    *Usability*. A product's usability is a critical
factor in its market acceptability. Products with encryption
capabilities that are available for use but are in fact unused
do not increase information security. Such products may be
purchased but not used for the encryption they provide because
such use is too inconvenient in practice, or they may not be
purchased at all because the capabilities they provide are not
aligned well with the needs of their users. In general, the
need to undertake even a modest amount of extra work or to
tolerate even a modest inconvenience for cryptographic
protection that is not directly related to the primary
function of the device is likely to discourage the use of such
protection.(27) When cryptographic features are well
integrated in a way that does not demand case-by-case user
intervention, i.e., when such capabilities can be invoked
transparently to the average user, demand may well increase.

   +    *Lack of independent certification or evaluation of
products*. Certification of a product's quality is often
sought by potential buyers who lack the technical expertise to
evaluate product quality or who are trying to support certain
required levels of security (e.g., as the result of bank
regulations). Many potential users are also unable to detect
failures in the operation of such products.(28) With one
exception discussed in Chapter 6, independent certification
for products with integrated encryption capabilities is not
available, leading to market uncertainty about such products.

   +    *Electronic commerce*. An environment in which secure
communications were an essential requirement would do much to
increase the demand for cryptographic security.(29) However,
the demand for secure communications is currently nascent.

   +    *Uncertainties arising from intellectual property
issues*. Many of the algorithms that are useful in
cryptography (especially public-key cryptography) are
protected by patents. Some vendors are deterred by the fear,
uncertainty, and doubt created by ongoing legal disputes
among patent holders. Moreover, even when a patent on a
particular algorithm is undisputed, many users may resist its
use because they do not wish to pay the royalties.(30)

   +    *Lack of interoperability and standards*. For
cryptographic devices to be useful, they must be
interoperable. In some instances, the implementation of
cryptography can affect the compatibility of systems that may
have interoperated even though they did not conform strictly
to interoperability standards. In other instances, the
specific cryptographic algorithm used is yet another function
that must be standardized in order for two products to
interoperate. Nevertheless, an algorithm is only one piece of
a cryptographic device, and so two devices that implement the
same cryptographic algorithm may still not interoperate.(31)
Only when two devices conform fully to a single
interoperability standard (e.g., a standard that would specify
how keys are to be exchanged, the formatting of the various
data streams, the algorithms to be used for encryption and
decryption, and so on) can they be expected to interoperate
seamlessly.

   An approach gaining favor among product developers is
protocol negotiation,(32) which calls for two devices or
products to mutually negotiate the protocol that they will use
to exchange information. For example, the calling device may
query the receiving device to determine the right protocol to
use. Such an approach frees a device from having to conform to
a single standard and also facilitates the upgrading of
standards in a backward-compatible manner.
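A minimal sketch of such a negotiation, assuming hypothetical protocol identifiers (not taken from any actual standard):

```python
def negotiate(caller_prefs, receiver_supported):
    """Return the first protocol on the caller's preference-ordered
    list that the receiver also supports, or None if there is no
    overlap."""
    for proto in caller_prefs:
        if proto in receiver_supported:
            return proto
    return None

# A newer caller lists its preferred (newest) protocol first, so an
# upgraded standard can be adopted without breaking older receivers.
caller = ["cipher-v3", "cipher-v2", "cipher-v1"]
receiver = {"cipher-v1", "cipher-v2"}
assert negotiate(caller, receiver) == "cipher-v2"
assert negotiate(["cipher-v4"], receiver) is None
```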

   +    *The heterogeneity of the communications
infrastructure*. Communications are ubiquitous, but they are
implemented through a patchwork of systems and technologies
and communications protocols rather than according to a single
integrated design. In some instances, they do not conform
completely to the standards that would enable full
interoperability. In other instances, interoperability is
achieved by intermediate conversion from one data format to
another. The result can be that transmission of encrypted data
across interfaces interferes with achieving connectivity among
disparate systems. Under these circumstances, users may be
faced with a choice of using unencrypted communications or not
being able to communicate with a particular other party at
all.(33)

----------

   (26) A major U.S. manufacturer reported to the committee
that in the late 1980s, it was alerted by the U.S. government
that its microwave communications were vulnerable. In
response, this manufacturer took steps to increase the
capacity of its terrestrial communication links, thereby
reducing its dependence on microwave communications. A similar
situation was faced by IBM in the 1970s. See William Broad,
"Evading the Soviet Ear at Glen Cove," *Science*, Volume
217(3), 1982, pp. 910-911.

   (27) For example, experience with current secure telephones
such as the STU-III suggests that users of such phones may be
tempted, because of the need to contact many people, to use
them in a nonsecure mode more often than not.

   (28) Even users who do buy security products may still be
unsatisfied with them. For example, in two consecutive surveys
in 1993 and 1994, a group of users reported spending more and
being less satisfied with the security products they were
buying. See Dave Powell, "Annual Infosecurity Industry
Survey," *Infosecurity News*, March/April, 1995, pp. 20-27.

   (29) AT&T plans to take a non-technological approach to
solving some of the security problems associated with retail
Internet commerce. AT&T has announced that it will insure its
credit-card customers against unauthorized charges, as long as
those customers were using AT&T's service to connect to the
Internet. This action was taken on the theory that the real
issue for consumers is the fear of unauthorized charges,
rather than fears that confidential data per se would be
compromised. See Thomas Weber, "AT&T Will Insure Its Card
Customers on Its Web Service," *Wall Street Journal*, February
7, 1996, p. B-5.

   (30) See for example, James Bennett, "The Key to Universal
Encryption," *Strategic Investment*, December 20, 1995, pp.
12-13.

   (31) Consider the Data Encryption Standard (DES) as an
example. DES is a symmetric encryption algorithm, first
published in 1975 by the U.S. government, that specifies a
unique and well-defined transformation when given a specific
56-bit key and a block of text, but the various details of
operation within which DES is implemented can lead to
incompatibilities with other systems that include DES, with
stand-alone devices incorporating DES, and even with
software-implemented DES.

   Specifically, how the information is prepared prior to
being encrypted (e.g., how it is blocked into chunks) and
after the encryption (how the encrypted data is modulated on
the communications line) will affect the interoperability of
communications devices that may both use DES. In addition, key
management may not be identical for DES-based devices
developed independently. DES-based systems for file encryption
generally require a user-supplied password to generate the
appropriate 56-bit DES key, but since the DES standard does
not specify how this aspect of key management is to be
performed, the same password used on two independently
developed DES-based systems may not result in the same 56-bit
key. For these and similar reasons, independently developed
DES-based systems cannot necessarily be expected to
interoperate.
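The key-derivation mismatch can be illustrated with two hypothetical vendors' password-to-key schemes. Both schemes are invented for illustration; neither comes from the DES standard, which is silent on this point:

```python
import hashlib

PASSWORD = b"correct horse"

def derive_key_vendor_a(password: bytes) -> bytes:
    """Hypothetical vendor A: hash the password, keep 7 bytes (56 bits)."""
    return hashlib.sha256(password).digest()[:7]

def derive_key_vendor_b(password: bytes) -> bytes:
    """Hypothetical vendor B: zero-pad or truncate the raw password
    to 7 bytes (56 bits)."""
    return (password + b"\x00" * 7)[:7]

key_a = derive_key_vendor_a(PASSWORD)
key_b = derive_key_vendor_b(PASSWORD)
assert key_b == b"correct"
assert key_a != key_b   # same password, incompatible 56-bit keys
```

Because the two products feed different 56-bit keys into the same algorithm, a file encrypted by one cannot be decrypted by the other even though both "use DES."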

   (32) Transmitting a digital bit stream requires that the
hardware carrying that stream be able to interpret it.
Interpretation means that regardless of the content of the
communications (e.g., voice, pictures), the hardware must know
what part of the bit stream represents information useful to
the ultimate receiver and what part represents information
useful to the carrier. A communications protocol is an
agreed-upon convention about how to interpret any given bit
stream and includes the specification of any encryption
algorithm that may be used as part of that protocol.

   (33) An analogous example is the fact that two Internet
users may find it very difficult to use e-mail to transport a
binary file between them, because the e-mail systems on either
end may well implement standards for handling binary files
differently, even though they may conform to all relevant
standards for carrying ASCII text.

____________________________________________________________


      2.4.2 The Supply Side of the Cryptography Market

   The supply of products with encryption capabilities is
inherently related to the demand for them. Cryptographic
products result from decisions made by potential vendors and
users as well as standards determined by industry and/or
government. Use depends on availability as well as other
important factors such as user motivation, relevant learning
curves, and other nontechnical issues. As a general rule, the
availability of products to users depends on decisions made by
vendors to build or not to build them, and all of the
considerations faced by vendors of all types of products are
relevant to products with encryption capabilities.

   In addition to user demand, vendors need to consider the
following issues before deciding to develop and market a
product with encryption capabilities:

   +    *Accessibility of the basic knowledge underlying
cryptography*. Given that various books, technical articles,
and government standards on the subject of cryptography have
been published widely over the past 20 years, the basic
knowledge needed to design and implement cryptographic systems
that can frustrate the best attempts of anyone (including
government intelligence agencies) to penetrate them is
available to government and nongovernment agencies and parties
both here and abroad. For example, because a complete
description of DES is available worldwide, it is relatively
easy for anyone to develop and implement an encryption system
that involves multiple uses of DES to achieve much stronger
security than that provided by DES alone.
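The structure of such multiple encryption can be sketched generically. The byte-wise XOR stand-in below is NOT DES and is far too weak to use; it illustrates only the encrypt-decrypt-encrypt (EDE) composition commonly used for triple application of a block cipher:

```python
def triple_encrypt(enc, dec, k1, k2, k3, block):
    """Encrypt-decrypt-encrypt (EDE) composition of a block cipher.
    With k1 == k2 == k3 it degenerates to a single encryption, which
    preserves interoperability with single-cipher peers."""
    return enc(k3, dec(k2, enc(k1, block)))

def triple_decrypt(enc, dec, k1, k2, k3, block):
    """Inverse of the EDE composition."""
    return dec(k1, enc(k2, dec(k3, block)))

# Toy stand-in cipher (byte-wise XOR with a one-byte key) -- NOT DES.
toy_e = lambda k, b: bytes(x ^ k for x in b)
toy_d = toy_e   # XOR is its own inverse

ct = triple_encrypt(toy_e, toy_d, 0x11, 0x22, 0x44, b"block")
assert triple_decrypt(toy_e, toy_d, 0x11, 0x22, 0x44, ct) == b"block"
# Equal keys collapse to one application of the underlying cipher.
assert triple_encrypt(toy_e, toy_d, 0x5A, 0x5A, 0x5A, b"block") == toy_e(0x5A, b"block")
```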

   +    *The skill to implement basic knowledge of
cryptography*. A product with encryption capabilities involves
much more than a cryptographic algorithm. An algorithm must be
implemented in a system, and many design decisions affect the
quality of a product even if its algorithm is mathematically
sound. Indeed, efforts by multiple parties to develop products
with encryption capabilities based on the same algorithm could
result in a variety of manufactured products with varying
levels of quality and resistance to attack.

   For example, although cryptographic protocols are not part
and parcel of a cryptographic algorithm per se, these
protocols specify how critical aspects of a product will
operate. Thus, weaknesses in cryptographic protocols -- such
as a key generation protocol specifying how to generate and
exchange a specific encryption key for a given message to be
passed between two parties or a key distribution protocol
specifying how keys are to be distributed to users of a given
product -- can compromise the confidentiality that a real product
actually provides, even though the cryptographic algorithm and
its implementation are flawless.(34)

   +    *The skill to integrate the cryptography into a usable
product*. Even a product that implements a strong
cryptographic algorithm in a competent manner is not valuable
if the product is unusable in other ways. For integrated
products with encryption capabilities, the noncryptographic
functions of the product are central, because the primary
purpose of an integrated product is to provide some useful
capability to the user (e.g., word processing, database
management, communications) that does not involve cryptography
per se; if cryptography interferes with this primary
functionality, it detracts from the product's value.

   In this area, U.S. software vendors and system integrators
have distinct strengths,(35) even though engineering talent
and cryptographic expertise are not limited to the United
States. For example, foreign vendors do not market integrated
products with encryption capabilities that are sold as
mass-market software, whereas many such U.S. products are
available.(36)

   +    *The cost of developing, maintaining, and upgrading an
economically viable product with encryption capabilities*. The
technical aspects of good encryption are increasingly well
understood. As a result, the incremental cost of designing a
software product so that it can provide cryptographic
functionality to end users is relatively small. As cost
barriers to the inclusion of cryptographic functionality are
reduced dramatically, the long-term likelihood increases that
most products that process digital information will include
some kinds of cryptographic functionality.

   +    *The suitability of hardware vs. software* as a medium
in which to implement a product with encryption capabilities.
The duplication and distribution costs for software are very
low compared to those for hardware, and yet, trade secrets
embedded in proprietary hardware are easier to keep than those
included in software. Moreover, software cryptographic
functions are more easily disabled.

   +    *Nonmarket considerations and export controls*.
Vendors may withhold or alter their products at government
request. For example, a well-documented instance is the fact
that AT&T voluntarily deferred the introduction of its 3600
Secure Telephone Unit (STU) at the behest of government (see
Appendix E on the history of current cryptography policy and
Chapter 6 on government influence). Export controls also
affect decisions to make products available even for domestic
use, as described in Chapter 4.

----------

   (34) An incident that demonstrates the importance of the
nonalgorithm aspects of a product is the failure of the
key-generation process for the Netscape Navigator Web browser
that was discovered in 1995; a faulty random number generation
used in the generation of keys would enable an intruder
exploiting this flaw to limit a brute-force search to a much
smaller number of keys than would generally be required by the
40-bit key length used in this product. See John Markoff,
"Security Flaw Is Discovered in Software Used in Shopping,"
*New York Times*, September 19, 1995, p. A1. A detailed
discussion of protocol failures can be found in Gustavus
Simmons, "Cryptanalysis and Protocol Failures,"
*Communications of the ACM*, Volume 37(11), 1994, pp. 56-65.
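The effect of such a flaw can be sketched as follows. The seed source, seed size, and key-generation scheme below are hypothetical, not Netscape's actual code; the point is only that an attacker searches seeds rather than keys:

```python
import random

def weak_keygen(seed: int) -> bytes:
    """Flawed key generation: the nominal 40-bit key depends only on
    a low-entropy seed that an attacker can enumerate."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(5))   # 5 bytes = 40 bits

victim_seed = 40321            # e.g., a value with ~16 bits of entropy
key = weak_keygen(victim_seed)

# The attacker tests at most 2**16 seeds instead of 2**40 keys.
recovered_seed = next(s for s in range(2 ** 16) if weak_keygen(s) == key)
assert weak_keygen(recovered_seed) == key
```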

   (35) Computer Science and Telecommunications Board (CSTB),
National Research Council, *Keeping the U.S. Computer Industry
Competitive: Systems Integration*, National Academy Press,
Washington, D.C., 1992.

   (36) For example, the Department of Commerce and the
National Security Agency found no general-purpose software
products with encryption capability from non-U.S.
manufacturers. See Department of Commerce and National
Security Agency, *A Study of the International Market for
Computer Software with Encryption*, January 11, 1996, p.
III-9.

____________________________________________________________


    2.5 INFRASTRUCTURE FOR WIDESPREAD USE OF CRYPTOGRAPHY


   The widespread use of cryptography requires a support
infrastructure that can service organizational or individual
user needs with regard to cryptographic keys.


             2.5.1 Key Management Infrastructure

   In general, to enable use of cryptography across an
enterprise, there must be a mechanism that:

   +    Periodically supplies all participating locations with
keys (typically designated for use during a given calendar or
time period -- the crypto-period) for either stored materials
or communications; or

   +    Permits any given location to generate keys for itself
as needed (e.g., to protect stored files); or

   +    Can securely generate and transmit keys among
communicating parties (e.g., for data transmissions, telephone
conversations).

   In the most general case, any given location will have to
perform all three functions. With symmetric systems, the
movement of keys from place to place obviously must be done
securely and with a level of protection adequate to counter
the threats of concern to the using parties. Whatever the
distribution system, it clearly must protect the keys with
appropriate safeguards and must be prepared to identify and
authenticate the source. The overall task of securely assuring
availability of keys for symmetric applications is often
called key management.
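By way of illustration only, the second function above -- local generation of keys designated for a given crypto-period -- might be sketched as follows; the 128-bit key length, the monthly crypto-period, and the in-memory key store are all hypothetical choices made for the example:

```python
import secrets
from datetime import date

def generate_period_key(key_bytes: int = 16) -> bytes:
    """Generate a fresh random symmetric key (128 bits here)."""
    return secrets.token_bytes(key_bytes)

# Hypothetical key store: one key per crypto-period (per month here).
key_store = {}

def key_for_period(period: str) -> bytes:
    """Return the key designated for the given crypto-period,
    generating it on first use."""
    if period not in key_store:
        key_store[period] = generate_period_key()
    return key_store[period]

current_period = date.today().strftime("%Y-%m")  # e.g. "1996-05"
k = key_for_period(current_period)
assert k == key_for_period(current_period)  # same period, same key
```

A real system would, in addition, protect the stored keys and destroy them securely at the end of each crypto-period.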

   If all secure communications take place within the same
corporation or among locations under a common line of
authority, key management is an internal or possibly a joint
obligation. For parties that communicate occasionally or
across organizational boundaries, mutual arrangements must be
formulated for managing keys. One possibility might be a
separate trusted entity whose line of business could be to
supply keys of specified length and format, on demand and for
a fee.

   With asymmetric systems, the private keys are usually
self-generated, but they may also be generated by a central
source, such as a corporate security office. In all cases,
however, private keys must be guarded with the highest levels
of security, just as the secret keys of symmetric systems must
be. Although public keys need not be kept secret, their
integrity and their association with a given user are
extremely important and should also be supported with
extremely robust measures.

   The costs of a key management infrastructure for national
use are not known at this time. One benchmark figure is that
the cost of the Defense Department infrastructure needed to
generate and distribute keys for approximately 320,000 STU-III
telephone users is somewhere in the range of $10 million to
$13 million per year.(37)

----------

   (37) William Crowell, deputy director, National Security
Agency, personal communication, April 1995.

____________________________________________________________


              2.5.2 Certificate Infrastructures

   The association between key information (such as the name
of a person and the related public key) and an individual or
organization is an extremely important aspect of a
cryptographic system: without a trustworthy association, one
person could impersonate another. To guard against
impersonation, two general types of solutions have emerged: an
organization-centric approach consisting of certificate
authorities and a user-centric approach consisting of a web of
trust.

   A certificate authority serves to validate information that
is associated with a known individual or organization.
Certificate authorities can exist within a single
organization, across multiple related organizations, or across
society in general. Any number of certificate authorities can
coexist, and they may or may not have agreements for
cross-certification, whereby if one authority certifies a
given person, then another authority will accept that
certification within its own structure. Certificate authority
hierarchies are defined in the Internet RFCs 1421-1424, the
X.509 standard, and other emerging commercial standards, such
as that proposed by MasterCard/Visa. A number of private
certificate authorities, such as VeriSign, have also begun
operation to serve secure mass-market software products, such
as the Netscape Navigator Web browser.
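By way of illustration only, the binding that a certificate authority vouches for -- a name, its associated public key, and a verifiable signature over the pair -- can be sketched as follows. To keep the example self-contained, a keyed hash (HMAC) under a secret held by the authority stands in for the public-key signature a real authority would use; all names and key values are hypothetical:

```python
import hmac, hashlib

# Illustrative stand-in for the authority's signing key.
CA_SECRET = b"ca-signing-secret (hypothetical)"

def issue_certificate(name: str, public_key: bytes) -> dict:
    """Bind a subject's name to its public key and sign the binding."""
    binding = name.encode() + b"|" + public_key
    signature = hmac.new(CA_SECRET, binding, hashlib.sha256).hexdigest()
    return {"name": name, "public_key": public_key,
            "signature": signature}

def verify_certificate(cert: dict) -> bool:
    """Check that this authority certified the name/key binding."""
    binding = cert["name"].encode() + b"|" + cert["public_key"]
    expected = hmac.new(CA_SECRET, binding, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("Alice", b"alice-public-key-bytes")
assert verify_certificate(cert)
cert["public_key"] = b"mallory-public-key-bytes"  # tampered binding
assert not verify_certificate(cert)
```

The point of the sketch is that any alteration of the certified name or key invalidates the signature, so impersonation requires subverting the authority itself.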

   Among personal acquaintances, validation of public keys
be passed along from person to person or organization to
organization, thus creating a web of trust in which the entire
ensemble is considered to be trusted based on many individual
instances of trust. Such a chain of trust can be established
between immediate parties, or from one party to a second to
establish the credentials of a third. This approach has been
made popular by the Pretty Good Privacy (PGP) software
product; all users maintain their own "key-ring," which holds
the public keys of everyone with whom they want to
communicate.
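The key-ring mechanism can be sketched in the same illustrative spirit: each user's ring lists the parties whose keys he or she has personally validated, and trust in a distant party amounts to finding a chain of such validations. The users and rings below are hypothetical:

```python
# Hypothetical key rings: each user maps to the set of users whose
# public keys he or she has personally validated.
key_rings = {
    "alice": {"bob"},      # Alice has validated Bob's key
    "bob":   {"carol"},    # Bob has validated Carol's key
    "carol": set(),
    "dave":  set(),        # nobody in this web vouches for Dave
}

def chain_of_trust(start: str, target: str) -> bool:
    """Is there a chain of individual validations from start to target?"""
    seen, frontier = set(), [start]
    while frontier:
        user = frontier.pop()
        if user == target:
            return True
        if user in seen:
            continue
        seen.add(user)
        frontier.extend(key_rings.get(user, ()))
    return False

assert chain_of_trust("alice", "carol")     # alice -> bob -> carol
assert not chain_of_trust("alice", "dave")  # no chain exists
```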

   Notably, both the certificate authority approach and the
web of trust approach replicate the
pattern of trust that already exists among participating
parties in societal and business activities. In a sense, the
certificate infrastructure for cryptography simply formalizes
and makes explicit what society and its institutions are
already accustomed to.

   At some point, banks, corporations, and other organizations
already generally trusted by society will start to issue
certificates. At that time, individuals especially may begin
to feel more comfortable about the cryptographic undergirding
of society's electronic infrastructure, at which point the
webs of trust can be expected to evolve according to
individual choices and market forces. However, it should be
noted that different certificates will be used for different
functions, and it is unlikely that a single universal
certificate infrastructure will satisfy all societal and
business needs. For example, because an infrastructure
designed to support electronic commerce and banking may do no
more than identify valid purchasers, it may not be useful for
securing interpersonal communication or for corporate access
control.

   Certificate authorities already exist within some
businesses, especially those that have moved vigorously into
an electronic way of life. Generally, there is no sense of a
need for a legal framework to establish relationships among
organizations, each of which operates its own certificate
function. Arrangements exist for them to cross-certify one
another; in general, the individual(s) authorizing the
arrangement will be a senior officer of the corporation, and
the decision will be based on the existence of other legal
agreements already in place, notably, contracts that define
the relationships and obligations among organizations.

   For the general business world in which any individual or
organization wishes to conduct a transaction with any other
individual or organization, such as the sale of a house, a
formal certificate infrastructure has yet to be created; no
such infrastructure exists even to support a digital signature
application within government. Hence, it remains to be seen
how, in the general case, individuals and organizations will
make the transition to an electronic society.

   Certificate authorities currently operate within the
framework of contractual law. That is, if some problem arises
as the result of improper actions on the part of the
certification authority, its subscribers would have to pursue
a civil complaint. As certificate authorities grow in size and
service a greater part of society, it will probably be
necessary to regulate their actions under law, much like those
of any major societal institution.(38) It is interesting to
observe that the legal and operational environment that will
have to exist for certificate organizations involves the same
set of issues that are pertinent to escrow organizations (as
discussed in Chapter 5).

----------

   (38) Shimshon Berkovits et al., *Public Key Infrastructure
Study: Final Report*, National Institute of Standards and
Technology, Gaithersburg, Maryland, April 1994. Performed
under contract to MITRE, this study is summarized in Appendix
H.

____________________________________________________________


                          2.6 RECAP


   Cryptography provides important capabilities that can help
deal with the vulnerabilities of electronic information.
Cryptography can help to assure the integrity of data, to
authenticate the identity of specific parties, to prevent
individuals from plausibly denying that they have signed
something, and to preserve the confidentiality of information
that may have improperly come into the possession of
unauthorized parties. At the same time, cryptography is not a
silver bullet, and many technical and human factors other than
cryptography can improve or detract from information security.
In order to preserve information security, attention must be
given to all of these factors. Moreover, people can use
cryptography only to the extent that it is incorporated into
real products and systems; unimplemented cryptographic
algorithms cannot contribute to information security. Many
factors other than raw mathematical knowledge contribute to
the supply of and demand for products with cryptographic
functionality. Most importantly, the following aspects
influence the demand for cryptographic functions in products:

   +    Critical mass in the marketplace,

   +    Government policy,

   +    Supporting infrastructure,

   +    Cost,

   +    Performance,

   +    Overall security environment,

   +    Usability,

   +    Quality certification and evaluation, and

   +    Interoperability standards.

Finally, any large-scale use of cryptography, with or without
key escrow (discussed later in Chapter 5), depends on the
existence of a substantial supporting infrastructure, the
deployment of which raises a different set of problems and
issues.

____________________________________________________________

  BOX 2.1 The Evolution of the Telecommunications Industry

   Prior to 1984, the U.S. telecommunications industry was
dominated by one primary player -- AT&T. An elaborate
regulatory structure had evolved in the preceding decades to
govern what had become an essential national service on which
private citizens, government, and business had come to rely.

   By contrast, the watchword in telecommunications a mere
decade later has become competition. AT&T is still a major
player in the field, but the regional Bell operating companies
(RBOCs), separated from AT&T as part of the divestiture
decision of 1984, operate entirely independently, providing
local services. Indeed, the current mood in Congress toward
deregulation is already causing increasingly active
competition and confrontation among all of the players
involved, including cable TV companies, cellular and mobile
telephone companies, the long-distance telecommunications
companies (AT&T, MCI, Sprint, and hundreds of others), the
RBOCs and other local exchange providers, TV and radio
broadcast companies, entertainment companies, and satellite
communications companies. Today, all of these players compete
for a share of the telecommunications pie in the same
geographic area; even railroads and gas companies (which own
geographic rights of way along which transmission lines can be
laid) and power companies (which have wires going to every
house) have dreams of profiting from the telecommunications
boom. The playing field is further complicated by the practice
of reselling -- institutions often buy telecommunications
services from "primary" providers in bulk to serve their own
needs and resell the excess to other customers.

   In short, today's telecommunications industry is highly
heterogeneous and widely deployed with multiple public and
private service providers, and will become more so in the
future.

____________________________________________________________

       BOX 2.2 Fundamentals of Cryptographic Strength

   Cryptographic strength depends on two factors: the size of
the key, and the mathematical structure of the algorithm
itself. For well-designed symmetric cryptographic systems,
"brute-force" exhaustive search -- trying all possible keys
with a given decryption algorithm until the (meaningful)
plaintext appears -- is the best publicly known cryptanalytic
method. For such systems the work factor (i.e., the time to
cryptanalyze) grows exponentially with key size. Hence, with
a sufficiently long key, even an eavesdropper with very
extensive computing resources would have to take a very long
time (longer than the age of the universe) to test all
possible combinations. Adding one binary digit (bit) to the
length of a key doubles the length of time it takes to
undertake a brute-force attack while adding only a very small
increment to the time it takes to encrypt the plaintext.

   How long is a "long" key? To decipher by brute force a
message encrypted with a 40-bit key requires 2^40
(approximately 10^12) tests. If each test takes 10^-6 seconds
to conduct, 1 million seconds of testing time on a single
computer are required to conduct a brute-force attack, or
about 11.5 days. A 56-bit key increases this time by a factor
of 2^16, or 65,536; under the same assumptions, a brute-force
attack on a message encrypted with a 56-bit key would take
over 2,000 years.
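These figures are easily checked by direct computation; the small discrepancy in the first figure comes from the text's use of 10^12 as an approximation to 2^40, which is closer to 1.1 x 10^12:

```python
# Brute-force search times at 10^-6 seconds per test, as in the text.
SECONDS_PER_TEST = 1e-6
SECONDS_PER_DAY = 86_400

days_40 = (2**40 * SECONDS_PER_TEST) / SECONDS_PER_DAY
years_56 = (2**56 * SECONDS_PER_TEST) / (SECONDS_PER_DAY * 365)

assert 11 < days_40 < 13         # ~12.7 days; the 10^12 approximation gives ~11.5
assert years_56 > 2000           # "over 2,000 years"
assert 2**56 // 2**40 == 65_536  # the factor of 2^16 cited above
```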

   Two important considerations mitigate the bleakness of this
conclusion from the perspective of the interceptor. One is
that computers can be expected to grow more powerful over
time. Speed increases in the underlying silicon technology
have exhibited a predictable pattern for the past 50 years --
computational speed doubles every 18 months (Moore's law),
equivalent to increasing by a factor of 10 every 5 years.
Thus, if a single test takes 10^-6 seconds today, in 15 years,
it can be expected to take 10^-9 seconds. Additional speedup
is possible using parallel processing. Some supercomputers use
tens of thousands of microprocessors in parallel, and
cryptanalytic problems are particularly well suited to
parallel processing. Even 1,000 processors working in
parallel, each using the underlying silicon technology of 15
years hence, would be able to decrypt a single 56-bit
encrypted message in 18 hours.
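This estimate, too, can be checked directly; the computation below yields roughly 20 hours, of the same order as the 18-hour figure in the text:

```python
# Parallel search over a 56-bit keyspace: 10^-9 seconds per test
# (the projected speed 15 years hence), 1,000 processors.
tests = 2**56
seconds = tests * 1e-9 / 1000
hours = seconds / 3600
assert 15 < hours < 25  # ~20 hours; the text rounds to "18 hours"
```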

   As for the exploitation of alternatives to brute-force
search, all known asymmetric (i.e., public-key) cryptographic
systems allow shortcuts to exhaustive search. Because more
information is public in such systems, it is also likely that
shortcut attacks will exist for any new systems invented.
Shortcut attacks also exist for poorly designed symmetric
systems. Newly developed shortcut attacks constitute
unforeseen breakthroughs, and so by their very nature
introduce an unpredictable "wild card" into the effort to set
a reasonable key size. Because such attacks are applicable
primarily to public-key systems, larger key sizes and larger
safety margins are needed for such systems than for symmetric
cryptographic systems. For example, factoring a 512-bit number
by exhaustive search would take 2^256 tests (since at least
one factor must be less than 2^256); known shortcut attacks
would allow such numbers to be factored in approximately 2^65
operations, a number on the order of that required to
undertake a brute-force exhaustive search of a message
encrypted with a 64-bit symmetric cryptographic system. While
symmetric 64-bit systems are considered relatively safe, fear
of future breakthroughs in cryptanalyzing public-key systems
has led many cryptographers to suggest a minimum key size of
1,024 bits for public-key systems, thereby providing in key
length a factor-of-two safety margin over the safety afforded
by 512-bit keys.
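The relationships among these work factors can be checked by direct computation (the 2^65 figure for shortcut attacks is the text's mid-1990s estimate):

```python
from math import log2

# Work factors for a 512-bit public-key modulus, as cited in the text.
trial_division = 2**256  # exhaustive search: some factor is < 2^256
shortcut = 2**65         # known shortcut (factoring) attacks
symmetric_64 = 2**64     # brute force on a 64-bit symmetric key

assert shortcut / symmetric_64 == 2.0  # "on the order of" a 64-bit search
assert log2(trial_division) - log2(shortcut) == 191  # shortcut saving
```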

   More discussion of this topic can be found in Appendix C.

[End Chapter 2]

____________________________________________________________






