Note: for index of full report see: http://jya.com/nrcindex.htm

---------

[Head note all pages: May 30, 1996, Prepublication Copy
Subject to Further Editorial Correction]


                          Part III


        Policy Options, Findings and Recommendations


   Part III consists of two chapters. Chapter 7 considers a
wide range of policy options, ranging in scope and scale from
large to small. Not every item described in Chapter 7 has been
deemed worthy for adoption by the committee, but the committee
hopes to broaden the public understanding of cryptography
policy by discussing ideas that at least have the support of
respectable and responsible elements of the various
stakeholding communities.

   Chapter 8 is a synthesizing chapter that brings together
threads of the previous seven chapters and presents the
committee's findings and recommendations.

____________________________________________________________


                              7

                Policy Options for the Future


   Current national cryptography policy defines only one point
in the space of possible policy options. A major difficulty in
the public debate over cryptography policy has been incomplete
explanation of why the government has rejected certain policy
options. Chapter 7 explores a number of possible alternatives
to current national cryptography policy, selected by the
committee either because they address an important dimension
of national cryptography policy or because they have been
raised by a particular set of stakeholders. Although in the
committee's judgment these alternatives deserve analysis, it
does not follow that they necessarily deserve consideration
for adoption. The committee's judgments about appropriate
policy options are discussed in Chapter 8.


         7.1 EXPORT CONTROL OPTIONS FOR CRYPTOGRAPHY


         7.1.1 Dimensions of Choice for Controlling
                 the Export of Cryptography

   An export control regime -- a set of laws and regulations
governing what may or may not be exported under any specified
set of circumstances -- has many dimensions that can be
considered independently. These dimensions include:

   +    *The type of export license granted*. Three types of
export licenses are available:

        -- A general license, under which export of an item
        does not in general require prior government approval
        but nonetheless is tracked under an export
        declaration;

        -- A special license, under which prior government
        approval is required but which allows multiple and
        continuing transactions under one license validation;
        and

        -- An individual license, under which prior government
        approval is required for each and every transaction.

As a general rule, only individual licenses are granted for
the export of items on the U.S. Munitions List, which includes
"strong" cryptography.(1)

   +    *The strength of a product's cryptographic
capabilities*. Current policy recognizes the difference
between RC2/RC4 algorithms using 40-bit keys and other types
of cryptography, and places fewer and less severe restrictions
on the former.

   +    *The default encryption settings on the delivered
product*. Encryption can be tacitly discouraged, but not
forbidden, by the use of appropriate settings.(2)

   +    *The type of product*. Many different types of
products can incorporate encryption capabilities. Products can
be distinguished by medium (e.g., hardware vs. software)
and/or intended function (e.g., computer vs. communications).

   +    *The extent and nature of features that allow
exceptional access*. The Administration has suggested that it
would permit the export of encryption software with key
lengths of 64 bits or less if the keys were "properly
escrowed."(3) Thus, inclusion in a product of a feature for
exceptional access could be made one condition for allowing
the export of that product. In addition, the existence of
specific institutional arrangements (e.g., which specific
parties would hold the information needed to implement
exceptional access) might be made a condition for the export
of these products.

   +    *The ultimate destination or intended use of the
delivered product*. U.S. export controls have long
distinguished between exports to "friendly" and "hostile"
nations. In addition, licenses have been granted for the sale
of certain controlled products only when a particular benign
use (e.g., financial transactions) could be certified. A
related consideration is the extent to which nations cooperate
with respect to re-export of a controlled product and/or
export of their own products. For example, CoCom member
nations(4) in principle agreed to joint controls on the export
of certain products to the Eastern bloc; as a result, certain
products could be exported to CoCom member nations much more
easily than to other nations.

   At present, there are few clear guidelines that enable
vendors to design a product that will have a high degree of
assurance of being exportable (Chapters 4 and 6). Table 7.1
describes various mechanisms that might be used to manage the
export of products with encryption capabilities.

   The remainder of Section 7.1 describes a number of options
for controlling the export of cryptography, ranging from the
sweeping to the detailed.

----------

   (1)  However, as noted in Chapter 4, the current export
control regime for cryptography involves a number of
categorical exemptions as well as some uncodified
"in-practice" exemptions.

   (2)  Software, and even software-driven devices, commonly
have operational parameters that can be selected or set by a
user. An example is the fax machine that allows many user
choices to be selected by keyboard actions. The parameters
chosen by a manufacturer before it ships a product are
referred to as the "defaults" or "default condition." Users
are generally able to alter such parameters at will.

   (3)  At the time of this writing, the precise definition of
"properly escrowed" is under debate and review in the
Administration. The most recent language on this definition as
of December 1995 is provided in Chapter 5.

   (4)  CoCom refers to the Coordinating Committee, a group of
Western nations (and Japan) that agreed to a common set of
export control practices during the Cold War to control the
export of militarily useful technologies to Eastern bloc
nations. CoCom was disbanded in March 1994, and a successor
regime known as the New Forum is being negotiated as this
report is being written.

____________________________________________________________


                7.1.2 Complete Elimination of
               Export Controls on Cryptography

   The complete elimination of export controls (both the USML
and the Commerce Control List controls) on cryptography is a
proposal that goes beyond most made to date, although
certainly such a position has advocates. If export controls on
cryptography were completely eliminated, it is possible that
within a short time, most information technology products
exported from the United States would have encryption
capabilities. It would be difficult for the U.S. government to
influence the capabilities of these products, or even to
monitor their deployment and use worldwide, because numerous
vendors would most probably be involved.

   Note, however, that the simple elimination of U.S. export
controls on cryptography does not address the fact that other
nations may have import controls and/or restrictions on the
use of cryptography internally. Furthermore, it takes time to
incorporate products into existing infrastructures, and slow
market growth may encourage some vendors to take their time in
developing new products. Thus, simply eliminating U.S. export
controls on cryptography would not ensure markets abroad for
U.S. products with encryption capabilities; indeed, the
elimination of U.S. export controls could in itself stimulate
foreign nations to impose import controls more stringently.
Appendix G contains more discussion of these issues.

   The worldwide removal of all controls on the export,
import, and use of products with encryption capabilities would
likely result in greater standardization of encryption
techniques. Standardization brought about in this manner would
result in:

   +    Higher degrees of international interoperability of
these products;

   +    Broader use, or at least more rapid spread, of
encryption capabilities as the result of the strong
distribution capabilities of U.S. firms;

   +    Higher levels of confidentiality, as a result of
greater ease in adopting more powerful algorithms and longer
keys as standards; and

   +    Greater use of cryptography by hostile, criminal, and
unfriendly parties as they, too, begin to use commercial
products with strong encryption capabilities.

   On the other hand, rapid, large-scale standardization would
be unlikely unless a few integrated software products with
encryption capabilities were able to achieve worldwide usage
very quickly. Consider, for example, that although there are
no restrictions on domestic use of cryptography in the United
States, interoperability is still difficult, in many cases
owing to variability in the systems in which the cryptography
is embedded. Likewise, many algorithms stronger than DES are
well known, and there are no restrictions in place on the
domestic use of such algorithms, and yet only DES even
remotely approaches common usage (and not all DES-based
applications are interoperable).

   For reasons well articulated by the national security and
law enforcement communities (see Chapter 3) and accepted by
the committee, the complete elimination of export controls on
products with encryption capabilities does not seem reasonable
in the short term. Whether export controls will remain
feasible and efficacious in the long term has yet to be seen,
although clearly, maintaining even their current level of
effectiveness will become increasingly difficult.


         7.1.3 Transfer of All Cryptography Products
                to the Commerce Control List

   As discussed in Chapter 4, the Commerce Control List (CCL)
complements the U.S. Munitions List (USML) in controlling the
export of cryptography. (Box 4.2 in Chapter 4 describes the
primary difference between the USML and the CCL.) In 1994,
Representative Maria Cantwell (D-Washington) introduced
legislation to transfer all mass-market software products
involving cryptographic functions to the CCL. Although this
legislation never passed, it resulted in the promise and
subsequent delivery of an executive branch report on the
international market for computer software with encryption.(5)

   The Cantwell bill was strongly supported by the software
industry because of the liberal consideration afforded
products controlled for export by the CCL. Many of the bill's
advocates believed that a transfer of jurisdiction to the
Commerce Department would reflect an explicit recognition of
cryptography as a commercial technology that should be
administered under a dual-use export control regime. They
argued that, compared to the USML, the CCL is a more balanced
regime that still has considerable effectiveness in limiting
exports to target destinations and end users.

   On the other hand, national security officials regard the
broad authorities of the Arms Export Control Act (AECA) as
essential to the effective control of encryption exports. The
AECA provides authority for case-by-case regulation of exports
of cryptography to all destinations, based on national
security considerations. In particular, licensing decisions
are not governed by factors such as the country of
destination, end users, end uses, or the existence of
bilateral or multilateral agreements that often limit the
range of discretionary action possible in controlling exports
pursuant to the Export Administration Act. Further, the
national security provisions of the AECA provide a basis for
classifying the specific rationale for any particular export
licensing decision made under its authority, thus protecting
what may be very sensitive information about the particular
circumstances surrounding that decision.

   Although sympathetic to the Cantwell bill's underlying
rationale, the committee believes that the Cantwell bill does
not address the basic dilemma of cryptography policy. As
acknowledged by some of the bill's supporters, transfer of a
product's jurisdiction to the CCL does not mean automatic
decontrol of the product, and national security authorities
could still have considerable input into how exports are
actually licensed. In general, the committee believes that the
idea of split jurisdiction, in which some types of
cryptography are controlled under the CCL and others under the
USML, makes considerable sense given the various national
security implications of widespread use of encryption.
However, where the split should be made is a matter of
discussion; the committee expresses its own judgments on this
point in Chapter 8.

----------

   (5)  U.S. Department of Commerce and National Security
Agency, *A Study of the International Market for Computer
Software with Encryption*, prepared for the Interagency
Working Group on Encryption and Telecommunications Policy,
undated (released on January 11, 1996, by the U.S. Department
of Commerce, Office of the Secretary).

____________________________________________________________


                 7.1.4 End-use Certification

   Explicitly exempted under the current International Traffic
in Arms Regulations (ITAR) is the export of cryptography for
ensuring the confidentiality of financial transactions,
specifically for cryptographic equipment and software that are
"specially designed, developed or modified for use in machines
for banking or money transactions, and restricted to use only
in such transactions."(6) In addition, according to senior
National Security Agency (NSA) officials, cryptographic
systems, equipment, and software are in general freely
exportable for use by U.S.-controlled foreign companies and to
banking and financial institutions for purposes other than
financial transactions, although NSA regards these approvals
as part of the case-by-case review associated with equipment
and products that do not enjoy an explicit exemption in the
ITAR.

   In principle, the ITAR could explicitly exempt products
with encryption capabilities for use by foreign subsidiaries
of U.S. companies, foreign companies that are U.S.-controlled,
and banking and financial institutions. Explicit "vertical"
exemptions for these categories could do much to alleviate
confusion among users, many of whom are currently uncertain
about what cryptographic protection they may be able to use in
their international communications, and could enable vendors
to make better informed judgments about the size of a given
market.

   Specific vertical exemptions could also be made for
different industries (e.g., health care or manufacturing) and
perhaps for large foreign-owned companies that would be both
the largest potential customers and the parties most likely to
be responsible corporate citizens. Diversion to other uses of
products with encryption capabilities sold to these companies
could be inhibited by explicit contractual language binding
the recipient to abide by certain terms, which any vendor
would be required to impose as a condition of sale to a
foreign company, as is the case today under USML procedures
under the ITAR. Enforcement of end-use restrictions is
discussed in
Chapter 4.

----------

   (6)  International Traffic in Arms Regulations, Section
121.1, Category XIII (b)(1)(ii).

____________________________________________________________


        7.1.5 Nation-by-Nation Relaxation of Controls
     and Harmonization of U.S. Export Control Policy on
               Cryptography with Export/Import
                  Policies of Other Nations

   The United States could give liberal export consideration
to products with encryption capabilities intended for sale to
recipients in a select set of nations;(7) exports to nations
outside this set would be restricted. Nations in the select
set would be expected to have a more or less uniform set of
regulations to control the export of cryptography, resulting
in a more level playing field for U.S. vendors. In addition,
agreements would be needed to control the re-export of
products with encryption capabilities outside this set of
nations.

   Nation-by-nation relaxation of controls is consistent with
the fact that different countries generally receive different
treatment under the U.S. export control regime for military
hardware. For example, exports of U.S. military hardware have
been forbidden to some countries because they were terrorist
nations, and to others because they failed to sign the nuclear
nonproliferation treaty. A harmonization of export control
regimes for cryptography would more closely resemble the
former CoCom approach to control dual-use items than the
approach reflected in the unilateral controls on exports
imposed by the USML.

   From the standpoint of U.S. national security and foreign
policy, a serious problem with harmonization is the fact that
the relationship between the United States and almost all
other nations has elements of both competition and cooperation
that may change over time. The widespread use of U.S. products
with strong encryption capabilities under some circumstances
could compromise U.S. positions with respect to these
competitive elements, although many of these nations are
unlikely to use U.S. products with encryption capabilities for
their most sensitive communications.

   Finally, as is true for other proposals to liberalize U.S.
export controls on cryptography, greater liberalization may
well cause some other nations to impose import controls where
they do not otherwise exist. Such an outcome would shift the
onus for impeding vendor interests away from the U.S.
government; however, depending on the nature of the resulting
import controls, U.S. vendors of information technology
products with encryption capabilities might be faced with the
need to conform to a multiplicity of import control regimes
established by different nations.

----------

   (7)  For example, products with encryption capabilities can
be exported freely to Canada without the need for a USML
export license if intended for domestic Canadian use.

____________________________________________________________


                  7.1.6 Liberal Export for
           Strong Cryptography with Weak Defaults

   An export control regime could grant liberal export
consideration to products with encryption capabilities
designed in such a way that the defaults for usage result in
weak or non-existent encryption (Box 7.1), but also so that
users could invoke options for stronger encryption through an
affirmative action.

   For example, such a product might be a telephone designed
for end-to-end security. The default mode of operation could
be set in two different ways. One way would be for the
telephone to establish a secure connection if the called party
has a comparable unit. The second way would be for the
telephone always to establish an insecure connection;
establishing a secure connection would require an explicit
action by the user. All experience suggests that the second
way would result in far fewer secure calls than the first
way.(8)
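The behavioral difference between the two default policies can be sketched in a few lines of code. This is a hypothetical illustration only; the class and its behavior are invented for exposition and do not describe any actual secure telephone product.

```python
class SecurePhone:
    """Hypothetical secure telephone illustrating the two default
    policies described above (a sketch, not a real product)."""

    def __init__(self, secure_by_default: bool):
        self.secure_by_default = secure_by_default

    def call(self, peer_supports_security: bool,
             user_requested_secure: bool = False) -> str:
        if self.secure_by_default:
            # First policy: go secure automatically whenever the
            # far end has a comparable unit.
            return "secure" if peer_supports_security else "insecure"
        # Second policy: insecure unless the user takes an
        # explicit, affirmative action on every call.
        if user_requested_secure and peer_supports_security:
            return "secure"
        return "insecure"
```

Under the second policy, a user who does nothing always gets an insecure call, which is the mechanism by which weak defaults reduce the number of secure calls in practice.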

   An export policy favoring products with weak encryption
defaults benefits the information-gathering needs of law
enforcement and signals intelligence efforts because of
user psychology. Many people, criminals and foreign government
workers included, often make mistakes by using products "out
of the box" without any particular attempt to configure them
properly. Such a policy could also take advantage of the
distribution mechanisms of the U.S. software industry to
spread weaker defaults.

   Experience to date suggests that good implementations of
cryptography for confidentiality are transparent and automatic
and thus do not require positive user action. Such
implementations are likely to be chosen by organizations that
are most concerned about confidentiality and that have a staff
dedicated to ensuring confidentiality (e.g., by resetting weak
vendor-supplied defaults). End users that obtain their
products with encryption capabilities on the retail store
market are the most likely to be affected by this proposal,
but such users constitute a relatively small part of the
overall market.

----------

   (8)  Of course, other techniques can be used to further
discourage the use of secure modes. For example, the telephone
could be designed to force the user to wait several seconds
for establishment of the secure mode.

____________________________________________________________


                  7.1.7 Liberal Export for
      Cryptographic Applications Programming Interfaces

   A cryptographic applications programming interface (CAPI;
see Appendix K) is a well-defined boundary between a baseline
product (such as an operating system, a database management
program, or a word-processing program) and a cryptography
module that provides a secure set of cryptographic services
such as authentication, digital signature generation, random
number generation, and stream or block mode encryption. The
use of a CAPI allows vendors to support cryptographic
functions in their products without actually providing them at
distribution.

   Even though such products have no cryptographic
functionality per se and are therefore not specifically
included in Category XIII of the ITAR (see Appendix L),
license applications for the export of products incorporating
CAPIs have in general been denied. The reason is that strong
cryptographic capabilities could be deployed on a vast scale
if U.S. vendors exported applications supporting a common CAPI
and a foreign vendor then marketed an add-in module with
strong encryption capabilities.(9)

   To meet the goals of less restrictive export controls,
liberal export consideration could be given to products that
incorporate a CAPI designed so that only "certified"
cryptographic modules could be incorporated into and used by
the application. That is, the application with the CAPI would
have to ensure that the CAPI would work only with certified
cryptographic modules. This could be accomplished by
incorporating into the application a check for a digital
signature whose presence would indicate that the add-on
cryptographic module was indeed certified; if and only if such
a signature were detected by the CAPI would the product allow
use of the module.
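   The signature check described above might be sketched as
follows. This is a hypothetical illustration only: the toy RSA
parameters (deliberately far too small for real use) stand in
for whatever signature scheme a certifying authority would
actually mandate, and the function names are invented.

```python
import hashlib

# Toy RSA parameters for illustration only -- much too small for
# real security. p = 61, q = 53, n = 3233, e = 17, d = 2753
# (17 * 2753 = 46801 = 1 mod lcm(60, 52) = 780).
CERTIFIER_N, CERTIFIER_E = 3233, 17
_CERTIFIER_D = 2753   # held only by the certifying authority

def _digest(module_code: bytes) -> int:
    # Reduce a hash of the module image into the toy RSA modulus.
    return int.from_bytes(hashlib.sha256(module_code).digest(),
                          "big") % CERTIFIER_N

def certify_module(module_code: bytes) -> int:
    """Signature the certifying authority attaches to an
    approved cryptographic module."""
    return pow(_digest(module_code), _CERTIFIER_D, CERTIFIER_N)

def capi_load_module(module_code: bytes, signature: int) -> bytes:
    """Application-side gate: accept the crypto module if and
    only if the signature verifies under the certifier's
    public key."""
    if pow(signature, CERTIFIER_E, CERTIFIER_N) != _digest(module_code):
        raise PermissionError("uncertified cryptographic module rejected")
    return module_code  # a real CAPI would now link the module in
```

The essential design point is that the application ships with only the certifier's public key; possession of the private signing key, not possession of the application, is what controls which modules can be used.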

   One instantiation of a CAPI is the CAPI built into
applications that use the Fortezza card (discussed in Chapter
5). CAPI software for Fortezza is available for a variety of
operating systems and PC-card reader types; such software
incorporates a check to ensure that the device being used is
itself a Fortezza card. The Fortezza card contains a private
Digital Signature Standard (DSS) key that can be used to sign
a challenge from the workstation. The corresponding DSS public
key is made available in the CAPI, and thus the CAPI is able
to verify the authenticity of the Fortezza card.
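   The challenge-response exchange might look like the
following sketch, with a toy RSA key pair standing in for the
card's DSS key (the Python standard library provides neither
DSS nor card hardware). The names and parameters are
illustrative assumptions, not Fortezza's actual protocol
details.

```python
import hashlib
import secrets

# Toy RSA key pair standing in for the card's DSS key
# (illustration only; far too small for real use).
CARD_N, CARD_E = 3233, 17   # public key, available to the CAPI
_CARD_D = 2753              # private key, never leaves the card

def card_sign(challenge: bytes) -> int:
    """What the (hypothetical) card does: sign the
    workstation's challenge with its private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(),
                       "big") % CARD_N
    return pow(h, _CARD_D, CARD_N)

def capi_authenticate_card(sign_fn) -> bool:
    """CAPI side: issue a fresh random challenge and verify the
    response against the card's public key."""
    challenge = secrets.token_bytes(16)
    response = sign_fn(challenge)
    h = int.from_bytes(hashlib.sha256(challenge).digest(),
                       "big") % CARD_N
    return pow(response, CARD_E, CARD_N) == h
```

Because the challenge is freshly generated for each attempt, a recorded response cannot be replayed, and only a device holding the private key can answer correctly.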

   A second approach to the use of a CAPI has been proposed by
Microsoft and is now eligible for liberal export consideration
by the State Department (Box 7.2). The Microsoft approach
involves three components: an operating system with a CAPI
embedded within it, modules providing cryptographic services
through the CAPI, and applications that can call on the
modules through the CAPI provided by the operating system. In
principle, each of these components is the responsibility of
different parties: Microsoft is responsible for the operating
system, cryptography vendors are responsible for the modules,
and independent applications vendors are responsible for the
applications that run on the operating system.

   From the standpoint of national security authorities, the
effectiveness of an approach based on the use of a certified
CAPI/module combination depends on a number of factors. For
example, the product incorporating the CAPI should be known to
be implemented in a manner that enforces the appropriate
constraints on crypto-modules that it calls; furthermore, the
code that provides such enforcement should not be trivially
bypassed. The party certifying the crypto-module should
protect the private signature key used to sign it. Vendors
would still be required to support domestic and exportable
versions of an application if the domestic version was allowed
to use any module while the export version was restricted in
the set of modules that would be accepted, although the amount
of effort required to develop these two different versions
would be quite small.

   The use of CAPIs that check for appropriate digital
signatures would shift the burden for export control from the
applications or systems vendors to the vendors of the
cryptographic modules. This shift could benefit both the
government and vendors, because of the potential to reduce the
number of players engaged in the process. For example, all of
the hundreds of e-mail applications on the market could
quickly support encrypted e-mail by supporting a CAPI
developed by a handful of software and/or hardware
cryptography vendors. The cryptography vendors would be
responsible for dealing with the export and import controls of
various countries, leaving e-mail application vendors to
export freely anywhere in the world. Capabilities such as
escrowed encryption could be supported within the cryptography
module itself, freeing the applications or system vendor from
most technical, operational, and political issues related to
export control.

   A trustworthy CAPI would also help to support cryptography
policies that might differ among nations. In particular, a
given nation might specify certain performance requirements
for all cryptography modules used or purchased within its
borders.(10) International interoperability problems resulting
from conflicting national cryptography policies would still
remain.

----------

   (9)  This discussion refers only to "documented" or "open"
CAPIs, i.e., CAPIs that are accessible to the end user.
Another kind of CAPI is "undocumented" or "closed"; that is,
it is inaccessible to the end user, though it is used by
system developers for their own convenience. While a history
of export licensing decisions and practices supports the
conclusion that most products implementing "open" CAPIs will
not receive export licenses, history provides no consistent
guidance with respect to products implementing CAPIs that are
inaccessible to the end user.

   (10) An approach to this effect is the thrust of a proposal
from Hewlett-Packard. The Hewlett-Packard International
Cryptography Framework (ICF) proposal includes a stamp-size
"policy card" (smart card) that would be inserted into a
cryptographic unit that is a part of a host system.
Cryptographic functions provided within the cryptographic unit
could be executed only in the presence of a valid policy
card. The policy card could be configured to enable only those
cryptographic functions that are consistent with government
export and local policies. The "policy card" allows for
managing the use of the integrated cryptography down to the
application-specific level. By obtaining a new policy card,
customers could be upgraded to take advantage of varying
cryptographic capabilities as government policies or
organizational needs change. As part of an ICF solution, a
network security server could be implemented to provide a
range of different security services including verification of
the other three service elements (the card, the host system,
the cryptographic unit). Sources: Carl Snyder,
Hewlett-Packard, testimony to the NRC committee in February
1995; Hewlett-Packard, *International Cryptography Framework
White Paper*, February 1994.

____________________________________________________________


                  7.1.8 Liberal Export for
      Escrowable Products with Encryption Capabilities

   As discussed in Chapter 5, the Administration's proposal of
August 17, 1995, would allow liberal export consideration for
software products with encryption capabilities whose keys are
"properly escrowed." In other words, strong cryptography would
be enabled for these products only when the keys were escrowed
with appropriate escrow agents.

   An escrowed encryption product differs from what might be
called an "escrowable" product. Specifically, an escrowed
encryption product is one whose key must be escrowed with a
registered, approved agent before the use of (strong)
cryptography can be enabled, whereas an escrowable product is
one that provides full cryptographic functionality that
includes optional escrow features for the user. The user of an
escrowable product can choose whether or not to escrow the
relevant keys, but regardless of the choice, the product still
provides its full suite of encryption capabilities.(11)
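   The distinction can be made concrete with a minimal sketch.
Both classes here are hypothetical, intended only to show that
an escrowed encryption product gates strong cryptography on
escrow with an approved agent, while an escrowable product
offers escrow as an option without restricting functionality.

```python
class EscrowedProduct:
    """Strong cryptography is enabled only after keys are
    deposited with a registered, approved escrow agent."""

    def __init__(self):
        self.escrowed = False

    def escrow_keys(self, agent_approved: bool) -> None:
        if agent_approved:
            self.escrowed = True

    def strong_encryption_available(self) -> bool:
        return self.escrowed  # gated on prior escrow


class EscrowableProduct:
    """Escrow features exist but are optional; the full suite
    of encryption capabilities is available either way."""

    def __init__(self):
        self.escrowed = False

    def escrow_keys(self, agent_approved: bool) -> None:
        if agent_approved:
            self.escrowed = True

    def strong_encryption_available(self) -> bool:
        return True  # unconditional; escrow is the user's choice
```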

   Liberal export consideration for escrowable products could
be granted and incentives promulgated to encourage the use of
escrow features. While the short-term disadvantage of this
approach from the standpoint of U.S. national security is that
it allows encryption stronger than the current 40-bit RC2/RC4
encryption allowed under present regulations to diffuse into
foreign hands, it has the long-term advantage of providing
foreign governments with a tool for influencing or regulating
the use of cryptography as they see fit. Currently, most
products with encryption capabilities do not have built-in
features to support escrow. However, if
products were designed and exported with such features,
governments would have a hook for exercising some influence.
Some governments might choose to require the escrowing of
keys, while others might simply provide incentives to
encourage escrowing. In any event, the diffusion of escrowable
products abroad would raise the awareness of foreign
governments, businesses, and individuals about encryption and
thus lay a foundation for international cooperation on the
formulation of national cryptography policies.

----------

   (11) For example, an escrowable product would not enable
the user to encrypt files with passwords. Rather, the
installation of the product would require the user to create
a key or set of named keys, and these keys would be used when
encrypting files. The installation would also generate a
protected "safe copy" of the keys with instructions to the
user that they should register the key "somewhere." It would
be up to the user to decide where or whether to register the
key.

____________________________________________________________


       7.1.9 Alternatives to Government Certification
                   of Escrow Agents Abroad

   As discussed in Chapter 5, the Administration's August 1995
proposal focuses on an implementation of escrowed encryption
that involves the use of "escrow agents certified by the U.S.
government or by foreign governments with which the U.S.
government has formal agreements consistent with U.S. law
enforcement and national security requirements."(12) This
approach requires foreign customers of U.S. escrowed
encryption products to use U.S. escrow agents until formal
agreements can be negotiated that specify the responsibilities
of foreign escrow agents to the United States for law
enforcement and national security purposes.

   Skeptics ask what incentives the U.S. government would have
to conclude the formal agreements described in the August 1995
proposal if U.S. escrow agents would, by default, be the
escrow agents for foreign consumers. They believe that the
most likely result of adopting the Administration's proposal
would be U.S. foot-dragging and inordinate delays in the
consummation of formal agreements for certifying foreign
escrow agents. Appendix G describes some of the U.S.
government efforts to date to promote a dialogue on such
agreements.

   The approaches described below address problems raised by
certifying foreign escrow agents:

   +    *Informal arrangements for cooperation*. One
alternative is based on the fact that the United States enjoys
strong cooperative law enforcement relationships with many
nations with which it does not have formal agreements
regarding cooperation. Negotiation of a formal agreement
between the United States and another nation could be replaced
by presidential certification that strong cooperative law
enforcement relationships exist between the United States and
that nation. Subsequent cooperation would be undertaken on the
same basis that cooperation is offered today.

   +    *Contractual key escrow*. A second alternative is
based on the idea that formal agreements between nations
governing exchange of escrowed key information might be
replaced by private contractual arrangements.(13) A user that
escrows key information with an escrow agent, wherever that
agent is located, would agree contractually that the U.S.
government would have access to that information under a
certain set of carefully specified circumstances. A suitably
designed exportable product would provide strong encryption
only upon receipt of affirmative confirmation that the
relevant key information had been deposited with escrow agents
requiring such contracts with users. Alternatively, as a
condition of sale, end users could be required to deposit keys
with escrow agents subject to such a contractual requirement.

----------

   (12)  See Box 5.3, Chapter 5.

   (13)  Henry Perritt, "Transnational Key Escrow," paper
presented at the International Cryptography Institute,
Washington, D.C., September 22, 1995.

____________________________________________________________


           7.1.10 Use of Differential Work Factors
                       in Cryptography

   Differential work factor cryptography is an approach to
cryptography that presents different work factors to different
parties attempting to cryptanalyze a given piece of encrypted
information.(14) Iris Associates, the creator of Notes,
proposed such an approach for Lotus Notes Version 4 to
facilitate its export, and the U.S. government has accepted
it. Specifically, the international edition of Lotus Notes
Version 4 is designed to present a 40-bit work factor to the
U.S. government and a 64-bit work factor to all other parties.
It implements this differential work factor by encrypting 24
bits of the 64-bit key with the public-key portion of an RSA
key pair held by the U.S. government. Because the U.S.
government can easily decrypt these 24 bits, it faces only a
40-bit work factor when it needs access to an overseas
communications stream encrypted by the international edition. All
other parties attempting to cryptanalyze a message face a
64-bit work factor.
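   The key-splitting arithmetic described above can be sketched
as follows. This is a minimal illustration, not the Lotus Notes
implementation: the RSA parameters are toy primes, and a real
product would use large keys and randomized padding.

```python
import secrets

# Toy textbook-RSA parameters standing in for the "government" key pair.
# Illustrative only: real RSA uses large primes and proper padding.
P, Q = 999983, 1000003              # two known primes near 10**6
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))   # modular inverse (Python 3.8+)

def make_session_key() -> int:
    """Generate a 64-bit bulk-encryption key."""
    return secrets.randbits(64)

def escrow_field(key64: int) -> int:
    """Encrypt the top 24 bits of the session key under the government
    public key; this field would travel with the message."""
    return pow(key64 >> 40, E, N)

def residual_work_bits(key64: int, field: int) -> int:
    """Holding the private key, the government recovers 24 bits of the
    64-bit key, leaving only a 40-bit brute-force search."""
    assert pow(field, D, N) == key64 >> 40
    return 64 - 24

key = make_session_key()
print(residual_work_bits(key, escrow_field(key)))  # -> 40
```

   Any party without the private exponent D faces the full 64-bit
search, which is the asymmetry the scheme is designed to create.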

   Differential work factor cryptography is similar to partial
key escrow (described in Chapter 5) in that both provide very
strong protection against most attackers but are vulnerable to
attack by some specifically chosen authority. However, they
are different in that differential work factor cryptography
does not require user interaction with an escrow agent, and so
it can offer strong cryptography "out of the box." Partial key
escrow offers all of the strengths and weaknesses of escrowed
encryption, including the requirement that enabling strong
cryptography involves interaction with an escrow agent.

----------

   (14) Recall from Chapter 2 that a work factor is a measure
of the amount of work that it takes to undertake a brute-force
exhaustive cryptanalytic search.
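   A minimal sketch of such a search, using a tiny keyspace and an
illustrative XOR "cipher" (not any real product's algorithm), shows
why the work factor grows as 2 raised to the number of key bits:

```python
import hashlib

def toy_encrypt(key: int, plaintext: bytes) -> bytes:
    # Illustrative "cipher": XOR with a keystream derived from the key.
    # (Single digest, so plaintext here is limited to 32 bytes.)
    stream = hashlib.sha256(key.to_bytes(4, "big")).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def brute_force(ciphertext: bytes, known_plaintext: bytes,
                key_bits: int) -> int:
    """Exhaustive search: in the worst case this performs 2**key_bits
    trial encryptions -- the work factor."""
    for key in range(2 ** key_bits):
        if toy_encrypt(key, known_plaintext) == ciphertext:
            return key
    raise ValueError("key not found")

secret_key = 0x2A5                  # fits in 12 bits
ct = toy_encrypt(secret_key, b"attack at dawn")
recovered = brute_force(ct, b"attack at dawn", 12)
assert toy_encrypt(recovered, b"attack at dawn") == ct
```

   A 12-bit search finishes instantly; each additional key bit
doubles the worst-case number of trials.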

____________________________________________________________


           7.1.11 Separation of Cryptography from
           other Items on the U.S. Munitions List

   As noted in Chapter 4, the inclusion of products with
encryption capabilities on the USML puts them on a par with
products intended for strictly military purposes (e.g., tanks,
missiles). An export control regime that authorized the U.S.
government to separate cryptography -- a true dual-use
technology -- from strictly military items would provide much
needed flexibility in dealing with nations on which the United
States wishes to place sanctions.


          7.2 ALTERNATIVES FOR PROVIDING GOVERNMENT
            EXCEPTIONAL ACCESS TO ENCRYPTED DATA


   Providing government exceptional access to encrypted data
is an issue with a number of dimensions, only some of which
relate directly to encryption.


           7.2.1 A Prohibition of the Use and Sale
   of Cryptography Lacking Features for Exceptional Access

   One obvious approach to ensuring government exceptional
access to encrypted information is to pass legislation that
forbids the use of cryptography lacking features for such
access, presumably with criminal penalties attached for
violation. (Given that escrowed cryptography appears to be the
most plausible approach to providing government exceptional
access, the term "unescrowed cryptography" is used here as a
synonym for cryptography without features for exceptional
access.) Indeed, opponents of the Escrowed Encryption Standard
(EES) and the Clipper chip have argued repeatedly that the EES
approach would succeed only if alternatives were banned.(15)
Many concerns have been raised about the prospect of a
mandatory prohibition on the use of unescrowed cryptography.

   From a law enforcement standpoint, a legislative
prohibition on the use of unescrowed encryption would have
clear advantages. Its primary impact would be to eliminate the
commercial supply of unescrowed products with encryption
capabilities -- vendors without a market would most likely not
produce or distribute such products, thus limiting access of
criminals to unescrowed encryption and increasing the
inconvenience of evading a prohibition on use of unescrowed
encryption. At the same time, such a prohibition would leave
law-abiding users with strong concerns about the
confidentiality of their information being subject to
procedures beyond their control.

   A legislative prohibition of the use of unescrowed
encryption also raises specific technical, economic, and legal
issues.


Concerns About Personal Freedom

   The Clinton Administration has stated that it has no
intention of outlawing unescrowed cryptography, and it has
repeatedly and explicitly disavowed any intent to regulate the
domestic use of cryptography. However, no administration can
bind future administrations (a fact freely acknowledged by
administration officials). Thus, some critics of the
Administration position believe that the dynamics of the
encryption problem may well drive the government -- sooner or
later -- to prohibit the use of encryption without government
access.(16) The result is that the Administration is simply
not believed when it forswears any intent to regulate
cryptography used in the United States. Two related concerns
are raised:

   +    *The "slippery slope"*. Many skeptics fear that
current cryptography policy is the first step down a slippery
slope toward a more restrictive policy regime under which
government may not continue to respect limits in place at the
outset. An oft-cited example is current use of the Social
Security Number, which was not originally intended to serve as
a universal identifier when the Social Security Act was passed
in 1935 but has, over the intervening decades, come to serve exactly
that role by default, simply because it was there to be
exploited for purposes not originally intended by the enabling
legislation.

   +    *Misuse of deployed infrastructure for cryptography*.
Many skeptics are concerned that a widely deployed
infrastructure for cryptography could be used by a future
administration or Congress to promulgate and/or enforce
restrictive policies regarding the use of cryptography. With
such an infrastructure in place, critics argue that a simple
policy change might be able to transform a comparatively
benign deployment of technology into an oppressive one. For
example, critics of the Clipper proposal were concerned about
the possibility that a secure telephone system with government
exceptional access capabilities could, under a strictly
voluntary program to encourage its purchase and use, achieve
moderate market penetration. Such market penetration could
then facilitate legislation outlawing all other
cryptographically secure telephones.(17)

   Adding to these concerns are suggestions such as those made
by a responsible and senior government official that even
research in cryptography conducted in the civilian sector
should be controlled in a legal regime similar to that which
governs research with relevance to nuclear weapons design (Box
7.3). Ironically, former NSA Director Bobby Inman's comments
on scientific research appeared in an article that called for
greater cooperation between academic scientists and national
security authorities and used as a model of cooperation an
arrangement, recommended by the Public Cryptography Study
Group, that has worked generally well in balancing the needs
of academic science and those of national security.(18)
Nevertheless, Inman's words are often cited as reflecting a
national security mind-set that could lead to a serious loss
of intellectual freedom and discourse. More recently, FBI
Director Louis Freeh stated to the committee that "other
approaches may be necessary" if technology vendors do not
adopt escrowed encryption on their own. Moreover, the current
Administration has explicitly rejected the premise that "every
American, as a matter of right, is entitled to an unbreakable
encryption product."(19)

   Given concerns about possible compromises of personal and
civil liberties, many skeptics of government in this area
believe that the safest approach is for government to stay out
of cryptography policy entirely. They argue that any steps in
this area, no matter how well intentioned or plausible or
reasonable, must be resisted strongly, because such steps will
inevitably be the first poking of the camel's nose under the
tent.


Technical Issues

   Even if a legislative prohibition on the use of unescrowed
encryption were enacted, it would be technically easy for
parties with special needs for security to circumvent such a
ban. In some cases, circumvention would be explicitly illegal,
while in others it might well be entirely legal. For example:

   +    Software for unescrowed encryption can be downloaded
from the Internet; such software is available even today. Even
if posting such software in the United States were to be
illegal under a prohibition, it would nonetheless be
impossible to prevent U.S. Internet users from downloading
software that had been posted on sites abroad.

   +    Superencryption can be used. Superencryption
(sometimes also known as double encryption) is encryption of
traffic before it is given to an escrowed encryption device or
system. For technical reasons, superencryption is impossible
to detect without monitoring and attempting to decrypt all
escrow-encrypted traffic, and such large-scale monitoring
would be seriously at odds with the selected and limited
nature of wiretaps today.

   An additional difficulty with superencryption is that it is
not technically possible to obtain escrow information for all
layers simultaneously, because the fact of double and triple
encryption cannot be known in advance. Even if the second (or
third or fourth) layers of encryption were escrowed, law
enforcement authorities would have to approach separately and
sequentially the escrow agents holding key information for
those layers.
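   The layering described above can be sketched as follows; both
"ciphers" are stand-in XOR keystreams rather than real escrowed or
unescrowed products:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in stream cipher: XOR with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

plaintext = b"meet at the drop point"

# Layer 1: the user's private, unescrowed key.
inner = keystream_xor(b"private-key", plaintext)
# Layer 2: the escrowed product encrypts the already-encrypted traffic.
outer = keystream_xor(b"escrowed-key", inner)

# An authority holding only the escrowed key can strip the outer
# layer, but recovers ciphertext, not plaintext.
recovered = keystream_xor(b"escrowed-key", outer)
assert recovered == inner and recovered != plaintext
```

   Nothing about the outer, escrow-encrypted traffic reveals that an
inner layer exists, which is why detection requires decrypting and
inspecting all escrowed traffic.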

   +    Talent for hire is easy to obtain. A criminal party
could easily hire a knowledgeable person to develop needed
software. For example, an out-of-work or underemployed
scientist or mathematician from the former Soviet Union would
find a retainer fee of $500 per month to be a king's
ransom.(20)

   +    Information can be stored remotely. An obvious
noncryptographic circumvention is to store data on a remote
computer whose Internet address is known only to the user.
Such a computer could be physically located anywhere in the
world (and might even automatically encrypt files that were
stored there). But even if it were not encrypted, data stored
on a remote computer would be impossible for law enforcement
officials to access without the cooperation of the data's
owner. Such remote storage could occur quite legally even with
a ban on the use of unescrowed encryption.

   +    Demonstrating that a given communication or data file
is "encrypted" is fraught with ambiguities arising from the
many different possibilities for sending information:

        -- An individual might use an obscure data format. For
        example, while ASCII is the most common representation
        of alphanumeric characters today, Unicode (a proposed
        16-bit representation) and EBCDIC (a more-or-less
        obsolete 8-bit representation) are equally good for
        sending plain English text.

        -- An individual talking to another individual might
        speak in a language such as Navajo.

        -- An individual talking to another individual might
        speak in code phrases.

        -- An individual might send compressed digital data
        that could easily be confused with encrypted data
        despite having no purpose related to encryption. If,
        for example, an individual develops his own good
        compression algorithm and does not share it with
        anyone, that compressed bit stream may prove as
        difficult to decipher as an encrypted bit stream.(21)

        -- An individual might deposit fragments of a text or
        image that he wished to conceal or protect in a number
        of different Internet-accessible computers. The
        fragments would be reassembled into coherent plaintext
        only when downloaded into the computer of the user.(22)

        -- An individual might use steganography.(23)

   None of these alternative coding schemes provides
confidentiality as strong as would be provided by good
cryptography, but their extensive use could well complicate
attempts by government to obtain plaintext information.

   Given so many different ways to subvert a ban on the use of
unescrowed cryptography, a dedicated subculture would likely
emerge in which nonconformists use coding schemes or
unescrowed cryptography impenetrable to all outsiders.


Economic Concerns

   An important economic issue that would arise with a
legislative prohibition on the use of unescrowed cryptography
would involve the political difficulty of mandating
abandonment of existing user investments in products with
encryption capabilities. These investments, considerable even
today, are growing rapidly, and the expense to users of
immediately having to replace unescrowed encryption products
with escrowed ones could be enormous;(24) a further expense
would be the labor cost involved in decrypting existing
encrypted archives and reencrypting them using escrowed
encryption products. One potential mitigating factor for cost
is the short product cycle of information technology products.
Whether users would abandon nonconforming products in favor of
new products with escrowing features -- knowing that they were
specifically designed to facilitate exceptional access -- is
open to question.


Legal and Constitutional Issues

   Even apart from the issues described above, which in the
committee's view are quite significant, a legislative ban on
the domestic use of unescrowed encryption would raise
constitutional issues. Insofar as a prohibition on unescrowed
encryption were treated for constitutional purposes as a
limitation on the content of communications, the government
would have to come forward with a compelling state interest to
justify the ban. To some, a prohibition on the use of
unescrowed encryption would be the equivalent of a law
proscribing use of a language (e.g., Spanish), which would
almost certainly be unconstitutional. On the other hand, if
such a ban were regarded as tantamount to eliminating a method
of communication (i.e., were regarded as content-neutral),
then the courts would employ a simple balancing test to
determine its constitutionality. The government would have to
show that the public interests were jeopardized by a world of
unrestrained availability of encryption, and these interests
would have to be weighed against the free speech interests
sacrificed by the ban. It would also be significant to know
what alternative methods of anonymous communication
would remain available with a ban and how freedom of speech
would be affected by the specific system of escrow chosen by
the government. These various considerations are difficult,
and in some cases impossible, to estimate in advance of
particular legislation and a particular case, but the First
Amendment issues likely to arise with a total prohibition on
the use of unescrowed encryption are not trivial.(25)

   A step likely to raise fewer constitutional problems, but
not eliminate them, is one that would impose restrictions on
the commercial sale of unescrowed products with encryption
capabilities.(26) Under such a regime, products with
encryption capabilities eligible for sale would have to
conform to certain restrictions intended to ensure public
safety, in much the same way that other products such as
drugs, automobiles, and meat must satisfy particular
government regulations. "Freeware" or home-grown products with
encryption capabilities would be exempt from such regulations
as long as they were used privately. The problem of
already-deployed products would remain, but in a different
form: new products would either interoperate with products
already in the field or they would not. If
noninteroperability were required, users attempting to
maintain and use two noninteroperating systems would be faced
with enormous expenses. If interoperability were allowed, the
intent of the ban would be thwarted.

   Finally, any national policy whose stated purpose is to
prevent the use of unescrowed encryption preempts decision
making that the committee believes properly belongs to users.
As noted in Chapter 5, escrowed encryption reduces the level
of assured confidentiality in exchange for allowing controlled
exceptional access to parties that may need to retrieve
encrypted data. Only in a policy regime of voluntary
compliance can users decide how to make that trade-off. A
legislative prohibition of the use or sale of unescrowed
encryption would be a clear statement that law enforcement
needs for exceptional access to information outweigh
user interests in having maximum possible protection for their
information, a position that has yet to be defended or even
publicly argued by any player in the debate.

----------

   (15) For example, see Electronic Privacy Information
Center, press release, August 16, 1995, available at
http://www.epic.org.

   (16) For example, Senator Charles Grassley (R-IA)
introduced legislation (The Anti-Electronic Racketeering Act
of 1995) on June 27, 1995, to "prohibit certain acts involving
the use of computers in the furtherance of crimes." The
proposed legislation makes it unlawful "to distribute computer
software that encodes or encrypts electronic or digital
communications to computer networks that the person
distributing the software knows or reasonably should know, is
accessible to foreign nationals and foreign governments,
regardless of whether such software has been designated as
nonexportable," except for software that uses "a universal
decoding device or program that was provided to the Department
of Justice prior to the distribution."

   (17) By contrast, a deployed infrastructure could have
characteristics that would make it quite difficult to
implement policy changes on a short time scale. For example,
it would be very difficult to implement a policy change that
would change the nature of the way in which people use today's
telephone system. Not surprisingly, policy makers would prefer
to work with infrastructures that are quickly responsive to
their policy preferences.

   (18) The arrangement recommended by the Public Cryptography
Study Group called for voluntary prepublication review of all
cryptography research undertaken in the private sector. For
more discussion of this arrangement, see Public Cryptography
Study Group, *Report of the Public Cryptography Study Group*,
American Council on Education, Washington, D.C., February,
1981. A history leading to the formation of the Public
Cryptography Study Group can be found in National Research
Council, "Voluntary Restraints on Research With National
Security Implications: The Case of Cryptography, 1972-1982,"
in *Scientific Communication and National Security*, National
Academy Press, Washington, D.C., 1982, Appendix E, pp.
120-125. The ACM study on cryptography policy concluded that
this prepublication arrangement has not resulted in any
chilling effects in the long term (see Susan Landau et al.,
*Codes, Keys and Conflicts: Issues in U.S. Crypto Policy*,
ACM, New York, 1994, p. 39.)

   (19) "Questions and Answers About the Clinton
Administration's Telecommunications Initiative," undated
document. Released on April 16, 1993, with the "Statement by
the Press Secretary on the Clipper Chip." See *The Third CPSR
Cryptography and Privacy Conference Source Book*, June 7,
1993, Part III.

   (20) Alan Cooperman and Kyrill Belianinov, "Moonlighting by
Modem in Russia," *U.S. News & World Report*, April 17, 1995,
pp. 45-48. In addition, many high-technology jobs are moving
overseas in general, not just to the former Soviet Union. See
for example, Keith Bradsher, "Skilled Workers Watch Their Jobs
Migrate Overseas," *New York Times*, August 28, 1995, p. 1.

   (21) A discussion of using text compression for
confidentiality purposes can be found in Ian Witten and John
Cleary, "On the Privacy Afforded by Adaptive Text
Compression," *Computers and Security*, July 1988, Volume
7(4), pp. 397-408. One problem in using compression schemes as
a technique for ensuring confidentiality is that almost any
practical compression scheme has the characteristic that
closely similar plaintexts would generate similar ciphertexts,
thereby providing a cryptanalyst with a valuable advantage not
available if a strong encryption algorithm is used.

   (22) Jaron Lanier, "Unmuzzling the Internet: How to Evade
the Censors and Make a Statement, Too," OpEd, *New York
Times*, January 2, 1996, p. A-15.

   (23) Steganography is the name given to techniques for
hiding a message within another message. For example, the
first letter of each word in a sentence or a paragraph can be
used to spell out a message, or a photograph can be
constructed so as to conceal information. Specifically, most
black-and-white pictures rendered in digital form use at most
2^16 (65,536) shades of gray, because the human eye is
incapable of distinguishing any more shades. Each element of
a digitized black-and-white photo would then be associated
with 16 bits of information about what shade of gray should be
used. If a picture were digitized with 24 bits of gray scale,
the last 8 bits could be used to convey a concealed message
that would be invisible to anyone who did not know to look
for it. The digital size of the picture would be 50% larger
than it would ordinarily be, but no one but the creator of the
image would know.
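   The low-order-bit technique described in this note can be
sketched as follows, here using 1 bit per 8-bit pixel value rather
than the 8-of-24 split above; the pixel values are invented for
illustration:

```python
def hide(pixels, message: bytes):
    """Embed message bits in the least significant bit of each 8-bit
    pixel value (the 8-of-24-bit split is the same idea)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for message"
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def reveal(pixels, length: int) -> bytes:
    """Read the message back from the low-order bits."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(length))

gray = [200, 13, 77, 154] * 20      # invented 8-bit gray-scale values
stego = hide(gray, b"hi")
assert reveal(stego, 2) == b"hi"
# Each pixel changes by at most 1 gray level -- visually imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(gray, stego))
```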

   (24) Existing unescrowed encryption products could be kept
in place if end users could be made to comply with a
prohibition of the use of such products. In some cases, a
small technical fix might suffice to disable the cryptography
features of a system; such fixes would be most relevant in a
computing environment in which the software used by end users
is centrally administered (as in the case of many
corporations) and provides system administrators with the
capability for turning off encryption. In other cases, users
-- typically individual users who had purchased their products
from retail store outlets -- would have to be trusted to
refrain from using encryption.

   (25) For a view arguing that relevant Fourth and Fifth
Amendment issues would be resolved against the constitutionality
of such a prohibition, see Michael Froomkin, "The Metaphor Is
the Key: Cryptography, The Clipper Chip and the Constitution,"
*University of Pennsylvania Law Review*, Volume 143(3),
January 1995, pp. 709-897. The committee takes no position on
these Fourth and Fifth Amendment issues.

   (26) Such a scheme has been suggested by Dorothy Denning in
"The Future of Cryptography," *Internet Security Monthly*,
October 1995, p. 10. (Also available from
http://www.cosc.georgetown.edu/~denning/crypto.) Denning's
paper does not suggest that "freeware" be exempt, although her
proposal would provide an exemption for personally developed
software used to encrypt personal files.

____________________________________________________________


      7.2.2 Criminalization of the Use of Cryptography
                in the Commission of a Crime

   Proposals to criminalize the use of cryptography in the
commission of a crime have the advantage that they focus the
weight of the criminal justice system on the "bad guy" without
placing restrictions on the use of cryptography by "good
guys." Further, deliberate use of cryptography in the
commission of a crime -- an act that could cause considerable
damage to society as a whole or to particular individuals --
suggests premeditated wrongdoing, which society tends to view
as worthy of greater punishment than a crime committed in the
heat of the moment.

   Two approaches could be taken to criminalize the use of
cryptography in the commission of a crime:

   +    Construct a specific list of crimes in which the use
of cryptography would subject the criminal to additional
penalties. For example, using a deadly weapon in committing a
robbery or causing the death of someone during the commission
of a crime are themselves crimes that lead to additional
penalties.

   +    Develop a blanket provision stating that the use of
cryptography for illegal purposes (or for purposes contrary to
law) is itself a felony.

   In either event, additional penalties for the use of
cryptography could be triggered by a conviction for a primary
crime, or they could be imposed independently of such a
conviction. Precedents include the laws criminalizing mail
fraud (fraud is a crime, generally a state crime, but mail
fraud -- use of the mails to commit fraud -- is an additional
federal crime) and the use of a gun during the commission of
a felony.

   Intentional use of cryptography in the concealment of a
crime could also be criminalized. Since the use of
cryptography is a prima facie act of concealment, such an
expansion would reduce the burden of proof on law enforcement
officials, who would have to prove only that cryptography was
used intentionally to conceal a crime. Providers of
cryptography would be criminally liable only if they had
knowingly provided cryptography for use in criminal activity.
On the other hand, a law of more expansive scope might well
impose additional burdens on businesses and raise civil
liberties concerns.

   In considering legal penalties for misuse of cryptography,
the question of what it means to "use" cryptography must be
addressed. For example, if and when encryption capabilities
are integrated seamlessly into applications and are invoked
automatically without effort on the part of a user, should the
use of these applications for criminal purposes lead to
additional penalties or to a charge for an additional offense?
Answering yes to this question provides another avenue for
prosecuting a criminal (recall that Al Capone was convicted
of income tax evasion rather than racketeering). Answering no
leaves open the possibility of prosecutorial abuse. A second
question is what counts as "cryptography." As noted above in
the discussion of prohibiting unescrowed encryption, a number
of mathematical coding schemes can serve to obscure the
meaning of plaintext even if they are not encryption schemes
in the technical sense of the word. These and related
questions must be addressed in any serious consideration of
the option for criminalizing the use of cryptography in the
commission of a crime.


            7.2.3 Technical Non-Escrow Approaches
             for Obtaining Access to Information

   Escrowed encryption is not the only means by which law
enforcement can gain access to encrypted data. For example, as
advised by Department of Justice guidelines for searching and
seizing computers, law enforcement officials can approach the
software vendor or the Justice Department computer crime
laboratory for assistance in cryptanalyzing encrypted files.
These guidelines also advise that "clues to the password [may
be found] in the other evidence seized -- stray notes on
hardware or desks; scribble in the margins of manuals or on
the jackets of disks. Agents should consider whether the
suspect or someone else will provide the password if
requested."(27) Moreover, product designs intended to
facilitate exceptional access can include alternatives with
different strengths and weaknesses such as link encryption,
weak encryption, hidden back doors, and translucent
cryptography.


Link Encryption

   With link encryption, which applies only to communications
and stands in contrast to end-to-end encryption (Box 7.4), a
plaintext message enters a communications link, is encrypted
for transmission through the link, and is decrypted upon
exiting the link. In a communication that may involve many
links, sensitive information can be found in plaintext form at
the ends of each link (but not during transit). Thus, for
purposes of protecting sensitive information on an open
network accessible to anyone (the Internet is a good example),
link encryption is more vulnerable than end-to-end encryption,
which protects sensitive information from the moment it leaves
party A to the moment it arrives at party B. However, from the
standpoint of law enforcement, link encryption facilitates
legally authorized intercepts, because the traffic of interest
can always be obtained from one of the nodes in which the
traffic is unencrypted.

   On a relatively closed network or one that is used to
transmit data securely and without direct user action, link
encryption may be cost-effective and desirable. A good example
is encryption of the wireless radio link between a GSM
cellular telephone and its ground station; the cellular
handset encrypts the voice signal and transmits it to the
ground station, at which point it is decrypted and fed into
the land-based network. Thus, the land-based network carries
only unencrypted voice traffic, even though it was transmitted
by an encrypted cellular telephone. A second example is the
"bulk" encryption of multiple channels -- each individually
unencrypted -- over a multiplexed fiber-optic link. In both of
these instances of link encryption, only those with access to
carrier facilities -- presumably law enforcement officials
acting under proper legal authorization -- would have the
opportunity to tap such traffic.
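   The hop-by-hop behavior described above can be sketched as
follows; the hop keys and the XOR keystream stand in for whatever
cipher a real carrier link would use:

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Stand-in link cipher: XOR with a repeated SHA-256 digest."""
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(d ^ s for d, s in zip(data, stream))

def relay_link_encrypted(message: bytes, hop_keys):
    """Link encryption: each node decrypts incoming traffic and
    re-encrypts it for the next link, so plaintext exists at every
    intermediate node (though never on the wire itself)."""
    sightings = []
    data = message
    for key in hop_keys:
        on_the_wire = xor_stream(key, data)    # protected on the link
        data = xor_stream(key, on_the_wire)    # far end of link decrypts
        sightings.append(data)
    return sightings

# Every node -- and thus a lawfully authorized tap placed there --
# sees the message in the clear:
msg = b"wire the funds"
assert all(s == msg for s in relay_link_encrypted(msg, [b"h1", b"h2", b"h3"]))
```

   End-to-end encryption would instead apply a single cipher between
the two communicating parties, leaving the intermediate nodes with
only ciphertext.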


Weak Encryption

   Weak encryption allowing exceptional access would have to
be strong enough to resist brute-force attack by unauthorized
parties (e.g., business competitors) but weak enough to be
cracked by authorized parties (e.g., law enforcement
agencies). However, "weak" encryption is a moving target. The
difference between cracking strong and weak encryption by
brute-force attack is the level of computational resources
that can be brought to such an attack, and those resources are
ever increasing. In fact, the cost of brute-force attacks on
cryptography drops exponentially over time, in accordance with
Moore's law.(28)
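
   To illustrate how quickly such a dividing line moves, the
following sketch (assuming only the 18-month cost-halving rate
stated in note 28; the figures are illustrative, not from the
committee's analysis) computes how long it takes for an attack
on a longer key to become as cheap as an attack on a shorter
key is today:

```python
# Sketch: if the cost of computation halves every 18 months (note 28),
# each extra key bit buys 18 months before a brute-force attack on the
# longer key costs what an attack on the shorter key costs today.

def years_until_cost_parity(extra_key_bits: int,
                            halving_months: float = 18.0) -> float:
    """Years until a search over extra_key_bits additional bits costs
    what the shorter search costs now."""
    return extra_key_bits * halving_months / 12.0

# Example: 56-bit vs. 40-bit keys differ by 16 bits (a 2**16 = 65,536x
# work factor), so cost parity arrives after 16 cost halvings:
print(years_until_cost_parity(56 - 40))  # 24.0
```

On this assumption, any fixed "weak" key length loses a factor
of two in safety margin every year and a half.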

   Widely available technologies now enable multiple
distributed workstations to work collectively on a
computational problem at the behest of only a few people; Box
4.6 in Chapter 4 discusses the brute-force cryptanalysis of
messages encrypted with the 40-bit RC4 algorithm. Moreover, it
is not clear that the computational resources of unauthorized
parties can be limited in any meaningful way. In today's
environment, unauthorized parties will almost always be able
to assemble the resources needed to mount successful
brute-force attacks against weak cryptography, to the
detriment of those using such cryptography. Thus, any
technical dividing line between authorized and unauthorized
decryption would change rather quickly.


Hidden Back Doors

   A "back door" is an entry point to an application that
permits access or use by other than the normal or usual means.
Obviously, a back door known to government can be used to
obtain exceptional access. Back doors may be open or hidden.
An open back door is one whose existence is announced
publicly; an example is an escrowed encryption system, which
everyone knows is designed to allow exceptional access.(29) By
its nature, an open back door is explicit; it must be
deliberately and intentionally created by a designer or
implementer.

   A hidden back door is one whose existence is not widely
known, at least upon initial deployment. It can be created
deliberately (e.g., by a designer who insists on retaining
access to a system that he may have created) or accidentally
(e.g., as the result of a design flaw). Often, a user wishing
access through a deliberately created hidden back door must
pass through special system-provided authorization services.
Almost by definition, an accidentally created hidden back door
requires no special authorization for its exploitation,
although finding it may require special knowledge. In either
case, the existence of hidden back doors may or may not be
documented; frequently, it is not.

   Particularly harmful hidden back-doors can appear when
"secure" applications are implemented using insecure operating
systems; more generally, "secure" applications layered on top
of insecure systems may not be secure in practice.
Cryptographic algorithms implemented on weak operating systems
present another large class of back doors that can be used to
undermine the integrity and the confidentiality that
cryptographic implementations are intended to provide. For
example, a database application that provides strong access
control and requires authorization for access to its data
files but is implemented on an operating system that allows
users to view those files without going through the database
application does not provide strong confidentiality. To
provide strong confidentiality, such an application may well
need to keep its data files encrypted.

   The existence of back doors can pose high-level risks. The
shutdown or malfunction of life-critical systems, loss of
financial stability in electronic commerce, and compromise of
private information in database systems can all have serious
consequences. Even if back doors are undocumented, they can be
discovered and misused by insiders or outsiders. Reliance on
"security by obscurity" is always dangerous, because trying to
suppress knowledge of a design fault is generally very
difficult. If a back door exists, it will eventually be
discovered, and its discoverer can post that knowledge
worldwide. If systems containing a discovered back door were
on the Internet or were accessible by modem, massive
exploitation could occur almost instantaneously, worldwide. If
back doors lack a capability for adequate authentication and
accountability, then it can be very difficult to detect
exploitation and to identify the culprit.


Translucent Cryptography

   Translucent cryptography has been proposed by Ronald Rivest
as an alternative to escrowed encryption.(30) The proposed
technical scheme, which involves no escrow of unit keys, would
ensure that any given message or file could be decrypted by
the government with probability p; the value of p (0 < p < 1)
would be determined by the U.S. Congress. In other words, on
average, the government would be able to decrypt a fraction p
of all messages or files to which it was given legal access.
Today (without encryption), p = 1. In a world of strong
(unescrowed) encryption, p = 0. A large value of p favors law
enforcement, while a small value of p favors libertarian
privacy. Rivest proposes that some intermediate value of p
would balance the interests of both sides.

   It is not necessary that the value of p be fixed for all
time or be made uniform for all devices. p could be set
differently for cellular telephones and for e-mail, or it
could be raised or lowered as circumstances dictated. The
value of p would be built into any given encryption device or
program.

   Note that in contrast to escrowed encryption, translucent
cryptography requires no permanent escrowing of unit keys,
although it renders access indeterminate and probabilistic.
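
   Rivest's construction achieves this probabilistic access
with cryptographic machinery not described here; the following
toy simulation (an illustration of the statistical effect of p
only, not the actual scheme) simply tags each message as
government-recoverable with probability p:

```python
# Toy simulation of translucent access (illustrative only; NOT Rivest's
# actual construction): each message independently becomes recoverable
# by the government with probability p.
import random

def fraction_recoverable(num_messages: int, p: float, seed: int = 1) -> float:
    """Fraction of messages that end up recoverable in one simulated run."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(num_messages) if rng.random() < p)
    return hits / num_messages

# Over many messages, the recoverable fraction converges to p:
print(round(fraction_recoverable(100_000, 0.4), 2))
```

As the simulation shows, the government cannot know in advance
which messages it will be able to read; only the long-run
fraction p is fixed.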

----------

   (27) Criminal Division, U.S. Department of Justice,
*Federal Guidelines for Searching and Seizing Computers*,
Washington, D.C., July 1994, p. 55.

   (28) Moore's law is an empirical observation that the cost
of computation drops by a factor of two approximately every 18
months.

   (29) Of course, the fact that a particular product is
escrowed may not necessarily be known to any given user. Many
users learn about the features of a product through reading
advertisements and operating manuals for the product; if these
printed materials do not mention the escrowing features, and
no one tells the user, he or she may well remain ignorant of
them, even though the fact of escrow is "public knowledge."

   (30) Ronald Rivest, *Translucent Cryptography: An
Alternative to Key Escrow*, paper presented at the Crypto 1995
Rump Session, August 29, 1995.

____________________________________________________________


               7.2.4 Network-based Encryption

Security for Voice Communications

   In principle, secure telephony can be made the
responsibility of telephone service providers. Under the
current regulatory regime (changing even as this report is
being written), tariffs often distinguish between data and
voice. Circuits designated as carrying ordinary voice (taken
here to include fax and modem traffic) could be protected by
encryption supplied by the service provider, perhaps as an
extra security option that users could purchase. Common
carriers (service providers in this context) that provide
encryption services are required by the Communications
Assistance for Law Enforcement Act to decrypt for law
enforcement authorities upon legal request. (The "trusted
third party" (TTP) concept discussed in Europe(31) is similar
in the sense that TTPs are responsible for providing key
management services for secure communications. In particular,
TTPs provide session keys over secure channels to end users,
who can then use those keys to encrypt communications with parties
of interest; these keys are made available to law enforcement
officials upon authorized request.)

   The simplest version of network-based encryption would
provide for link encryption (e.g., encrypting the voice
traffic only between switches). Link encryption would leave
the user vulnerable to eavesdropping at a point between the
end-user device and the first switching office. In principle,
a secure end-user device could be used to secure this "last
mile" link.(32)

   Whether telecommunications service providers will move
ahead on their own with network-based encryption for voice
traffic is uncertain for a number of reasons. Because most
people today either believe that their calls are reasonably
secure or are not particularly concerned about the security of
their calls, the extent of demand for such a service within
the United States is highly uncertain. Furthermore, by moving
ahead in a public manner with voice encryption, telephone
companies would be admitting that calls carried on their
network are today not as secure as they could be; such an
acknowledgment might undermine their other business interests.
Finally, making network-based encryption work internationally
would remain a problem, although any scheme for ensuring
secure international communications will have drawbacks.

   More narrowly focused network-based encryption could be
used with that part of the network traffic that is widely
acknowledged to be vulnerable to interception -- namely,
wireless voice communications. Wireless communications can be
tapped "in the ether" on an entirely passive basis, without
the knowledge of either the sending or receiving party. Of
particular interest is the cellular telephone network; all of
the current standards make some provisions for encryption.
Encryption of the wireless link is also provided by GSM, a
European standard for mobile communications. In general,
communication is encrypted from the mobile handset to the
cell, but not end to end. Structured in this manner,
encryption would not block the ability of law enforcement to
obtain the contents of a call, because access could always be
obtained by tapping the ground station.

   At present, transmission of most wireless communications is
analog. Unless special measures are taken to prevent
surveillance, analog transmissions are relatively easy to
intercept. However, it is widely expected that wireless
communications will become increasingly digital in the future,
with two benefits for security. One is that compared
to analog signals, even unencrypted digital communications are
difficult for the casual eavesdropper to decipher or
interpret, simply because they are transmitted in digital
form. The second is that digital communications are relatively
easy to encrypt.

Security for Data Communications

   The body responsible for determining technical standards
for Internet communications, the Internet Engineering Task
Force, has developed standards for the Internet Protocol
(version 6, also known as IPv6) that require conforming
implementations to have the ability to encrypt data packets,
with the default method of encryption being DES.(33) However,
IPv6 standards are silent with respect to key management, and
so leave open the possibility that escrow features might or
might not be included at the vendor's option.

   If the proposed standards are finalized, vendors may well
face a Hobson's choice: to export Internet routing products
that do not conform to the IPv6 standard (to obtain favorable
treatment under the current ITAR, which do not allow
exceptions for encryption stronger than 40-bit RC2 or
RC4), or to develop products that are fully compliant with
IPv6 (a strong selling point), but only for the domestic
market. Still, escrowed implementations of IPv6 would be
consistent with the proposed standard and might be granted
commodity jurisdiction to the Commerce Control List under
regulations proposed by the Administration for escrowed
encryption products.

----------

   (31) See for example, Nigel Jefferies, Chris Mitchell, and
Michael Walker, "A Proposed Architecture for Trusted Third
Party Services," Royal Holloway, University of London, 1995.

   (32) The "last mile" is a term describing that part of a
local telephone network between the premises of an individual
subscriber and the central-office switch from which service is
received. The vulnerability of the "last mile" is increased
because it is easier to obtain access to the physical
connections and because the volume of traffic is small enough
to permit the relevant traffic to be isolated easily. On the
other hand, the vulnerability of the switch is increased
because it is often accessible remotely through dial-in ports.

   (33) The Network Working Group has described protocols that
define standards for encryption, authentication, and integrity
in the Internet Protocol. These protocols are described in the
following documents, issued by the Network Working Group as
Requests for Comments (RFCs) in August 1995:

RFC       Title

1825    Security Architecture for the Internet Protocol
        (describes the security mechanisms for IP version 4
        (IPv4) and IP version 6 (IPv6)).

1826    IP Authentication Header (AH; describes a mechanism
        for providing cryptographic authentication for IPv4
        and IPv6 datagrams).

1827    IP Encapsulating Security Payload (ESP; describes a
        mechanism that works in both IPv4 and IPv6 for
        providing integrity and confidentiality to IP
        datagrams).

1828    IP Authentication using Keyed MD5 (describes the use
        of a particular authentication technique with IP-AH).

1829    The ESP DES-CBC Transform (describes the use of a
        particular encryption technique with the IP
        Encapsulating Security Payload, ESP).

These documents are available from ftp://ds.internic.net/
rfc/rfcNNNN.txt, where NNNN is the RFC number.

____________________________________________________________


        7.2.5 Distinguishing Between Encrypted Voice
              and Data Communications Services
                   for Exceptional Access

   For purposes of allowing exceptional access, it may be
possible to distinguish between encrypted voice and data
communications, at least in the short run. Specifically, a
proposal by the JASON study group suggests that efforts to
install features for exceptional access should focus on secure
voice communications, while leaving to market forces the
evolution of secure data communications and storage.(34) This
proposal rests on the following propositions:

   +    Telephony, as it is experienced by the end user, is a
relatively mature and stable technology, compared to data
communications services that evolve much more rapidly. Many
people -- perhaps the majority of the population -- will
continue to use devices that closely resemble the telephones
of today, and many more people are familiar with telephones
than are familiar with computers or the Internet.

   An important corollary is that regulation of rapidly
changing technologies is fraught with more danger than is the
regulation of mature technologies, simply because regulatory
regimes are inherently slow to react and may well pose
significant barriers to the development of new technologies.
This is especially true in a field moving as rapidly as
information technology.

   +    Telephony has a long-standing regulatory and technical
infrastructure associated with it, backed by considerable
historical precedent, such as that for law enforcement
officials obtaining wiretaps on telephonic communications
under court order. By contrast, data communications services
are comparatively unregulated (Box 7.5).

   +    In remarks to the committee, FBI Director Louis Freeh
pointed out that it was voice communications that drove the
FBI's desire for passage of the Communications Assistance for
Law Enforcement Act (CALEA); he acknowledged that other
mechanisms for communication might be relevant to law
enforcement investigations but said that the FBI has undertaken
non-legislative approaches to deal with those mechanisms.

   +    Demand for secure telephone communications, at least
domestically, is relatively small, if only because most users
consider today's telephone system to be relatively secure. A
similar perception of Internet security does not obtain today,
and thus the demand for highly secure data communications is
likely to be relatively greater and should not be the subject
of government interference.

   Under the JASON proposal, attempts to influence the
inclusion of escrow features could affect only the hardware
devices that characterize telephony today (e.g., a dedicated
fax device, an ordinary telephone). In general, these devices
do not allow user programming or additions, and in particular,
they lack capabilities that would enable the user to add
encryption easily.

   The JASON study also recognized that technical trends in
telecommunications are such that telephony will be
increasingly indistinguishable from data communications. One
reason is that communications are becoming increasingly
digital. A bit is a bit, whether it was originally part of a
voice communication or part of a data communication, and the
purpose of a communications infrastructure is to transport
bits from Point A to Point B, regardless of the underlying
information content; reconstituting the transported bits into
their original form will be a task left to the parties at
Point A and Point B. Increasingly, digitized signals for
voice, data, images, and video will be transported in similar
ways over the same network facilities, and often they will be
combined into single multiplexed streams of bits as they are
carried along.(35)

   For example, a voice-generated analog sound wave that
enters a telephone may be transmitted to a central switching
office, at which point it generally is converted into a
digital bit stream and merged with other digital traffic that
may originally have been voices, television signals, and
high-speed streams of data from a computer. The network
transports all of this traffic across the country by a
fiber-optic cable and converts the bits representing voice
back into an analog signal only when it reaches the switching
office that serves the telephone of the called party. To a
contemporary user of the telephone, the conversation proceeds
just as it might have done 30 years ago (although probably
with greater fidelity), but the technology used to handle the
call is entirely different.

   Alternatively, a computer connected to a data network can
be converted into the functional equivalent of a
telephone.(36) Some on-line service providers will be offering
voice communications capability in the near future, and the
Internet itself can be used today to transport real-time voice
and even video communications, albeit with relatively low
fidelity and reliability but also at very low cost.(37) Before
these modalities become acceptable for mainstream purposes,
the Internet (or its successor) will have to implement on a
wide scale new protocols and switching services to eliminate
current constraints that involve time delays and bandwidth
limitations.

   A second influence that will blur the distinction between
voice and data is that the owners of the devices and lines
that transport bits today are typically the common carriers --
firms originally formed to carry long-distance telephone calls
and today subject to all of the legal requirements imposed on
common carriers (see Box 7.5). But these firms sell transport
capacity to parties connecting data networks, and much of
today's bulk data traffic is carried over communications links
that are owned by the common carriers. The Telecommunications
Reform Act of 1996 will further blur the lines among service
providers.

   The lack of a technical boundary between telephony and data
communications is the result of the way in which today's
networks are constructed. Networks are built upon a protocol
"stack" that embodies protocols at different layers of
abstraction. At the very bottom are the protocols for the
physical layer that define the voltages and other physical
parameters that represent ones and zeros. On top of the
physical layer are other protocols that provide higher-level
services by making use of the physical layer. Because the bulk
of network traffic is carried over a physical infrastructure
that was designed for voice communications (i.e., the public
switched telecommunications network), interactions at the
physical layer can be quite naturally regarded as being in the
domain of "voice." But interactions at higher layers in the
stack are more commonly associated with "data."

   Acknowledging these difficulties, the JASON study concluded
that limiting efforts to promote escrowed encryption products
to those associated with voice communications had two
important virtues. First, it would help to preserve law
enforcement access to a communications mode --
namely telephony -- that is widely regarded as important to
law enforcement. Second, it would avoid premature government
regulation in the data services area (an area that is less
important historically to criminal investigation and
prosecution than is telephony), thus avoiding the damage that
could be done to a strong and rapidly evolving U.S.
information technology industry. It would take time -- several
years to a decade -- for the technical "loopholes" described
above to become significant, thus giving law enforcement time
to adapt to a new technical reality.

----------

   (34) JASON Encryption/Privacy Study, Report JSR-93-520
(unpublished), JASON Program Office, MITRE Corporation,
McLean, Virginia, 1993.

   (35) Note, however, that the difficulty of searching for a
given piece of information does depend on whether it is voice
or text. It is quite straightforward to search a given digital
stream for a sequence of bits that represents a particular
word as text, but quite difficult to search a digital stream
for a sequence of bits that represents that particular word as
voice.

   (36) For example, an IBM catalog offers for general
purchase a "DSP Modem and Audio Card" with "Telephony
Enhancement" that provides a full-duplex speaker telephone for
$254. The card is advertised as being able to make the
purchaser's PC into "a telephone communications center with
telephone voice mail, caller ID, and full duplex speakerphone
capability (for true simultaneous, two-way communications)."
See *The IBM PC Direct Source Book*, Fall 1994, p. 43. An article
in the Hewlett-Packard Journal describes the ease with which
a telephone option card was developed for a workstation; see
S. Paul Tucker, "HP TeleShare: Integrating Telephone
Capabilities on a Computer Workstation," *Hewlett-Packard
Journal*, April 1995, pp. 69-74.

   (37) In January 1996, it was estimated that approximately
20,000 people worldwide were users of Internet telephone
service. See Mike Mills, "It's the Net's Best Thing to Being
There," *Washington Post*, January 23, 1996, p. C-1.

____________________________________________________________


           7.2.6 A Centralized Decryption Facility
              for Government Exceptional Access

   Proposed procedures to implement the retrieval of keys
escrowed under the Clipper initiative call for the escrowed
key to be released by the escrow agencies to the requesting
law enforcement authorities upon presentation of proper legal
authorization, such as a court order. Critics have objected to
this arrangement because it potentially compromises keys for
all time -- that is, once the key to a specific telephone has
been divulged, it is in principle possible to eavesdrop
forever on conversations using that telephone, despite the
fact that court-ordered wiretaps must have a finite duration.

   To counter this criticism, administration officials have
designed a plan that calls for keys to be transmitted
electronically to EES-decryption devices in such a way that
the decryption device will erase the key at the time specified
in the court order. However, acceptance of this plan relies on
assurances that the decryption device would indeed work in
this manner. In addition, this proposal is relevant only to
the final plan -- the interim procedures specify manual key
handling.

   Another way to counter the objection to potential
long-lasting compromise of keys involves the use of a
centralized government-operated decryption facility. Such a
facility would receive EES-encrypted traffic forwarded by law
enforcement authorities and accompanied by appropriate legal
authorization. Keys would be made available by the escrow
agents to the facility rather than to the law enforcement
authorities themselves, and the plaintext would be returned to
the requesting authorities. Thus, keys could never be kept in
the hands of the requesting authorities, and concern about
illicit retention of keys by law enforcement authorities could
be reduced. Of course, concerns about retention by the
decryption facility would remain, but since the number of
decryption facilities would be small compared to the number of
possible requesting law enforcement authorities, the problem
would be more manageable. Since the decryption facilities
would likely be under centralized control as well, it would be
easier to promulgate and enforce policies intended to prevent
abuse.(38)

   One important aspect of this proposal is that the
number of facilities constructed and the capacity
of each could limit the number of simultaneous wiretaps
possible at any given time. Such a constraint would force law
enforcement authorities to exercise great care in choosing
targets for interception, just as they must when they are
faced with constraints on resources in prosecuting cases. A
result could be greater public confidence that wiretaps
were being used only in important cases. On the other hand, a
limit on the number of simultaneous wiretaps possible is also
a potential disadvantage from the standpoint of the law
enforcement official, who may not wish to make resource-driven
choices about how and whom to prosecute or investigate. Making
encryption keys directly available to law enforcement
authorities allows them to conduct wiretaps unconstrained by
financial and personnel limitations.

   A centralized decryption facility would also present
problems of its own. For example, many people would regard it
as more threatening to give a centralized entity the
capability to acquire and decrypt all traffic than to have
such capabilities distributed among local law enforcement
agencies. In addition, centralizing all wiretaps and getting
the communications out into the field in real time could
require a complex infrastructure. The failure of a centralized
facility would have more far-reaching effects than a local
failure, crippling a much larger number of wiretaps at once.

----------

   (38) The committee suspects that the likelihood of abusive
exercise of wiretap authority is greater for parties that are
farther removed from higher levels of government, although the
consequences may well be more severe when parties closer to
the top levels of government are involved. A single "bad
apple" near the top of government can set a corrupt and
abusive tone for an entire government, but at least "bad
apples" tend to be politically accountable. By contrast, the
number of parties tends to increase as those parties are
farther and farther removed from the top, and the likelihood
that at least some of these parties will be abusive seems
higher. (Put differently, the committee believes that
state/local authorities are more likely to be abusive in their
exercise of wiretapping authority simply because they do the
majority of the wiretaps. Note that while Title III calls for
a report to be filed on every federal and state wiretap order,
most of the missing reports concern state wiretap
orders rather than federal orders. See Administrative Office
of the United States Courts, *Wiretap Report*, AOUSC,
Washington, D.C., April 1995, Table 2.)

____________________________________________________________


                     7.3 LOOMING ISSUES


   Two looming issues have direct significance for national
cryptography policy: determining the level of encryption
needed to protect against high-quality attacks, and organizing
the U.S. government for a society that will need better
information security. Appendix M describes two other issues
that relate but are not central to the current debate over
cryptography policy: digital cash and the use of cryptography
to protect intellectual property.


           7.3.1 The Adequacy of Various Levels of
           Encryption Against High-Quality Attack

   What level of encryption strength is needed to protect
information against high-quality attack? For purposes of
analysis, this discussion considers only perfect
implementations of cryptography for confidentiality (i.e.,
implementations without hidden "trap doors," installed on
secure operating systems, and so on). Thus, the only issues of
significance for this discussion are the size of the key and
the algorithm used to encrypt the original plaintext.

   Any cryptanalysis problem can be solved by brute force
given enough computers and time; the question is whether it is
possible to assemble enough computational resources to allow
a brute-force cryptanalysis on a time scale and cost
reasonable for practical purposes.

   As noted in Chapter 4, a message encoded with a 40-bit RC4
algorithm was recently broken in 8 days by a brute-force
search through the use of a single workstation optimized for
speed in graphics processing.

   Even so, such a key size is adequate for many purposes
(e.g., credit card purchases). It is also sufficient to deny
access to parties with few technical skills, or to those with
access to limited computing resources. But if the data being
protected is valuable (e.g., if it refers to critical
proprietary information), 40-bit keys are inadequate from an
information security perspective. The reason is that for
logistical and administrative reasons, it does not make sense
to require a user to decide what information is or is not
critical -- the simplest approach is to protect both critical
and noncritical information alike at the level required for
protecting critical information. If this approach is adopted,
the user does not run the risk of inadequately protecting
sensitive information. Furthermore, the compromise of a single
piece of information can be catastrophic, and since it is
generally impossible to know if a particular piece of
information has been compromised, those with a high degree of
concern for the confidentiality of information must be
concerned about protecting all information at a level higher
than the thresholds offered by the 8-day cryptanalysis time
described above.

   From an interceptor's point of view, the cryptanalysis
times provided by such demonstrations are quite daunting,
because they refer to the time needed to cryptanalyze a single
message. Devoting that much effort to a specific encrypted
message may be worthwhile when the message is known with high
probability to be useful; however, such times are highly
burdensome when many messages must be collected and processed
to yield one useful message. An eavesdropper could well have
considerable difficulty in finding the ciphertext
corresponding to critical information, but the information
security manager cannot take the chance that a critical piece
of information might be compromised anyway.(39)

   A larger key size increases the difficulty of a brute-force
search. For symmetric algorithms, a 56-bit key entails a work
factor that is 2^16 (65,536) times larger than that of a
40-bit key and implies a search time of about 1,430 years on
a single comparable workstation (assuming that the algorithm
using that key would take about the same time to execute as
the RC4 algorithm). Using more computers would decrease the
time proportionally.
(A discussion of key lengths for asymmetric algorithms is
contained in Chapter 2.)
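
   These figures can be checked with simple arithmetic, taking
as a baseline the 8-day, single-workstation cryptanalysis of
40-bit RC4 noted above:

```python
# Check of the work-factor arithmetic in the text: a 56-bit key versus
# the 40-bit RC4 key broken in 8 days on a single workstation.
extra_bits = 56 - 40
work_factor = 2 ** extra_bits            # 65,536
days = 8 * work_factor                   # days on one comparable workstation
years = days / 365
print(work_factor, round(years))         # 65536 keys factor, about 1,436 years

# The search time divides proportionally across machines:
machines = 1_000
print(round(years / machines, 1))        # about 1.4 years on 1,000 machines
```

The computed figure of roughly 1,436 years matches the
"about 1,430 years" cited in the text.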

   Large speed-up factors for search time would be possible
through the use of special-purpose hardware, which can be
optimized to perform specific tasks. Estimates have been made
regarding the amount of money and time needed to conduct an
exhaustive key search against a message encrypted using the
DES algorithm. Work by Wiener in 1993,(40) Dally in
1994,(41) and Diffie et al. in 1996(42) suggests the
feasibility of using special-purpose processors costing a few
million dollars working in parallel or in a distributed
fashion to enable a brute-force solution of a single 56-bit
DES cipher on a time scale of hours. When the costs of design,
operation, and maintenance are included (and these costs are
generally much larger than the cost of the hardware itself),
the economic burden of building and using such a machine would
be significant for most individuals and organizations.
Criminal organizations would have to support an infrastructure
for cracking DES through brute-force search clandestinely, to
avoid being targeted and infiltrated by law enforcement
officials. As a result, developing and sustaining such an
infrastructure would be even more difficult for criminals
attempting to take that approach.
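The shape of such an estimate can be reproduced with a
back-of-the-envelope model; the per-chip search rate and chip
cost below are illustrative assumptions, not figures from the
cited studies:

```python
# Back-of-the-envelope model of a parallel DES key-search machine.
# The per-chip rate and cost are illustrative assumptions only,
# not figures from the studies cited in the text.

KEYS_PER_SEC_PER_CHIP = 50_000_000    # assumed rate of one custom chip
COST_PER_CHIP = 10.50                 # assumed cost per chip, dollars
HARDWARE_BUDGET = 1_000_000           # hardware cost of the machine

chips = int(HARDWARE_BUDGET / COST_PER_CHIP)
total_rate = chips * KEYS_PER_SEC_PER_CHIP
average_trials = 2 ** 55              # half the 2^56 keyspace on average

hours = average_trials / total_rate / 3600
print(f"{chips} chips recover a key in about {hours:.1f} hours")
```

Under these assumed numbers the machine finds a key in a few
hours, consistent with the "time scale of hours" cited above;
as the text notes, design, operation, and maintenance costs
would dominate the hardware budget modeled here.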

   Such estimates suggest that brute-force attack against
56-bit algorithms such as DES would require the significant
effort of a well-funded adversary with access to considerable
resources. Such attacks would be far more likely from foreign
intelligence services or organized criminal cartels with
access to considerable resources and expertise, for whom the
plaintext information sought would have considerable value,
than from the casual snoop or hacker who is merely curious or
nosy.

   Thus, for routine information of relatively low or moderate
sensitivity or value, 56-bit protection probably suffices at
this time. But for information of high value, especially
information that would be valuable to foreign intelligence
services or major competitors, the adequacy in a decade of
56-bit encryption against a determined and rich attacker is
open to question.

----------

   (39)  In general, information security managers must
develop a model of the threat and respond to that threat,
rather than simply assuming the worst (for which the only
possible response would be to do "everything"). However, in
the case of encryption and in the absence of governmental
controls on technology, strong encryption costs about the same
as weak encryption. Under such circumstances, it makes no
sense at all for the information security manager to choose
weak encryption.

   (40) M.J. Wiener, "Efficient DES Key Search," TR-244, May
1994, School of Computer Science, Carleton University, Ottawa,
Canada; presented at the Rump Session of Crypto '93.

   (41) William P. Dally, Professor of Electrical Engineering,
Massachusetts Institute of Technology, private communication
to the committee, September 1995.

   (42) Matt Blaze, Whitfield Diffie, Ronald L. Rivest, Bruce
Schneier, Tsutomu Shimomura, Eric Thompson, and Michael
Wiener, "Minimal Key Lengths for Symmetric Ciphers to Provide
Adequate Commercial Security: A Report by an Ad Hoc Group of
Cryptographers and Computer Scientists," January 1996.
Available from http://www.bsa.org.

____________________________________________________________


            7.3.2 Organizing the U.S. Government
     for Better Information Security on a National Basis

   As noted in Chapter 6, no organization or entity within the
federal government has the responsibility for promoting
information security in the private sector or for coordinating
information security efforts between government and
nongovernment parties. NIST is responsible for setting Federal
Information Processing Standards, and from time to time the
private sector adopts these standards, but NIST has authority
for information security only in unclassified government
information systems. Given the growing technological
importance of the private sector and the government's
dependence on the private information infrastructure, the
security practices of that infrastructure may have a profound
effect on government activities, both civilian and military.

   How can coordination be pursued? Coherent policy regarding
information assurance, information security, and the operation
of the information infrastructure itself is needed. Business
interests and the private sector need to be represented at the
policymaking table, and a forum for resolving policy issues is
needed. And, since the details of implementation are often
critical to the success of any given policy, policy
implementation and policy formulation must go hand in hand.

   Information security functions that may call for
coordinated national action vary in scale from large to small:

   +    Assisting individual companies in key commercial
sectors at their own request to secure their corporate
information infrastructures by providing advice, techniques,
and analysis that can be adopted at the judgment and
discretion of the company involved. In some key sectors (e.g.,
banking and telecommunications), conduits and connections for
such assistance already exist as the result of government
regulation of firms in those sectors. At present, the U.S.
government will provide advice regarding information security
threats, vulnerabilities, and solutions only to government
contractors (and federal agencies).(43)

   +    Educating users both inside and outside government
about various aspects of better information security. For
example, many product vendors and potential users are unaware
of the fact that there are no legal barriers to the use of
cryptography domestically. Outreach efforts could also help in
publicizing the information security threat.

   +    Certifying appropriate entities that perform some
cryptographic service. For example, a public-key
infrastructure for authentication requires trusted
certification authorities (Appendix H). Validating the bona
fides of these authorities (e.g., through a licensing
procedure) will be an essential aspect of such an
infrastructure. In the event that private escrow agents become
part of an infrastructure for the wide use of cryptography,
such agents will need to be approved or certified to give the
public confidence in using them.

   +    Setting de jure standards for information security. As
noted above, the NIST charter prevents it from giving much
weight to commercial or private sector needs in the
formulation of Federal Information Processing Standards if
those needs conflict with those of the federal government,
even when such standards affect practice in the private
sector. Standards of technology and of practice that guide the
private sector should be based on private sector needs, both
to promote "best practices" for information security and to
provide a legitimate defense in liability cases involving
breaches of information security.

   How such functions should be implemented is another major
question. The committee does not wish to suggest that the
creation of a new organization is the only possible mechanism
for performing these functions; some existing organization or
entity could well be retooled to service these purposes. But
it is clear that whatever entity assumes these functions must
be highly insulated from political pressure (arguing for a
high degree of independence from the executive branch),
broadly representative (arguing for the involvement of
individuals that have genuine policy-making authority drawn
from a broad range of constituencies, not just government),
and fully capable of hearing and evaluating classified
arguments if necessary (arguing for the need for security
clearances).(44)

   One proposal that has been discussed for assuming these
responsibilities is based on the Federal Reserve Board. The
Federal Reserve Board oversees the Federal Reserve System
(FRS), the nation's central bank. The FRS is responsible for
setting monetary policy (e.g., setting the discount rate),
supervising banking organizations, conducting open market
operations, and providing services to financial institutions.
The Board of Governors is the FRS's central coordinating body.
Its seven members are appointed by the President of the United
States and confirmed by the Senate for 14-year terms. These
terms are staggered to insulate the governors from day-to-day
political pressure. Its primary function is the formulation of
monetary policy, but the Board of Governors also has
supervisory and regulatory responsibilities over the
activities of banking organizations and the Federal Reserve
Banks.

   A second proposal has been made by the Cross-Industry
Working Team (XIWT) of the Corporation for National Research
Initiatives for the U.S. government to establish a new Joint
Security Technology Policy Board as an independent agency of
the government.(45) Under this proposal, the board would be an
authoritative agency and coordination body officially
chartered by statute or executive order "responsible and
answerable" for federal performance across all of its
agencies, and for promotion of secure information technology
environments for the public. In addition, the board would
solicit input, analysis, and recommendations about security
technology policy concerns from private sector groups and
government agencies, represent these groups and agencies
within the board, disseminate requests and inquiries and
information back to these groups and agencies, review draft
legislation in cognizant areas and make recommendations about
the legislation, and represent the U.S. government in
international forums and other activities in the domain of
international security technology policy. The board would be
chaired by the Vice President of the United States and would
include an equal number of members appointed from the private
sector and the federal government.

   A third proposal, perhaps more in keeping with the
objective of minimal government, would be to use existing
agencies and organizational structures. The key element of the
proposal would be to create an explicit function in the
government, that of domestic information security. Because
information policy intersects with the interests and
responsibilities of several agencies and cabinet departments,
the policy role should arguably reside in the Executive Office
of the President. Placing the policy function there would also
give it the importance and visibility it requires. It might
also be desirable to give specific responsibility for the
initiation and coordination of policy to a Counselor to the
President for Domestic Information Security (DIS). This
individual could chair an interagency committee consisting of
agencies and departments with a direct interest in and
responsibilities for information security matters, including
the operating agency, economic policy agencies (Departments of
Treasury and Commerce), law enforcement agencies (FBI, DEA,
ATF), and international affairs and intelligence agencies
(Departments of State and Defense, CIA).

   Operationally, a single agency could have responsibility
for standards setting, certification of escrow agents,
approval of certificate holders for authentication purposes,
public education on information security, definition of "best
practices," management of cryptography on the Commerce Control
List, and so on. The operating agency could be one with an
economic policy orientation, such as the Department of
Commerce. An alternative point of responsibility might be the
Treasury Department, although its law enforcement
responsibilities could detract from the objective of raising
the economic policy profile of the information security
function.

   The public advisory committee, which is an essential
element of this structure, could be made up of representatives
of the computing, telecommunications, and banking industries,
as well as "public" members from academia, law, and so on.
This committee could be organized along the lines of the
President's Foreign Intelligence Advisory Board and could
report to the Counselor for DIS.

----------

   (43) This responsibility belongs to the NSA, as specified
in the NSA-NIST Memorandum of Understanding of March 24, 1989.
Reprinted in Office of Technology Assessment, *Information
Security and Privacy in Network Environment*, OTA, Washington,
D.C., September 1994.

   (44) As noted in the preface to this report, the committee
concluded that the broad outlines of national cryptography
policy can be argued on an unclassified basis. Nevertheless,
it is a reality of decision making in the U.S. government on
these matters that classified information may be invoked in
such discussions and uncleared participants asked to leave the
room. To preclude this possibility, participating members
should have the clearances necessary to engage as full
participants in order to promote an effective interchange of
views and perspectives.

   (45) Cross-Industry Working Team, *A Process for
Information Security Technology: An XIWT Report on
Industry-Government Cooperation for Effective Public Policy*,
March 1995. Available from Corporation for National Research
Initiatives, Reston, Virginia, or from
http://www.cnri.reston.va.us.

____________________________________________________________


                          7.4 RECAP


   This chapter describes a number of possible policy options
but does not attempt to assemble them into a coherent policy
framework. That is the function of Chapter 8.


____________________________________________________________

TABLE 7.1 Mechanisms of Export Management


Type: Total Embargo
Description: All or most exports of cryptography to target
country prohibited (this would be more restrictive than
today's regime). Hypothetical example: no products with
encryption capabilities can be exported to Vietnam, Libya,
Iraq, Iran.

When Appropriate: Appropriate during wartime or other acute
national emergency or when imposed pursuant to United Nations
or other broad international effort.


Type: Selective export prohibitions

Description: Certain products with encryption capabilities
barred for export to target country. Hypothetical example:
nothing cryptographically stronger than 40-bit RC4 can be
exported to South Korea, Taiwan.

When Appropriate: Appropriate when supplier countries agree on
items for denial and cooperate on restrictions.


Type: Selective activity prohibitions

Description: Exports of cryptography for use in particular
activities in target country prohibited. Hypothetical example:
PGP allowed for export to pro-democracy groups in People's
Republic of China but not for government use.

When Appropriate: Appropriate when supplier countries identify
proscribed operations and agree to cooperate on restrictions.


Type: Transactional licensing

Description: Products with encryption capabilities require
government agency licensing for export to a particular country
or country group. Hypothetical example: State Department
individual validated license for a DES encryption product.
Licensing actions may be conditioned on end-use verification
or postexport verification.

When Appropriate: Appropriate when product is inherently
sensitive for export to any destination, or when items have
both acceptable and undesired potential applications. Also
requires an effective multilateral control regime.


Type: Bulk licensing

Description: Exporter obtains government authority to export
categories of products with encryption capabilities to
particular consignees for a specified time period.
Hypothetical examples: Commerce Department distribution
license, ITAR foreign manufacturing license. Note that
categories can be determined with considerable freedom.
Enforcement may rely on after-the-fact audits.

When Appropriate: Same as preceding circumstances, but when
specific transaction facts are not critical to effective
export control.


Type: Preexport notification

Description: Exporter must prenotify shipment; government
agency may prohibit, impose conditions, or exercise
persuasion. Hypothetical example: requirement imposed on
vendors of products with encryption capabilities to notify the
U.S. government prior to shipping product overseas.

When Appropriate: Generally regarded as an inappropriate
export control measure because exporter cannot accept
last-minute uncertainty.


Type: Conditions on general authority or right to export

Description: Exporter not required to obtain government agency
license but must meet regulatory conditions that preclude
high-risk exports. (In general, 40-bit RC2/RC4 encryption
falls into this category once the Commodity Jurisdiction
procedure has determined that a particular product with
encryption capabilities may be governed by the CCL.)
Hypothetical example: Commerce Department general licenses.

When Appropriate: Appropriate when risk of diversion or
undesired use is low.


Type: Postexport record keeping

Description: While no license may be necessary, the exporter
must keep records of the particulars of exports for a
specified period and submit them, or make them available, to a
government agency. Hypothetical example: vendor is required to
keep records of foreign sales of 40-bit RC2/RC4 encryption
products under a Shipper's Export Declaration.

When Appropriate: Appropriate when it is possible to monitor
exports of weak cryptography for possible diversion.

__________

SOURCE: Adapted from National Research Council, Finding Common
Ground: U.S. Export Controls in a Changed Global Environment,
National Academy Press, Washington, D.C., 1990, p. 109.

____________________________________________________________


BOX 7.1 Possible Examples of Weak Encryption Defaults

   +    The product does not specify a minimum password
length. Many users will therefore choose short, and thus weak,
passwords.

   +    The product does not perform link encryption
automatically. The user on either side of the communication
link must explicitly select an option to encrypt the
communications.

   +    The product requires user key generation rather than
simple passwords and retains a user key or generates a record
of one. Users might well accidentally compromise it and make
it available, even if they had the option to delete it.

   +    The product generates a key and instructs the user to
register it.

   +    E-mail encryption is not automatic. The sender must
explicitly select an encryption option to encrypt messages.
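The first default above, an unenforced minimum password
length, translates directly into lost key strength; the
figures below assume passwords drawn uniformly from a
62-character alphabet, which is an illustrative assumption:

```python
import math

# Entropy of a password drawn uniformly from a 62-character
# alphabet (upper case, lower case, digits): each character
# contributes log2(62), or about 5.95 bits.
ALPHABET_SIZE = 62

def entropy_bits(length: int) -> float:
    return length * math.log2(ALPHABET_SIZE)

for n in (4, 8, 14):
    print(f"{n}-character password: ~{entropy_bits(n):.0f} bits")

# A 4-character password (~24 bits) is weaker than even a 40-bit
# key; roughly 10 characters are needed to reach 56 bits.
```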

____________________________________________________________

BOX 7.2 The Microsoft CryptoAPI

   In June 1995, Microsoft received commodity jurisdiction
(CJ) to the Commerce Control List (CCL) for Windows NT with
CryptoAPI (a Microsoft trademark) plus a "base" crypto-module
that qualifies for CCL jurisdiction under present regulations
(i.e., it uses a 40-bit RC4 algorithm for confidentiality); a
similar CJ application for Windows 95 is pending. The "base"
crypto-module can be supplemented by a crypto-module provided
by some other vendor of cryptography, but the cryptographic
applications programming interface within the operating system
will function only with crypto-modules that have been
digitally signed by Microsoft, which will provide a digital
signature for a crypto-module only if the crypto-module vendor
certifies that it (the module vendor) will comply with all
relevant U.S. export control regulations. (In the case of a
crypto-module for sale in the United States only, Microsoft
will provide a digital signature upon the module vendor's
statement to that effect.)

   Responsibility for complying with export control
regulations on cryptography is as follows:

   +    Windows NT (and Windows 95, should the pending
application be successful) qualifies for CCL jurisdiction on
the basis of a State Department export licensing decision.

   +    Individual crypto-modules are subject to a
case-by-case licensing analysis, and the cryptography vendor
is responsible for compliance.

   +    Applications that use Windows NT or Windows 95 for
cryptographic services should not be subject to export control
regulations on cryptography. At the time of this writing,
Microsoft is seeking an advisory opinion to this effect so
that applications vendors do not need to submit a request for
a CJ cryptography licensing decision.
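The gatekeeping logic described above (the operating system
will use a crypto-module only if its signature verifies) can
be sketched as follows. This is a toy illustration, not the
actual CryptoAPI mechanism: the real scheme uses Microsoft's
asymmetric digital signatures, whereas a keyed hash (HMAC)
stands in here so the example is self-contained, and all names
are hypothetical.

```python
import hashlib
import hmac

# Toy sketch of signature-gated module loading. A real scheme uses
# asymmetric signatures (only the signer can sign; anyone can
# verify); an HMAC stands in here for self-containment.

SIGNING_KEY = b"hypothetical-signing-key"  # held only by the signer

def sign_module(module_bytes: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, module_bytes, hashlib.sha256).digest()

def load_crypto_module(module_bytes: bytes, signature: bytes) -> bytes:
    # The operating system refuses any crypto-module whose
    # signature does not verify.
    if not hmac.compare_digest(sign_module(module_bytes), signature):
        raise ValueError("unsigned or tampered crypto-module rejected")
    return module_bytes  # in reality, the module would be loaded here

module = b"third-party crypto-module"
sig = sign_module(module)
load_crypto_module(module, sig)             # accepted
try:
    load_crypto_module(module + b"x", sig)  # tampered: rejected
except ValueError as err:
    print(err)
```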

____________________________________________________________

BOX 7.3 Bobby Inman on the Classification of Cryptologic
Research

   In 1982, then-Deputy Director of the Central Intelligence
Agency Bobby R. Inman wrote that

   [a] ... source of tension arises when scientists,
   completely separate from the federal government, conduct
   research in areas where the federal government has an
   obvious and preeminent role for society as a whole. One
   example is the design of advanced weapons, especially
   nuclear ones. Another is cryptography. While nuclear
   weapons and cryptography are heavily dependent on
   theoretical mathematics, there is no public business market
   for nuclear weapons. Such a market, however, does exist for
   cryptographic concepts and gear to protect certain types of
   business communications.

   [However], ... cryptologic research in the business and
   academic arenas, no matter how useful, remains redundant to
   the necessary efforts of the federal government to protect
   its own communications. I still am concerned that
   indiscriminate publication of the results of that research
   will come to the attention of foreign governments and
   entities and, thereby, could cause irreversible and
   unnecessary harm to U.S. national security interests....
   [While] key features of science -- unfettered research, and
   the publication of the results for validation by others and
   for use by all mankind -- are essential to the growth and
   development of science, ... nowhere in the scientific ethos
   is there any requirement that restrictions cannot or should
   not, when necessary, be placed on science. Scientists do
   not immunize themselves from social responsibility simply
   because they are engaged in a scientific pursuit. Society
   has recognized over time that certain kinds of scientific
   inquiry can endanger society as a whole and has applied
   either directly, or through scientific/ethical constraints,
   restrictions on the kind and amount of research that can be
   done in those areas.

For the original text of Inman's article, see "Classifying
Science: A Government Proposal ... ," *Aviation Week and Space
Technology*, February 8, 1982.

____________________________________________________________

BOX 7.4 Link vs. End-to-End Encryption of Communications

   End-to-end encryption involves a stream of data traffic (in
one or both directions) that is encrypted by the end users
involved before it is fed into the communications link;
traffic in between the end users is never seen in plaintext,
and the traffic is decrypted only upon receipt by an end user.
Link encryption is encryption performed on data traffic after
it leaves one of the end users; the traffic enters one end of
the link, is encrypted and transmitted, and then is decrypted
upon exit from that link.
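The distinction can be illustrated with a toy relay; XOR with
a repeating key is a stand-in for a real cipher, and all keys
and messages here are made up for illustration:

```python
# Toy contrast between link and end-to-end encryption. XOR with a
# repeating key is an insecure stand-in cipher, used only to show
# where plaintext appears.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"attack at dawn"
link_key_1, link_key_2 = b"linkA", b"linkB"   # one key per link
e2e_key = b"end-to-end-key"                   # shared by end users

# Link encryption: traffic is decrypted on exit from each link,
# so the intermediate relay handles plaintext.
hop1 = xor_cipher(message, link_key_1)
at_relay = xor_cipher(hop1, link_key_1)       # plaintext at the relay
hop2 = xor_cipher(at_relay, link_key_2)
assert at_relay == message                    # exposed in the middle

# End-to-end encryption: the relay merely forwards ciphertext.
ciphertext = xor_cipher(message, e2e_key)
forwarded = ciphertext                        # relay cannot read it
assert forwarded != message
assert xor_cipher(forwarded, e2e_key) == message
```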


   TABLE Comparison of End-to-End and Link Encryption
____________________________________________________________

                   End-to-End
                   Encryption           Link Encryption 
____________________________________________________________

Controlling        User                 Link provider
party

Suitable           Most suitable        Facilitates bulk
traffic            for encryption       encryption of data
                   of individual
                   messages

Potential          Only at              At either end of
leaks of           transmitting         the link, which
plaintext          and receiving        may or may not be
                   stations             within the user's
                                        security perimeter

Point of           User must take       Link provider
responsibility     responsibility       takes responsibility

____________________________________________________________

BOX 7.5 Two Primary Rate and Service Models for
Telecommunications Today


Regulated Common Carrier Telephony Services

   Regulated common carrier telephony services are usually
associated with voice telephony, including fax and low-speed
modem data communications. If a "common carrier" provision
applies to a given service provider, the provider must offer
service to anyone who asks, at a rate determined by a public
utilities commission. Common carriers often own their
own transport facilities (e.g., fiber-optic cables, telephone
wires, and so on), and thus the service provider exerts
considerable control over the routing of a particular
communication. Pricing of service for the end user is often
determined on the basis of actual usage. The carrier also
provides value-added services (e.g., call waiting) to enhance
the value of the basic service to the customer.
Administratively, the carrier is usually highly centralized.


Bulk Data Transport

   Bulk services are usually associated with data transport
(e.g., data sent from one computer to another) or with
"private" telephony (e.g., a privately owned or operated
branch exchange for telephone service within a company).
Pricing for bulk services is usually a matter of negotiation
between provider and customer and may be based on statistical
usage, actual usage, reliability of transport, regional
coverage, or other considerations. Policy for use is set by
the party that pays for the bulk service, and thus, taken over
the multitude of organizations that use bulk services, is
administratively decentralized. In general, the customer
provides value-added services. Routing paths are often not
known in advance, but instead may be determined dynamically.

____________________________________________________________

[End Chapter 7]








