Secure FileSystem 6

Politics

Many governments throughout the world have an unofficial policy on cryptography which is to reserve all knowledge and use of encryption to the government in general and the elite in particular. This means that encryption is to be used firstly (in the form of restrictions on its use) for intelligence-gathering, and secondly for protecting the secret communications of the government. The government therefore uses encryption to protect its own dealings, but denies its citizens the right to use it to protect their own privacy, and denies companies the right to use it to protect their business data. Only a very small number of countries have laws guaranteeing citizens' rights to use encryption[1]. Some countries have no explicit laws covering encryption, but have governments which support the use of encryption by their citizens[2].

This policy is enforced in many ways. In the US it is mainly through the use of the ITAR, the International Traffic in Arms Regulations, a law which was passed without public debate during the second world war. This defines all encryption material (hardware and software) as "munitions", subject to special governmental control[3]. France also classifies encryption devices as munitions[4]. These "munitions" in fact have no violent use beyond perhaps beating someone to death with a box of disks. Their only possible use is to protect personal privacy and the privacy of business data.

In limiting the use (and export) of encryption technology[5], the US (and many other countries which follow the lead of the US) are not only denying their citizens the means to ensure the privacy of personal information stored on computer, they are also placing businesses at risk. With no easy way to protect their data, companies are losing billions of dollars a year to industrial espionage which even a simple measure like SFS would help to reduce. Some real-life examples of what the lack of secure encryption can do are:

- The head of the French DGSE (Direction Generale de la Securite Exterieure) secret service has publicly boasted that his organisation, through industrial espionage, helped French companies acquire over a billion dollars worth of business deals from foreign competitors [6][7].

- In a talk given to the Executives' Club of Chicago on 17 March 1994 by FBI Director Louis Freeh, he stated that:

"a nation's power is increasingly measured by economic prosperity at home and competitiveness abroad. And in some ways, the United States is a sitting duck for countries and individuals who want to take a short cut to power"

[At least 20 nations are] "actively engaged in economic espionage"

"This kind of information [cost and price structure, research and development results, marketing plans, bids and customer lists] can be intercepted from fax and satellite communications. It can be monitored from cellular and microwave telephone links. It can be retrieved from inadequately protected computer systems".

- A laptop stolen from Wing Commander David Farquhar on December 17th 1990 supposedly contained extremely sensitive information related to British military operations against Iraq during the Gulf War. This information was known and published outside the UK, but was not published in the UK due to censorship until The Irish Times ran a story on it in the January 3rd 1991 issue. More information was finally disclosed on January 9th 1991 when Associated Press threatened to run the story anyway. The laptop was supposed to contain highly sensitive military plans, but no steps had been taken to protect the information.

- Volume 10, Issue 77 of the RISKS-Forum Digest, published on 11 January 1991, reported that thieves in the Soviet Union had stolen a computer used to handle Russian internet traffic. Although the target of the theft was the computer hardware, the thieves also acquired all the private mail held on the machine as plain text files.

- The book "Friendly Spies" by Peter Schweitzer, published by Atlantic Monthly Press, gives many accounts about covert intelligence operations directed against US corporations by cold war allies, with foreign governments conspiring with foreign companies to steal US technology and economic secrets. In 1988, the University of Illinois published a document titled "A Study of Trade Secrets in High Technology Industries" which revealed that 48% of all companies surveyed admitted to being the victims of some form of inustrial espionage.

- In a talk given to the National Press Club on 3 April 1990 by Senator David Boren, he warns that:

"An increasing share of the espionage directed against the United States comes from spying by foreign governments against private American companies aimed at stealing commercial secrets to gain a national competitive advantage".

- Volume 14, Issue 34 of the RISKS-Forum Digest, published on 22 February 1993, reports that "France and Germany and many other countries require US companies to `register' their encryption keys for reasons of national security. All of the American transmissions are monitored and the data is passed on to the local competitors. Companies like IBM finally began to routinely transmit false information to their French subsidiary just to thwart the French Secret Service and by transitive property of economic nationalism, French computer companies".

- Markt und Technik, 18/94, page 49, in a discussion of US monitoring stations in Germany, reports that "powerful computers scan telephone, fax, and computer data traffic for information from certain sources, to certain destinations, or containing certain keywords, and store any interesting communications for later analysis. The fear that such monitoring stations will, after the end of the cold war, be used for industrial espionage, has been expressed by DP managers and tacitly confirmed by US security agencies".

- Directorate T (for "technology") of the KGB has three main aims, two of which relate to the acquisition of military intelligence on enemies of the (former) USSR, and the third of which is "Acceleration of Soviet scientific and technical progress by acquiring information and samples of equipment"[8]. Since the collapse of the Soviet Union, the use of the KGB for industrial espionage has increased dramatically. A great many other countries now use the idle capacity of their intelligence agencies for industrial espionage purposes.

- A US company was being consistently underbid by a Japanese competitor which was intercepting their electronic traffic. When they started encrypting their messages, the underbidding stopped. A few days later they were requested by the US government to stop using encryption in their traffic[9].

- A New Zealand computer dealer acquired 90 used disks which turned out to contain sensitive financial records from a large bank. The evening after he turned them over to the bank (for a $15,000 cash "finders fee") he was killed in a road accident. The finders fee was never recovered[10].

Despite this major security problem, the bank wouldn't learn from their mistakes. A few weeks later a large-capacity disk drive from a network server which was originally used by them was supplied to another company as a supposedly "new" replacement for a drive which had died. This drive was found to contain the complete financial records from one of their branches. Not wanting to be scraped off the side of the road one night, the system manager decided to erase the contents of the drive[11].

It isn't known how many more of their confidential financial records this bank has handed out to the public over the years.

- National Security Decision Directive 42, signed by President Bush in 1990, notes that "telecommunications and information processing systems are highly susceptible to interception, unauthorized access, and related forms of technical exploitation as well as other dimensions of the foreign intelligence threat. The technology to exploit these electronic systems is widespread and is used extensively by foreign nations and can be employed as well by terrorist groups and criminal elements"[12].

- A customer of an electrical goods retail store in the UK returned a defective PC to the store, asking them to erase the contents of the hard drive which contained confidential information. Six months later he received a phone call from someone who had purchased his old computer from the store, complete with the confidential information he had wanted erased - like the previous case, the customer had been sold a supposedly new machine which still held data left there by the previous owner[13].

- The New Zealand Securities Commission accidentally left a number of sensitive files on the hard drive of one of a group of machines which was later sold at auction for $100. These files were stored without any security measures, and related to Securities Commission and Serious Fraud Office investigations. They were recovered about a week later, but not before the press had got to them[14].

- In a report on the front page of "Infotech Weekly", Issue 157, 15 August 1994, a systems analyst says he knows of dozens of cases of confidential information being given away when old equipment is disposed of, including "three alone during the past six months when commercially-sensitive or just plain embarrassing material was passed on inadvertently". A particularly widespread problem seems to be the improper disposal of backup disks full of confidential records[15].

- The owner of a notebook computer which had stopped working asked a visiting computer instructor to take a look at it. The instructor determined that he could fix it with the Norton Utilities, which he had back at his office. The company's network administrator, who also had a copy of the Norton Utilities, offered to fix it on the spot to save taking the computer offsite. The notebook's hard drive contained the network administrator's dismissal papers. Luckily the instructor was able to retrieve the notebook before any damage was done[16].

- The book "By Way of Deception" by Victor Ostrovsky and Claire Hoy, published by St. Martins Press, New York, in 1990 (ISBN 0-312-05613-3), reports in Appendix I that Mossad's computer services routinely monitor VISA, AMEX, and Diner's Club transactions, as well as police computer traffic.

- An international banker travelled to Latin America with information on various contacts there stored on his laptop. His hotel room was broken into and the computer stolen, along with all the client information. Luckily the thieves were after the hardware rather than the software, as the information stored in the machine could have caused serious problems for the people involved[17].

In the case of the bank selling its financial records the lack of encryption not only had the potential to cause serious financial harm to the bank involved but resulted in the death of the main player. The use of a program like SFS would have made the contents of the disks unreadable to anyone but the bank.

In 1991 the US Justice Department tried to introduce legislation that would require all US digital communications systems to be reengineered (at enormous cost) to support monitoring of message traffic by the FBI. This measure was never passed into law. The next year the FBI tried to introduce a similar measure, but could find no-one willing to back the bill. In 1993, yet another attempt was made, which was fought by an unusual coalition of civil libertarians, computer users and companies, and communications providers. A poll carried out by Time/CNN in March 1994 indicated that 2/3 of Americans were opposed to the legislation[18]. It was finally passed on 7 October 1994 as the Digital Telephony Act[19].

In April 1993, the US government announced a piece of hardware called the Clipper Chip. They proposed that this device, whose details are classified and which contains self-destruct mechanisms which are activated if attempts are made to examine it too closely, be built into all telephones. Each chip has a special serial number which identifies all communications carried out with that phone. At the beginning of each transmission, telephones equipped with a Clipper Chip negotiate a connection which involves sending identifying information across the phone network, and setting up a common key to use for encrypting the conversation.

Built into this setup is a special back door which allows the government, and anyone who works for the government, and anyone who has a friend who works for the government, and anyone with enough money or force to bribe or coerce the aforementioned people, to monitor the conversation[20]. The job is made much easier by the extra identification information which the Clipper Chip attaches to the data stream. The Clipper Chip allows monitoring on a scale even George Orwell couldn't have imagined when he wrote his novel "1984"[21]. Although the US government prefers to call the mechanism used by Clipper a key escrow system, it is in fact a key forfeiture system in that all hope of privacy is forfeited to the government. The Time/CNN poll mentioned above found that 80% of Americans were opposed to the Clipper Chip[18][22].

The Clipper Chip is also susceptible to an attack in which a fake Clipper-encrypted message can be created without needing to know the encryption keys, allowing either false evidence to be created, or allowing genuine evidence to be discounted on the basis that it might have been faked[23].

A somewhat less blatant attempt to bypass communications privacy is gradually appearing in the rest of the world. The GSM digital telephone system uses a special encryption algorithm called A5X which is a modified form of a stronger system called A5. A5X exists solely as a deliberately crippled A5, and is relatively easy to bypass for electronic monitoring purposes. Although the details of A5 are classified "for national security purposes"[24], various sources have commented that even the original unmodified A5 probably provides only limited security against a determined attack, and the actual implementation exhibits some fundamental flaws (such as a 3-hour key rollover) which can greatly aid an attacker[25][26].

It is against this worrying background that SFS was created. Right from the start, the idea behind SFS was to provide the strongest possible cryptographic security. No compromises were made, there are no back doors or weaknesses designed into the system, nor will there ever be the deliberate crippling of the system or undermining of its integrity which some organizations would like. The algorithms and methods used in SFS have been selected specifically for their acknowledged strength and general acceptance by the worldwide cryptographic community, and conform to a wide variety of national and international standards for secure encryption. As new algorithms and cryptographic processes appear, SFS will be updated to always provide the best possible security available.

Footnote [1]: One of these is Japan. Article 21 of the Japanese Constitution states: "Freedom of assembly and association as well as speech, press, and all other forms of expression are guaranteed. No censorship shall be maintained, nor shall the secrecy of any means of communication be violated".

Footnote [2]: The only known UK government statement on the use of encryption is in the record of parliamentary debates (Hansard) for Thursday 21st April 1994 and consists of a written answer to a question posed by an MP:

Mr. Llew Smith: To ask the Secretary of State for the Home Department what is Her Majesty's Government's policy towards the use of private encryption technology by individuals or companies to protect data sent by electronic mail.

Mr. McLoughlin: I have been asked to reply. It is a matter for the individual, or company, to take measures, which may include private encryption technology, which they consider necessary to safeguard their electronic mail communication.

Footnote [3]: The ITAR, 22 CFR commencing with section 120, has provisions in section 121.1, under Category XII (b) that "cryptographic devices and software" are considered to be defense articles subject to the State Department's Office of Defense Trade Controls authority with respect to export.

Footnote [4]: The "decret du 18 avril 1939" defines 8 categories of arms and munitions from the most dangerous (1st category) to the least dangerous (8th category). The "decret 73-364 du 12 mars 1973" specifies that encryption equipment belongs to the second category. Any usage of such equipment requires authorization from the Prime Minister. The "decret 86-250 du 18 fev 1986" extends the definition of encryption equipment to include software. It specifies that each request for authorization for business or private usage of the equipment must be sent to the Minister of Telecommunications. The request must include a complete and detailed description of the "cryptologic process", and if this is materially possible, of two copies of the envisaged equipment (see also Footnote 6). The "loi 90-1170 du 29 decembre 1990" states that export or use of encryption equipment must be previously declared when used only for authentication, and previously authorized by the Prime Minister in all other cases, with penalties of fines of up to 500 000F and three months in jail. Import of encryption equipment (but not encrypted data) is prohibited by the "decret du 18 avril 1939", article 11. However the "loi du 29 dec 1990" only restricts use or export, not import, of encryption equipment. There are no restrictions on the import of encrypted data. Information on the (questionable) need to license encryption is available via FTP from ftp.urec.fr in the directory /pub/securite/Divers as autorisation.utilisation.crypto.

In general, these laws appear not to be enforced, with encryption software being freely imported, exported, available, and used in France.

Footnote [5]: The reasoning behind this, as stated by the Permanent Select Committee on Intelligence in its commentary of 16 June 1994 on the HR.3937 Omnibus Export Administration Act is that "the intelligence community's cryptologic success depends in part on controlling the use of encryption [...] controlling the dissemination of sophisticated encryption has been and will continue to be critical to those successes [of the US intelligence community] and US national security interests".

Footnote [6]: This was reported by cryptographer Martin Hellman at the 1993 RSA Data Security conference on 14-15 January 1993.

Footnote [7]: "If someone steals from your computer, you probably won't know it. You may find yourself losing customers, or market share, or your ideas may show up in unexpected places" - Mark Rasch, a lawyer who specialises in computer crime at the Washington law firm Arendt, Fox, Kintner, Plotkin, and Kahn, quoted in "InfoTech Weekly", Issue 168, 31 October 1994, page 10.

Footnote [8]: The industrial espionage practices of Soviet intelligence agencies are discussed in more detail in "Soviet Industrial Espionage", which appeared on page 25 of the April 1987 edition of the Bulletin of the Atomic Scientists.

Footnote [9]: Private communications from one of the people involved.

Footnote [10]: This event received nationwide TV, radio, and newspaper coverage at the time. For example, any New Zealand paper published on 7 September 1992 should provide more information.

Footnote [11]: Private communications from the system manager involved.

Footnote [12]: Portions of NSDD-42 were declassified on 1 April 1992 through the efforts of Marc Rotenberg of the Computer Professionals for Social Responsibility.

Footnote [13]: This was reported on "Watchdog", a BBC consumer affairs television program screened on Monday 26th September 1994.

Footnote [14]: This event received nationwide TV, radio, and newspaper coverage at the time. Most New Zealand papers published on 13 August 1994 contain coverage of the story. Mention of the return of the files was made about a week later.

Footnote [15]: A computer trader who buys machines at auction for reconditioning and later resale says "the amount of stuff we find on the machines we handle is incredible. You don't need to do any fancy bugging of a company's offices to get at their business records, all you need to do is buy their old computers. Occasionally they'll go as far as reformatting the drives, which won't slow anyone down for more than 10 minutes".

Footnote [16]: This was discussed in the alt.folklore.computers newsgroup on 7 March 1995, message-ID <3jgv90$74q@huey.cadvision.com>

Footnote [17]: This was discussed in the alt.privacy newsgroup on 7 January 1995, message-ID <3el7ju$gt3@pipe1.pipeline.com>.

Footnote [18]: "In a Time/CNN poll of 1,000 Americans conducted last week by Yankelovich Partners, two thirds said it was more important to protect the privacy of phone calls than to preserve the ability of police to conduct wiretaps. When informed about the Clipper Chip, 80% said they opposed it" - Philip Elmer-Dewitt, "Who Should Keep the Keys", TIME, 14 March 1994.

Footnote [19]: On 7 October 1994 the US Congress passed the HR 4922/SR 2375 Digital Telephony Act, which was signed into law as the Communications Assistance for Law Enforcement Act on 25 October. This requires telecommunications carriers to modify existing equipment and design new equipment to facilitate monitoring of communications by law enforcement agencies.

Footnote [20]: In June 1994, an AT&T researcher discovered a means of bypassing this monitoring using about 28 minutes of computation time on easily-available mass-market Tessera (now renamed to Fortezza[27][28][29]) cards. By precomputing the values or employing many such devices running in parallel, the time can be reduced to virtually nothing. This attack also opened up a number of other interesting possibilities for generally bypassing many of the perceived undesirable "features" of Clipper.

Footnote [21]: It has been claimed that the Clipper proposal is an example of the government servicing the people in the sense of the term used in the sentence "The farmer got a bull to service his cows".

Footnote [22]: In December 1994, the ANSI X9 committee rejected Clipper in favour of a triple-DES based security standard, for reasons summed up in a letter by Electronic Frontier Foundation board member John Gilmore:

"NSA's opposition to triple-DES appears to be an indirect attempt to push Clipper by eliminating credible alternatives. Clipper is not a viable alternative to triple-DES, and carries substantial liabilities. There has been no evidence of foreign acceptance of the standard and the Skipjack algorithm is classified. The likelihood of any government accepting secret standards developed by a foreign security agency is slim. Clinton Administration efforts, through the NSA, to push Clipper as a domestic standard over the past two years have failed".

Footnote [23]: In the December 1994 "Communications of the ACM", page 12, Mark Lomas and Michael Roe demonstrate an attack which takes advantage of the fact that the data to be encrypted is exclusive-ored with a pseudorandom key stream, so that by exclusive-oring the data back in, the original key stream can be recovered. At this point any arbitrary data can be exclusive-ored into the key stream to create a forged encrypted message. No knowledge of the encryption key is necessary in order to perform this attack.
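
The attack relies only on the malleability of exclusive-or based stream encryption, and can be sketched in a few lines of C. The keystream generator below is an arbitrary stand-in (the attacker never needs to know it); only the exclusive-or structure matters:

    /* Sketch of the keystream-recovery/forgery attack described above.  Any
       encryption of the form ciphertext = plaintext ^ keystream is malleable
       in this way; no knowledge of the key or keystream is needed. */
    #include <stdio.h>

    #define MSGLEN 15

    int main(void)
    {
        unsigned char keystream[MSGLEN], ciphertext[MSGLEN], forged[MSGLEN];
        const char *known  = "PAY ALICE $1000";    /* plaintext known to the attacker */
        const char *wanted = "PAY MALLET $900";    /* what the attacker wants instead */
        int i;

        /* Stand-in for the real keystream generator (never seen by the attacker) */
        for (i = 0; i < MSGLEN; i++)
            keystream[i] = (unsigned char) (i * 37 + 11);

        /* The intercepted, legitimately encrypted message */
        for (i = 0; i < MSGLEN; i++)
            ciphertext[i] = (unsigned char) (known[i] ^ keystream[i]);

        /* Exclusive-oring the known plaintext back in recovers the keystream;
           exclusive-oring new text into that produces a valid-looking forgery */
        for (i = 0; i < MSGLEN; i++)
            forged[i] = (unsigned char) ((ciphertext[i] ^ known[i]) ^ wanted[i]);

        /* The forgery decrypts to the attacker's chosen text */
        for (i = 0; i < MSGLEN; i++)
            putchar(forged[i] ^ keystream[i]);
        putchar('\n');
        return 0;
    }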

Footnote [24]: In June 1994, the statement that A5 was too strong to disclose was suddenly changed so that it now became too weak to disclose, and that discussing the details might harm export sales. This is an interesting contrast to the position taken in 1993 that sales to the Middle East might end up providing A5-capable equipment to the likes of Saddam Hussein. Apparently there was a major debate among the NATO signal agencies in the 1980's over whether the GSM encryption should be strong or weak, with the weak encryption side eventually winning.

Footnote [25]: It has been reported that GCHQ, the UK intelligence agency which requested the A5X changes, regards as "strong encryption" (and therefore not suitable for use by the public) anything which can't be broken in real time.

Footnote [26]: UK cryptographer Ross Anderson has characterised A5 as being "not much good". A simple brute-force attack which searches all 2^40 key combinations will break the system in about a week on a standard PC, with much faster attacks being possible using either better algorithms, custom hardware, or both. Interestingly, the low upper limit on the number of possible keys would also seem to meet the US government requirements for weak exportable encryption.

Attacks faster than the basic brute-force one are also possible, and one such attack was to be presented by Dr Simon Shepherd at an IEE colloquium in London on 3rd June 1994. However the talk was canceled at the last minute by GCHQ.

An A5 cracking chip is currently being designed by a student for his MSc thesis.

Footnote [27]: "Violins, the music says fortezza, not forte".

Footnote [28]: "Police have confirmed that the body found floating in the East River late last night belonged to Joey "No Ears" Fortezza, a well-known family man".

Footnote [29]: "Will sir be having wine with his spaghetti al fortezza?".

An Introduction to Encryption Systems

For space reasons the following introduction to encryption systems is very brief. Anyone requiring more in-depth coverage is urged to consult the texts mentioned in the references at the end of this document.

Encryption algorithms (ciphers) are generally one of two types, block ciphers and stream ciphers. A block cipher takes a block of plaintext and converts the entire block into ciphertext. A stream cipher takes a single bit or byte of plaintext at a time and converts it into ciphertext. There also exist means of converting block ciphers to stream ciphers, and vice versa. Usually a stream cipher is preferred, as it doesn't require data to be quantised to the block size of the cipher in use. Unfortunately, stream ciphers, although more convenient, are usually harder to get right than block ciphers. Many practical stream ciphers are in fact simply block ciphers pretending to be stream ciphers.

Virtually all good conventional-key ciphers are so-called product ciphers, in which several (relatively) weak transformations such as substitution, transposition, modular addition/multiplication, and linear transformation are iterated over a piece of data, gaining more and more strength with each iteration (usually referred to as a round). These types of ciphers have been extensively studied and are reasonably well understood. The following table compares the main parameters of several product ciphers. Lucifer is the immediate precursor to the US Data Encryption Standard (DES). Loki is a proposed alternative to DES. FEAL is a fast block cipher designed in Japan. IDEA is a relatively new Swiss block cipher which has been proposed as a successor to DES and which has (so far) proven more resistant to attack than DES. MDC/SHS is a cipher based on the SHS one-way hash function (more on this later).

   +-----------+------------+----------+-----------+---------------+
   |  Cipher   | Block size | Key size | Number of | Complexity of |
   |           |   (bits)   |  (bits)  |  rounds   |  Best Attack  |
   +-----------+------------+----------+-----------+---------------+
   |  Lucifer  |    128     |   128    |    16     |     2^21      |
   |  DES      |     64     |    56    |    16     |     2^43      |
   |  Loki91   |     64     |    64    |    16     |     2^48      |
   |  FEAL-8   |     64     |   128    |     8     |    10,000     |
   |  IDEA     |     64     |   128    |     8     |    2^128      |
   |  MDC/SHS  |    160     |   512    |    80     |    2^512      |
   +-----------+------------+----------+-----------+---------------+

The complexity of the best known attack is the number of operations necessary to allow the cipher to be broken. Note how the block size, key size, and number of rounds don't necessarily give a good indication of how secure the algorithm itself is. Lucifer, although it has twice the block size and over twice the key size of DES, is rather simple to break (the key size of DES is discussed a little further on in this section). DES is the result of several years of work on improvements to Lucifer. FEAL has been continually changed every year or so when the previous version was broken. Due to this, current versions are treated with some scepticism. Both IDEA and MDC have so far resisted all forms of attack, although recently a class of weak keys was discovered in IDEA; this problem was quickly corrected with a simple change to the key setup process. Note that in the case of the last two algorithms the given complexity is for a brute-force attack (explained below), which is the most pessimistic kind possible. There may be much better attacks available, although if anyone knows of any they're not saying anything. Of the algorithms listed above, DES has been attacked the hardest, and IDEA and MDC the least, which may go some way toward explaining the fact that brute force is the best known attack.
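
To make the structure of a product cipher concrete, the following toy routine iterates a key mix, a simple invertible "substitution", and a rotation over a 32-bit block. It is purely illustrative - it is not any of the ciphers in the table above, the round keys are simply assumed to be 32-bit subkeys derived from the user key, and it has no cryptographic strength of its own:

    /* Toy product cipher: a key mix, an invertible "substitution", and a
       rotation (transposition), iterated over a number of rounds.  Each step
       is weak on its own; a real product cipher relies on the iteration of
       carefully chosen transformations for its strength. */
    #define ROUNDS 16

    static unsigned long rotate_left(unsigned long x, int n)
    {
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFFUL;
    }

    unsigned long toy_encrypt_block(unsigned long block,
                                    const unsigned long round_key[ROUNDS])
    {
        int round;

        for (round = 0; round < ROUNDS; round++) {
            block ^= round_key[round];                           /* key mixing    */
            block = (block * 0x9E3779B9UL + 1) & 0xFFFFFFFFUL;   /* substitution  */
            block = rotate_left(block, 7);                       /* transposition */
        }
        return block;
    }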

There are a large number of modes of operation in which these block ciphers can be used. The simplest is the electronic codebook (ECB) mode, in which the data to be encrypted is broken up into separate subblocks which correspond to the size of the block cipher being used, and each subblock is encrypted individually. Unfortunately ECB has a number of weaknesses (some of which are outlined below), and should never be used in a well-designed cryptosystem. Using P[] to denote a plaintext block, C[] to denote a ciphertext block, e() to denote encryption, d() to denote decryption, and ^ for the exclusive-or operation, ECB mode encryption can be given as:

    C[ n ] = e( P[ n ] )

with decryption being:

    P[ n ] = d( C[ n ] )

When used to encrypt and decrypt multiple blocks of data, this looks as follows:

       Block 1            Block 2            Block 3
          |                  |                  |
          v                  v                  v
     +---------+        +---------+        +---------+
Key->| Encrypt |   Key->| Encrypt |   Key->| Encrypt |     Encryption
     +---------+        +---------+        +---------+
          |                  |                  |
          v                  v                  v
    Ciphertext 1       Ciphertext 2       Ciphertext 3
          |                  |                  |
          v                  v                  v
     +---------+        +---------+        +---------+
Key->| Decrypt |   Key->| Decrypt |   Key->| Decrypt |     Decryption
     +---------+        +---------+        +---------+
          |                  |                  |
          v                  v                  v
       Block 1            Block 2            Block 3

Since each block of data is encrypted separately, two identical blocks of plaintext will encrypt to give the same ciphertext, and encrypted blocks can be moved around or deleted without affecting the outcome of the decryption process.
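
In code, ECB-mode processing is nothing more than the following loop. The block_encrypt() and block_decrypt() routines stand for the encrypt and decrypt primitives of whatever block cipher is in use (they are not the actual SFS or DES routines), and the data length is assumed to be a multiple of the block size:

    /* Sketch of ECB-mode processing.  block_encrypt() and block_decrypt()
       stand for the encrypt and decrypt primitives of any block cipher; the
       data length is assumed to be a multiple of BLOCKSIZE. */
    #define BLOCKSIZE 8

    void block_encrypt(unsigned char *block, const unsigned char *key);
    void block_decrypt(unsigned char *block, const unsigned char *key);

    /* C[ n ] = e( P[ n ] ) : every block is handled independently, so two
       identical plaintext blocks give two identical ciphertext blocks */
    void ecb_encrypt(unsigned char *data, long length, const unsigned char *key)
    {
        long n;

        for (n = 0; n < length; n += BLOCKSIZE)
            block_encrypt(data + n, key);
    }

    /* P[ n ] = d( C[ n ] ) */
    void ecb_decrypt(unsigned char *data, long length, const unsigned char *key)
    {
        long n;

        for (n = 0; n < length; n += BLOCKSIZE)
            block_decrypt(data + n, key);
    }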

A better encryption mode is cipher block chaining (CBC), in which the first data subblock is exclusive-ored with an initialization vector (IV) and then encrypted. The resulting ciphertext is exclusive-ored with the next data subblock, and the result encrypted. This process is repeated until all the data has been encrypted. Because the ciphertext form of each subblock is a function of the IV and all preceding subblocks, many of the problems inherent in the ECB encryption mode are avoided. CBC-mode encryption is:

    C[ 1 ] = e( P[ 1 ] ^ IV ) 
C[ n ] = e( P[ n ] ^ C[ n-1 ] )

and decryption is:

    P[ 1 ] = d( C[ 1 ] ) ^ IV 
P[ n ] = d( C[ n ] ) ^ C[ n-1 ]

When used to encrypt and decrypt multiple blocks of data, this looks as follows:

  IV         Block 1              Block 2              Block 3
   |            |                    |                    |
   +---------> XOR      +---------> XOR      +---------> XOR
                |       |            |       |            |
                v       |            v       |            v
           +---------+  |        +---------+  |        +---------+
      Key->| Encrypt |  |   Key->| Encrypt |  |   Key->| Encrypt |   Encryption
           +---------+  |        +---------+  |        +---------+
                |       |            |       |            |
                +-------+            +-------+            |
                |                    |                    |
                v                    v                    v
          Ciphertext 1         Ciphertext 2         Ciphertext 3

  IV      Ciphertext 1         Ciphertext 2         Ciphertext 3
   |            |                    |                    |
   |            +-------+            +-------+            |
   |            |       |            |       |            |
   |            v       |            v       |            v
   |       +---------+  |        +---------+  |        +---------+
   |  Key->| Decrypt |  |   Key->| Decrypt |  |   Key->| Decrypt |   Decryption
   |       +---------+  |        +---------+  |        +---------+
   |            |       |            |       |            |
   +---------> XOR      +---------> XOR      +---------> XOR
                |                    |                    |
                v                    v                    v
             Block 1              Block 2              Block 3

Each block of data is now chained to the previous and next block, making it difficult to insert or switch blocks, and hiding the fact that different encrypted blocks may contain the same plaintext. By using a unique, random IV, otherwise identical messages can be encrypted to give different ciphertexts.
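
A corresponding sketch of CBC-mode processing, using the same hypothetical block_encrypt() and block_decrypt() primitives as the ECB sketch above, looks like this:

    /* Sketch of CBC-mode processing, using the same assumed block cipher
       primitives as the ECB sketch above. */
    #include <string.h>

    #define BLOCKSIZE 8

    void block_encrypt(unsigned char *block, const unsigned char *key);
    void block_decrypt(unsigned char *block, const unsigned char *key);

    static void xor_block(unsigned char *dest, const unsigned char *src)
    {
        int i;

        for (i = 0; i < BLOCKSIZE; i++)
            dest[i] ^= src[i];
    }

    /* C[ 1 ] = e( P[ 1 ] ^ IV ), C[ n ] = e( P[ n ] ^ C[ n-1 ] ) */
    void cbc_encrypt(unsigned char *data, long length,
                     const unsigned char *key, const unsigned char *iv)
    {
        unsigned char chain[BLOCKSIZE];
        long n;

        memcpy(chain, iv, BLOCKSIZE);
        for (n = 0; n < length; n += BLOCKSIZE) {
            xor_block(data + n, chain);          /* P[ n ] ^ C[ n-1 ] (or IV)    */
            block_encrypt(data + n, key);        /* encrypt in place             */
            memcpy(chain, data + n, BLOCKSIZE);  /* remember C[ n ] for chaining */
        }
    }

    /* P[ 1 ] = d( C[ 1 ] ) ^ IV, P[ n ] = d( C[ n ] ) ^ C[ n-1 ] */
    void cbc_decrypt(unsigned char *data, long length,
                     const unsigned char *key, const unsigned char *iv)
    {
        unsigned char chain[BLOCKSIZE], saved[BLOCKSIZE];
        long n;

        memcpy(chain, iv, BLOCKSIZE);
        for (n = 0; n < length; n += BLOCKSIZE) {
            memcpy(saved, data + n, BLOCKSIZE);  /* save C[ n ] before decrypting */
            block_decrypt(data + n, key);
            xor_block(data + n, chain);          /* ^ C[ n-1 ] (or IV)            */
            memcpy(chain, saved, BLOCKSIZE);
        }
    }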

Another encryption mode is cipher feedback (CFB), in which the IV is encrypted and then exclusive-ored with the first data subblock to provide the ciphertext. The resulting ciphertext is then encrypted and exclusive-ored with the next data subblock to provide the next ciphertext block. This process is repeated until all the data has been encrypted. Because the ciphertext form of each subblock is a function of the IV and all preceding subblocks (as is also the case for CBC-mode encryption), many of the problems inherent in the ECB encryption mode are avoided. CFB-mode encryption is:

    C[ 1 ] = P[ 1 ] ^ e( IV ) 
C[ n ] = P[ n ] ^ e( C[ n-1 ] )

and decryption is:

    P[ 1 ] = C[ 1 ] ^ e( IV ) 
P[ n ] = C[ n ] ^ e( C[ n-1 ] )

This encryption mode converts a block cipher into a stream cipher in which data is processed as a stream of bits or bytes rather than a collection of blocks:

       Encryption                              Decryption

      IV / C[ n-1 ]                           IV / C[ n-1 ]
            |                                       |
            v                                       v
       +---------+                             +---------+
  Key->| Encrypt |                        Key->| Encrypt |
       +---------+                             +---------+
            |                                       |
            v                                       v
 P[ n ] -> XOR -> C[ n ] ------------->  C[ n ] -> XOR -> P[ n ]

  (C[ n ] becomes the chaining value C[ n-1 ] for the following block on both sides)

One interesting feature of this mode of encryption is that there is no need for a decrypt primitive, since both CFB encryption and CFB decryption only make use of an encrypt primitive. This means that CFB encryption can be implemented with non-reversible functions such as the SHS algorithm used for MDC/SHS.
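
A sketch of block-at-a-time CFB processing, again using a hypothetical block_encrypt() primitive, shows that the same routine really does serve for both encryption and decryption:

    /* Sketch of block-at-a-time CFB-mode processing.  Note that only the
       block_encrypt() primitive is needed for both directions, which is what
       allows a one-way function such as SHS to be used. */
    #include <string.h>

    #define BLOCKSIZE 8

    void block_encrypt(unsigned char *block, const unsigned char *key);

    /* C[ 1 ] = P[ 1 ] ^ e( IV ), C[ n ] = P[ n ] ^ e( C[ n-1 ] ) */
    void cfb_encrypt(unsigned char *data, long length,
                     const unsigned char *key, const unsigned char *iv)
    {
        unsigned char feedback[BLOCKSIZE];
        long n;
        int i;

        memcpy(feedback, iv, BLOCKSIZE);
        for (n = 0; n < length; n += BLOCKSIZE) {
            block_encrypt(feedback, key);           /* e( IV ) or e( C[ n-1 ] )  */
            for (i = 0; i < BLOCKSIZE; i++)
                data[n + i] ^= feedback[i];         /* produces C[ n ] in place  */
            memcpy(feedback, data + n, BLOCKSIZE);  /* C[ n ] feeds the next block */
        }
    }

    /* P[ 1 ] = C[ 1 ] ^ e( IV ), P[ n ] = C[ n ] ^ e( C[ n-1 ] ) */
    void cfb_decrypt(unsigned char *data, long length,
                     const unsigned char *key, const unsigned char *iv)
    {
        unsigned char feedback[BLOCKSIZE], saved[BLOCKSIZE];
        long n;
        int i;

        memcpy(feedback, iv, BLOCKSIZE);
        for (n = 0; n < length; n += BLOCKSIZE) {
            memcpy(saved, data + n, BLOCKSIZE);     /* save C[ n ] for chaining  */
            block_encrypt(feedback, key);
            for (i = 0; i < BLOCKSIZE; i++)
                data[n + i] ^= feedback[i];         /* recovers P[ n ]           */
            memcpy(feedback, saved, BLOCKSIZE);
        }
    }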

There are several other modes of operation for block ciphers which are not covered here. More details can be found in the texts given in the references.

One point worth noting is that by using a different IV for each message in CBC and CFB mode, the ciphertext will be different each time, even if the same piece of data is encrypted with the same key. This can't be done in ECB mode, and is one of its many weaknesses.

There are several standard types of attack which can be performed on a cryptosystem. The most restricted of these is a ciphertext-only attack, in which the contents of the message are unknown. This kind of attack virtually never occurs, as there is usually some regularity or known data in the message which can be exploited by an attacker.

This leads to the next kind of attack, the known-plaintext attack. In this case some (or all) of the plaintext of the message is known. This type of attack is a lot more common than it would seem, since much data consists of well-known, fixed-format messages containing standard headers, a fixed layout, or data conforming to a certain probability distribution such as that of ASCII text.

Finally, in a chosen-plaintext attack the attacker is able to select plaintext and obtain the corresponding ciphertext. This attack, which involves fooling the victim into transmitting a message or encrypting a piece of data chosen by the attacker, is somewhat more tricky to mount, but can often be performed with a little work. For example, it was used to help break the Japanese "Purple" cipher during WWII by including in a news release a certain piece of information which it was known the Japanese would encrypt and transmit to their superiors.

However attacks of this kind are usually entirely unnecessary. Too many cryptosystems in everyday use today are very easy to break, either because the algorithms themselves are weak, because the implementations are incorrect, or because the way they are used is incorrect. Often amateurs think they can design secure systems, and are not aware of what an expert cryptanalyst could do. Sometimes there is insufficient motivation for anybody to invest the work needed to produce a secure system. Many implementations contain flaws which aren't immediately obvious to a non-expert. Some of the possible problems include:

  • Use of easily-searched keyspaces. Some algorithms depend for their security on the fact that a search of all possible encryption keys (a so-called brute force attack) would take too long to be practical. Or at least, it took too long to be practical when the algorithm was designed. The Unix password encryption algorithm is a prime example of this. The DES key space is another example. Recent research has indicated that the DES was not in fact weakened by having only 56 bits of key material (as has often been claimed), since the inherent strength of the algorithm itself only provides this many bits of security (that is, that increasing the key size would have no effect since other attacks which don't involve knowing the key can be used to break the encryption in far less than 2^56 operations). The encryption used in the Pkzip archiver can usually be broken automatically in less time than it takes to type the password in for authorized access to the data since, although it allows longer keys than DES, it makes the check for valid decryption keys exceedingly easy for an attacker.
  • Use of insecure algorithms designed by amateurs. This covers the algorithms used in the majority of commercial database, spreadsheet, wordprocessing, and miscellaneous utility programs such as Ami Pro, Arc, Arj, Lotus 123, Lotus Symphony, Microsoft Excel, Microsoft Word, Novell Netware, Paradox, Pkzip, Quattro Pro, WordPerfect, and many others. These systems are so simple to break that the author of at least one package which does so added several delay loops to his code simply to make it look as if there was actually some work involved. In addition, some programs which claim to use encryption or some related form of security for protection rely instead on a simple password check which can be easily bypassed. Examples of this are Fastback, Microsoft Money, Norton's partition security system, Quicken, and Stacker.
  • Use of insecure algorithms designed by experts. An example is the standard Unix crypt command, which is an implementation of a rotor machine much like the German Enigma cipher which was broken during WWII. There is a program called cbw (for `crypt breakers workbench') which can automatically decrypt data encrypted with crypt[1]. After the war, the US government even sold Enigma cipher machines to third-world governments without telling them that they knew how to break this form of encryption.
  • Use of incorrectly-implemented algorithms. Some encryption programs use the DES algorithm, which consists of a series of complicated and arbitrary-seeming bit transformations controlled by complex lookup tables. These transformations and tables are very easy to get wrong.

It is a property of the DES algorithm that even the slightest deviation from the correct implementation significantly weakens the algorithm itself. In other words any implementation which doesn't conform 100% to the standard may encrypt and decrypt data perfectly, but is in practice rather easier to break than the real thing.

The US National Bureau of Standards (now the National Institute of Standards and Technology) provides a reference standard for DES encryption. A disappointingly large number of commercial implementations fail this test.

- Use of badly-implemented algorithms. This is another problem which besets many DES implementations. DES can be used in several modes of operation, some of them better than others. The simplest of these is the Electronic Codebook (ECB) mode, in which a block of data is broken up into separate subblocks which correspond to the unit of data which DES can encrypt or decrypt in one operation, and each subblock is then encrypted separately.

There also exist other modes such as CBC in which one block of encrypted data is chained to the next (such that the ciphertext block n depends not only on the corresponding plaintext but also on all preceding ciphertext blocks 0..n-1), and CFB, which is a means of converting a block cipher to a stream cipher with similar chaining properties.

There are several forms of attack which can be used when an encrypted message consists of a large number of completely independent message blocks. It is often possible to identify by inspection repeated blocks of data, which may correspond to patterns like long strings of spaces in text. This can be used as the basis for a known-plaintext attack.

ECB mode is also open to so-called message modification attacks. Let's assume that Bob asks his bank to deposit $10,000 in account number 12-3456-789012-3. The bank encrypts the message `Deposit $10,000 in account number 12-3456-789012-3' and sends it to its central office. Encrypted in ECB mode this looks as follows:

    E( Deposit $10,000 in acct. number 12-3456-789012-3 )

Bob intercepts this message, and records it. The encrypted message looks as follows:

    H+2nx/GHEKgvldSbqGQHbrUfotYFtUk6gS4CpMIuH7e2MPZCe

Later on in the day, he intercepts the following message:

    H+2nx/GHEKgvldSbqGQHbrUfotYFtUk61Pts2LtOHa8oaNWpj

Since each block of text is completely independent of any surrounding block, he can simply insert the blocks corresponding to his account number:

    ................................gS4CpMIuH7e2MPZCe

in place of the existing blocks, and thus alter the encrypted data without any knowledge of the encryption key used. Bob has since gone on to early retirement in his new Hawaiian villa.
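
The splice itself requires nothing more than copying whole ciphertext blocks from one recorded message into another; the block size, offsets, and routine name in the following sketch are made up for illustration:

    /* Sketch of the ECB cut-and-paste attack described above.  No decryption
       and no knowledge of the key is needed; whole ciphertext blocks are
       simply copied from one message into another. */
    #include <string.h>

    #define BLOCKSIZE 8

    void splice_account_blocks(unsigned char *intercepted_msg,     /* victim's message    */
                               const unsigned char *recorded_msg,  /* Bob's own deposit   */
                               long acct_offset,                   /* block-aligned start */
                               long acct_length)                   /* whole blocks only   */
    {
        /* Both messages were encrypted in ECB mode under the same key, so the
           blocks holding Bob's account number decrypt to the same plaintext
           wherever they appear. */
        memcpy(intercepted_msg + acct_offset, recorded_msg + acct_offset, acct_length);
    }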

ECB mode, and the more secure modes such as CBC and CFB are described in several standards. Some of these standards make a reference to the insecurity of ECB mode, and recommend the use of the stronger CBC or CFB modes. Usually implementors stop reading at the section on ECB, with the result being that a number of commercial packages which use DES and which do manage to get it correct end up using it in ECB mode.

- Protocol errors. Even if a more secure encryption mode such as CBC or CFB mode is used, there can still be problems. If a standard message format (such as the one shown above) is used, modification is still possible, except that now instead of changing individual parts of a message the entire message must be altered, since each piece of data is dependent on all previous parts. This can be avoided by prepending a random initialisation vector (IV) to each message, which propagates through the message itself to generate a completely different ciphertext each time the same message is encrypted. The use of the salt in Unix password encryption is an example of an IV, although the range of only 4096 values is too small to provide real security.

In some ways, cryptography is like pharmaceuticals. Its integrity is important. Bad penicillin looks just the same as good penicillin. Determining whether most software is correct or not is simple - just look at the output. However the ciphertext produced by a weak encryption algorithm looks no different from the ciphertext produced by a strong algorithm.... until an opponent starts using your supposedly secure data against you, or you find your money transfers are ending up in an account somewhere in Switzerland, or financing Hawaiian villas.

Footnote [1]: Available from ftp.ox.ac.uk in the directory /src/security as cbw.tar.Z.

Security Analysis

This section attempts to analyse some of the possible failure modes of SFS and indicate which strategies have been used to minimise problems.


Incorrect Encryption Algorithm Implementations

When implementing something as complex as most encryption algorithms, it is very simple to make a minor mistake which greatly weakens the implementation. For example, making even the smallest change to the DES algorithm reduces its strength considerably. There is a body of test data available as US National Bureau of Standards (NBS, now NIST) special publication 500-20 which can be used to validate DES implementations. Unfortunately the programmers who produce many commercial DES implementations either don't know it exists or have never bothered using it to verify their code (see the section "Other Software" above), leading to the distribution of programs which perform some sort of encryption which is probably quite close to DES but which nevertheless has none of the security of the real DES.

In order to avoid this problem, the SHS code used in SFS has a self-test feature which you can use to check that it conforms with the data given in Federal Information Processing Standards (FIPS) publication 180 and ANSI X9.30 part 2, which are the specifications for the SHS algorithm[1]. You can invoke the self-test in the mksfs program by using the `-t' test option:

    mksfs -t

The other SFS programs and the SFS driver itself use exactly the same code, so testing it in mksfs is enough to ensure correctness of the other two programs. The following tests can take a minute or so to run to completion.

The self-test, when run, produces the following output:

  Running SHS test 1 ... passed, result=0164B8A914CD2A5E74C4F7FF082C4D97F1EDF880
  Running SHS test 2 ... passed, result=D2516EE1ACFA5BAF33DFC1C471E438449EF134C8
  Running SHS test 3 ... passed, result=3232AFFA48628A26653B5AAA44541FD90D690603

You can compare the test results with the values given in Appendix 1 of the FIPS publication for correctness (mksfs will also print an error message if any of the tests fail). If the SHS implementation passes the self-test, mksfs will perform a speed test and display a message along the lines of:

  Testing speed for 10MB data... done.  Time = 31 seconds, 323 kbytes/second

  All SHS tests passed

Note that the speed given in this test is only vaguely representative of the actual speed, as the test code used slows the implementation down somewhat. In practice the overall speed should be higher than the figure given, while the actual disk encryption speed will be lower due to the extra overhead of disk reading and writing.
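
The principle behind such a self-test is simply a known-answer comparison against published test vectors. The following sketch shows the idea; shs_hash() is a hypothetical one-shot hashing routine rather than the actual SFS interface, the expected values are the ones quoted above, and the inputs are assumed to be the example messages given in the FIPS 180 appendix:

    /* Sketch of a known-answer self-test against the FIPS 180 test values
       quoted above.  shs_hash() is a hypothetical one-shot hashing routine
       (to be supplied by the SHS implementation under test), not the actual
       SFS interface. */
    #include <stdio.h>
    #include <string.h>

    void shs_hash(const unsigned char *data, unsigned long length,
                  unsigned char digest[20]);

    static int check(const unsigned char *data, unsigned long length,
                     const char *expected_hex)
    {
        unsigned char digest[20];
        char hex[41];
        int i;

        shs_hash(data, length, digest);
        for (i = 0; i < 20; i++)
            sprintf(hex + 2 * i, "%02X", digest[i]);
        return strcmp(hex, expected_hex) == 0;
    }

    int main(void)
    {
        /* The first two FIPS 180 example messages; the third test vector (one
           million repetitions of 'a') is omitted here for brevity. */
        if (!check((const unsigned char *) "abc", 3,
                   "0164B8A914CD2A5E74C4F7FF082C4D97F1EDF880") ||
            !check((const unsigned char *)
                   "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq", 56,
                   "D2516EE1ACFA5BAF33DFC1C471E438449EF134C8")) {
            printf("SHS self-test FAILED\n");
            return 1;
        }
        printf("All SHS tests passed\n");
        return 0;
    }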

Footnote [1]: The FIPS 180 publication is available from the National Technical Information Service, Springfield, Virginia 22161, for $22.50 + $3 shipping and handling within the US. NTIS will take telephone orders on +1 (703) 487-4650 (8:30 AM to 5:30 PM Eastern Time), fax +1 (703) 321-8547. For assistance, call +1 (703) 487-4679.


Weak Passwords

Despite the best efforts of security specialists to educate users about the need to choose good keys, people persist in using very weak passwords to protect critical data. SFS attempts to ameliorate this problem by forcing a minimum key length of 10 characters and making a few simple checks for insecure passwords such as single-word passwords (since the number of words of length 10 or more characters is rather small, a single word would allow a very fast dictionary check on all possible values). The checking is only rudimentary, but in combination with the minimum password length should succeed in weeding out most weak passwords.

Another possible method to improve password security is to force a key to contain at least one punctuation character or at least one digit, as some Unix systems do. Unfortunately this tends to encourage people to simply append a token punctuation character or a fixed digit to the end of an existing key, with little increase in security.
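
The kind of rudimentary checking described above can be sketched as follows; the exact rules used by SFS may differ:

    /* Sketch of a minimum-length and single-word password check of the kind
       described above.  This only illustrates the idea; the actual SFS
       checks may differ. */
    #include <ctype.h>
    #include <string.h>

    #define MIN_KEY_LENGTH 10

    /* Returns 1 if the password passes the rudimentary checks, 0 if not */
    int password_acceptable(const char *password)
    {
        size_t i, length = strlen(password);

        if (length < MIN_KEY_LENGTH)
            return 0;                       /* too short */

        /* Reject passwords consisting of a single alphabetic word, since the
           number of words of 10 or more letters is small enough to allow a
           fast dictionary check on all of them */
        for (i = 0; i < length; i++)
            if (!isalpha((unsigned char) password[i]))
                return 1;                   /* contains a space, digit, etc. */

        return 0;                           /* one long word: reject */
    }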

More password issues are discussed in the section "The Care and Feeding of Passwords" above.


Data left in program memory by SFS programs

Various SFS utilities make use of critical keying information which can be used to access an SFS volume. Great care has been taken to ensure that all critical information stored by these programs is erased from memory at the earliest possible moment. All encryption-related information is stored in static buffers which are accessed through pointers passed to various routines, and is overwritten as soon as it is no longer needed. All programs take great care to acquire keying information from the user at the last possible moment, and destroy this information as soon as either the disk volume has been encrypted or the keying information has been passed to the SFS driver. In addition, they install default exit handlers on startup which systematically erase all data areas used by the program, both for normal exits and for error or special-condition exits such as the user interrupting the program's execution.
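
The general technique can be sketched as follows; this illustrates the approach rather than the actual SFS source:

    /* Sketch of the erase-on-exit technique described above; an illustration
       of the general approach, not the actual SFS code. */
    #include <stdlib.h>

    #define KEYBUF_SIZE 2048

    /* All keying material lives in one static buffer so that a single routine
       can erase every copy of it. */
    static unsigned char key_buffer[KEYBUF_SIZE];

    static void wipe_keys(void)
    {
        /* The volatile pointer discourages the compiler from optimising the
           "useless" writes away. */
        volatile unsigned char *p = key_buffer;
        size_t i;

        for (i = 0; i < KEYBUF_SIZE; i++)
            p[i] = 0;
    }

    int main(void)
    {
        atexit(wipe_keys);      /* default exit handler: wipe on any exit path */

        /* ... acquire the password at the last possible moment, derive keying
           information into key_buffer, and pass it to the driver ... */

        wipe_keys();            /* then erase it again as soon as possible */
        return 0;
    }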


Data left in program memory by the SFS driver

The SFS driver, in order to transparently encrypt and decrypt a volume, must at all times store the keying information needed to encrypt and decrypt the volume. It is essential that this information be destroyed as soon as the encrypted volume is unmounted. SFS does this by erasing all encryption-related information held by the driver as soon as it receives an unmount command. In addition, the driver's use of a unique disk key for each disk ensures that even if a complete running SFS system is captured by an opponent, only the keys for the volumes currently mounted will be compromised, even if several volumes are encrypted with the same user password (see the section "Password Lifetimes and Scope" below for more details on this).


Data left in system buffers by mksfs

mksfs must, when encrypting a volume, read and write data via standard system disk access routines. This data, consisting of raw disk sectors in either plaintext or encrypted form, can linger inside system buffers and operating system or hard disk caches for some time afterwards. However since none of the information is critical (the plaintext was available anyway moments before mksfs was run, and at worst knowledge of the plaintext form of a disk sector leads to a known plaintext attack, which is possible anyway with the highly regular disk layout used by most operating systems), and since accessing any of this information in order to erase it is highly nontrivial, this is not regarded as a weakness.


Data left in system buffers by mountsfs

As part of its normal operation, mountsfs must pass certain keying information to the SFS driver through a DOS system call. DOS itself does not copy the information, but simply passes a pointer to it to the SFS driver. After the driver has been initialised, mountsfs destroys the information as outlined above. This is the only time any keying information is passed outside the control of mountsfs, and the value is only passed by reference.


Data left in system buffers by the SFS driver

Like mksfs, the SFS driver reads and writes data via standard system disk access routines. This data, consisting of raw disk sectors in either plaintext or encrypted form, can linger inside system buffers and operating system or hard disk caches for some time afterwards. Once the encrypted volume is unmounted, it is essential that any plaintext information still held in system buffers be destroyed.

In order to accomplish this, mountsfs, when issuing an unmount command, performs two actions intended to destroy any buffered information. First, it issues a cache flush command followed by a cache reset command to any DOS drive cacheing software it recognizes, such as older versions of Microsoft's SmartDrive (with IOCTL commands), newer versions of SmartDrive (version 4.0 and up), the PCTools 5.x cache, Central Point Software's PC-Cache 6.0 and up, the more recent PC-Cache 8.x, Norton Utilities' NCache-F, NCache-S, and the newer NCache 6.0 and up, Super PC-Kwik cache 3.0 and up, QuickCache II 4.20 and up, Qualitas' QCache 4.0 and up, and Future Computing Systems Fast! 4.02 and up. Some other cacheing software can be detected but doesn't support external cache flushing. This step is handled by mountsfs rather than the driver due to the rather complex nature of the procedures necessary to handle the large variety of cacheing software, and the fact that most cacheing software can't be accessed from a device driver.

After this the SFS driver itself issues a disk reset command which has the effect of flushing all buffered and cached data scheduled to be written to a disk, and of marking those cache and buffer entries as being available for immediate use. In addition to the explicit flushing performed by the mountsfs program, many cacheing programs will recognise this as a signal to flush their internal buffers (quite apart from the automatic flushing performed by the operating system and the drive controller).

Any subsequent disk accesses will therefore overwrite any data still held in the cache and system buffers. While this does not provide a complete guarantee that the data has gone (some software disk caches will only support a cache flush, not a complete cache reset), it is the best which can be achieved without using highly hardware and system-specific code.


SFS volumes left mounted

It is possible that an SFS volume may be unintentionally left mounted on an unattended system, allowing free access to both the in-memory keying information and the encrypted volume. In order to lessen this problem somewhat, the SFS driver provides a fast-unmount hotkey which allows an unmount command to be issued instantly from the keyboard (see the sections "Mounting an SFS Volume" and "Advanced SFS Driver Options" above). The ease of use of this command (a single keystroke) is intended to encourage the unmounting of encrypted volumes as soon as they are no longer needed, rather than whenever the system is next powered down. The driver also provides an automatic timed-unmount facility to unmount volumes after they have been left idle for a predefined amount of time, so that volumes mistakenly left mounted while the system is unattended may be automatically unmounted. This ensures that, when the inevitable distractions occur, encrypted volumes are safely unmounted at some point rather than being left indefinitely accessible to anyone with access to the system. Finally, the driver provides the ability to unmount volumes when the smart card which controls them is removed from the card reader.

As an extra precaution, the driver's use of a unique disk key for each disk ensures that even if a complete running SFS system with encrypted volumes still mounted is captured by an opponent, only the keys for the volumes currently mounted will be compromised, even if several volumes are encrypted with the same user password (more information on this is given in the section "Password Lifetimes and Scope" below).


Password Lifetimes and Scope

An SFS password which is used over a long period of time makes a very tempting target for an attacker. The longer a particular password is used, the greater the chance that it has been compromised. A password used for a year has a far greater chance of being compromised than one used for a day. If you use a password over a long period of time, the temptation for an attacker to spend the effort necessary to break it is far greater than if the password is only a short-term one.

The scope of a password is also important. If you use a given password to encrypt a single drive containing business correspondence, its compromise is only mildly serious. If you use it to protect dozens of disk volumes or a large file server holding considerable amounts of confidential information, the compromise of the password could be devastating. Again, the temptation to attack the master password for an entire file server is far greater than for a password protecting data contained on a single floppy disk.

SFS approaches this problem in two ways. First, it uses unique disk keys to protect each SFS volume. The disk key is a 1024-bit cryptographically strong random key generated from nondeterministic values acquired from system hardware, system software, and user input (see the subsection "Generating Random Numbers" below). The data on each disk volume is encrypted using this unique disk key, which is stored in encrypted form in the SFS volume header (this works somewhat like session keys in PEM or PGP, except that a conventional-key algorithm is used instead of a public-key one). To access the disk, the encrypted disk key is first decrypted with the user password or key, and the disk key itself is used to access the disk. If the same user key were being used to access three encrypted volumes, it would look like this:

 +----------+  decrypt   +------------+  decrypt   +----------------+
 | User Key |----+------>| Disk Key 1 |----------->| Encrypted Data |   Volume 1
 +----------+    |       +------------+            +----------------+
                 |
                 |       +------------+  decrypt   +----------------+
                 +------>| Disk Key 2 |----------->| Encrypted Data |   Volume 2
                 |       +------------+            +----------------+
                 |
                 |       +------------+  decrypt   +----------------+
                 +------>| Disk Key 3 |----------->| Encrypted Data |   Volume 3
                         +------------+            +----------------+

This scheme denies an attacker any known plaintext to work with (as the plaintext consists of the random disk key). To check whether a given password is valid, an attacker must use it to decrypt the disk key, rekey the encryption system with the decrypted disk key, and try to decrypt the disk data. Only then will they know whether their password guess was correct or not. This moves the target of an attack from a (possibly simple) user password to a 1024-bit random disk key[1]. In the above example, an attacker who had somehow broken the encryption to recover Disk Key 1 would still have no clue as to the identities of Disk Key 2 and Disk Key 3, even though all three volumes use the same user password.
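
In outline, mounting a volume therefore works something like the following sketch, which uses hypothetical routine names rather than the actual SFS interfaces:

    /* Sketch of the two-level keying scheme described above, using
       hypothetical routine names rather than the actual SFS interfaces. */
    #include <string.h>

    #define DISK_KEY_SIZE 128      /* the 1024-bit random disk key */

    /* Assumed primitives: key setup and in-place decryption for the
       conventional-key cipher in use. */
    void cipher_set_key(const unsigned char *key, unsigned long length);
    void cipher_decrypt(unsigned char *data, unsigned long length);

    /* The user key is only ever used to decrypt the disk key stored in the
       volume header; the recovered disk key is then used for all further
       decryption of the volume itself. */
    void mount_volume(const unsigned char *user_key, unsigned long user_key_len,
                      const unsigned char *encrypted_disk_key, /* from volume header */
                      unsigned char *disk_key)                 /* out: 128 bytes     */
    {
        memcpy(disk_key, encrypted_disk_key, DISK_KEY_SIZE);

        cipher_set_key(user_key, user_key_len);
        cipher_decrypt(disk_key, DISK_KEY_SIZE);   /* recover the random disk key   */

        cipher_set_key(disk_key, DISK_KEY_SIZE);   /* rekey with the disk key; all
                                                      volume decryption now uses it */

        /* An attacker guessing user keys must repeat all of the above and then
           test the result against encrypted sector data, since the disk key
           itself is random and provides no known plaintext to check against. */
    }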

The other way in which SFS tries to ameliorate the problem of password lifetimes and scope is by making the changing of a password a very simple operation. Since the only thing which needs to be changed when a password change is made is the encryption of the disk key, you can change the password in a matter of seconds rather than the many minutes it would take to decrypt and re-encrypt an entire disk. Hopefully the ease with which passwords can be changed will encourage you to change passwords frequently.

Footnote [1]: SFS may not use the entire 1024 bits - the exact usage depends on the encryption algorithm being used.


Trojan Horses

The general problem of trojan horses is discussed in the section "Data Security" above. In general, by planting a program in the target machine which monitors your password as it is entered or your data as it is read from or written to disk, an attacker can spare themselves the effort of attacking the encryption system itself. In an attempt to remove the threat of password interception, SFS takes direct control of the keyboard and various other pieces of system hardware which may be used to help intercept keyboard access, making it significantly more difficult for any monitoring software to detect passwords as they are entered. If any keystroke recording or monitoring routines are running, the SFS password entry process is entirely invisible to them. This feature has been tested by a computer security firm who examined SFS, and found it was the only program they had encountered (including one rated as secure for military use) which withstood this form of attack.

Similarly, the fast disk access modes used by SFS can be used to bypass any monitoring software, as SFS takes direct control of the drive controller hardware rather than using the easily-intercepted BIOS or DOS disk access routines.

Although these measures can still be worked around by an attacker, the methods required are by no means trivial and will probably be highly system-dependent, making a general encryption-defeating trojan horse difficult to achieve.
