Tuesday, May 15, 2012

Computer Security: Public Key Cryptography



Two main threats to the security of your information come from the following:

  • unauthorised access to files and programs stored on your computer through network connections; and
  • interception of information you are sending from your computer to others.

The first may involve malicious code (programs) that you download whilst browsing the web. This download could be deliberate, because you believe the code is from a trusted source (such as Apple or Microsoft), or unintentional, such as code hidden in web pages or email attachments. The second threat arises when you send files to others as attachments to emails or, perhaps more significantly, when you log in to remote servers. If your username and password are intercepted while logging in to a remote server, there is a risk that an unknown third party may access your data by posing as you.

The following sections look at basic technical approaches to dealing with these two threats. There are other threats which are not covered here, including how your information is managed by organisations that have obtained it, and how information is collected based on your patterns of usage. This post looks only at methods of restricting access to your system and your communications, not at what others may do with your information once they have access to it. The primary technical approach considered here is the use of digital signatures and certificates to identify organisations and parties you trust, and as a means of keeping your communications secret when they are sent across a publicly accessible network such as the internet. The material below is drawn from the author’s own knowledge accumulated from various sources over the years; however, for those who are interested in reading further, a bibliography is provided.

Digital Signatures and Certificates

Digital signatures and certificates are used when we need to know whether some information or computer code has come from a trusted source. They are also used when we want our own communication across the internet to be kept secret, for example, when logging in to internet banking. So how can we make sure that some code we download from the web is safe to run on our computer without installing a virus, damaging our files or secretly stealing our private information? One method is for programmers to sign their code, using a form of encryption known as public key cryptography to generate a digital signature. Public key cryptography uses a mathematically related pair of keys: a private key and a public key. The private key is held by the code producer and never revealed to another party, while the public key is freely distributed to anyone using the code.
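As a rough illustration, the minimal Python sketch below generates such a key pair, assuming a recent version of the third-party 'cryptography' package is installed; the key size and export format are just example choices, not recommendations.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the key pair; the private key is kept secret by the code producer.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The matching public key is exported (here as PEM text) for free distribution.
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    print(public_pem.decode())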

Anything encrypted with the private key can be decrypted, and therefore read, by anyone holding the public key; this is what allows a signature to be verified. Reading signatures and verifying who created code is usually managed by applications without the user’s involvement (for example, by web browsers such as Mozilla Firefox). The public key can also be used to encrypt messages which can then only be read using the private key. This is the method used when sending your username and password to internet banking sites.

Cryptography based on a single shared key (rather than a private and public pair) is referred to here as secret key cryptography or symmetric cryptography. Public/private key cryptography is referred to as asymmetric cryptography.



How can you then be sure that I am who I say I am? One way is to check that the public key I sent you is in fact my public key. Since no one else can produce messages that decrypt correctly with my public key, this verification should be sufficient.

Third-party organisations are set up to issue digital certificates attesting that the holder of a public key is in fact who they say they are. These certificates are usually signed with the certifying organisation’s own private key so that the certificate itself can be verified as genuine.

Programs such as the Java runtime and web-browsers maintain a local database of trusted code signers along with information on who each signer is and what verification they have of their key. Each entity from which you are receiving code or encrypted information has an identity created in this database which indicates whether they are a trusted or untrusted signer. 

The database allows for you (or programs you use) to record details of that entity, such as their organisation name, their public encryption key, the algorithm used by that signer and the third party which authorised that key. 

Associated with each key is a digital certificate which authorises that key for the entity; this ensures that the key has been verified as belonging to the signer. These certificates can be provided by a third party or created yourself for signers you trust. As mentioned, the certificate is usually itself digitally signed with the key of the third party issuing it.

Once you have recorded an entity's public key and associated a verifying certificate with it, you can determine whether files signed by that entity come from a trusted source.

The files are signed by incorporating the output of a hashing algorithm, which produces a digest (a kind of checksum) based on the contents of the file (for example, a Java jar file) being signed. If you can decrypt the signature associated with the file, you can then check that the code has not been changed by comparing the decrypted digest with the output of the same hashing algorithm run over the file on your machine. This process is illustrated in Figure 1 below. The code provider wants clients to be able to check who produced and sent the code. The provider produces a digest from the code using a hashing algorithm. This digest is then encrypted with the private key and sent with the code. The client already has the public key (or can easily download it) and uses this to decrypt the digest. The client then uses the same hashing algorithm as the sender to produce the digest locally. If this local digest matches the decrypted digest, the receiver knows that the code was sent by the owner of the private key. As a bonus, the receiver can also be sure the code was not changed or damaged in transit.



Figure 1: Signing code to be sent across the internet.
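To make the process in Figure 1 concrete, here is a hedged Python sketch using the third-party 'cryptography' package; the key pair and the sample code bytes are illustrative only, and the library's sign() and verify() calls handle the digest step internally rather than spelling it out.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The code provider's key pair; the public key would be distributed to clients.
    provider_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    code = b"print('some downloadable program')"  # stand-in for the real code or jar file

    # Provider side: sign() hashes the code and signs the resulting digest with the private key.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
    signature = provider_key.sign(code, pss, hashes.SHA256())

    # Client side: verify() recomputes the digest locally and checks it against the signature.
    try:
        provider_key.public_key().verify(signature, code, pss, hashes.SHA256())
        print("signature valid: code came from the key holder and was not altered")
    except InvalidSignature:
        print("signature invalid: code was altered or signed by someone else")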

Figure 2 shows the process of encoding and sending a message (as a file) using a symmetric key. The message is passed to an encryption algorithm (program) with a key (a number) that produces an encrypted version of the file. The file is then sent across the network to the receiver. The receiver uses the same algorithm (in reverse) and the same key to decrypt the received file into its original form. Symmetric encryption algorithms are desirable because they can produce encoded messages that are difficult to ‘break’ (i.e. decode without knowing the key) whilst not slowing the computer down too much. The biggest problem with symmetric keys is that you need some way of sharing the key without others being able to read it. One solution to this problem is to use asymmetric encryption, which we look at next.



Figure 2: Symmetric Encryption.
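A minimal sketch of Figure 2 in Python follows, assuming the third-party 'cryptography' package; Fernet is used simply as a convenient symmetric cipher for the example, not necessarily the algorithm a real system would choose, and the message text is made up.

    from cryptography.fernet import Fernet

    # Sender and receiver must already share this secret key somehow.
    key = Fernet.generate_key()

    # Sender: encrypt the message with the shared key before it crosses the network.
    ciphertext = Fernet(key).encrypt(b"the meeting is at noon")

    # Receiver: the same key and algorithm recover the original message.
    print(Fernet(key).decrypt(ciphertext))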

Asymmetric encryption is similar to symmetric encryption, except that it uses special algorithms that allow the use of two mathematically related keys. The relationship is such that data encrypted using the public key can only be decrypted using the private key (and vice versa). This is depicted in Figure 3 below. However, asymmetric encryption and decryption are very resource intensive and could cause serious delays on servers dealing with hundreds or thousands of clients over a short period.



Figure 3: Asymmetric encryption.
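The same idea in code, again as a hedged sketch with the third-party 'cryptography' package; RSA with OAEP padding stands in for whatever asymmetric algorithm a real system would use, and the message is illustrative.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The receiver's key pair; only the receiver holds the private half.
    receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    receiver_public = receiver_private.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone holding the freely distributed public key can encrypt...
    ciphertext = receiver_public.encrypt(b"username=alice password=hunter2", oaep)

    # ...but only the holder of the private key can decrypt.
    print(receiver_private.decrypt(ciphertext, oaep))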

The fact that public/private keys also work in reverse solves the main problem of symmetric communication, i.e. how to share a symmetric key secretly. The client encodes a symmetric key using the server’s public key and provides the server with this before communication starts. This key can only be read by the server, which then uses it to establish communication under a cheaper symmetric encryption arrangement. This process is depicted as Model 1 in Figure 4.


Figure 4: Private-public key communication.
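A sketch of Model 1 follows, assuming the third-party 'cryptography' package; the server's RSA key pair, the Fernet session key and the messages are illustrative stand-ins for what a real protocol (such as SSL/TLS) negotiates.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Server's long-lived key pair; the public half is assumed already known to the client.
    server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Client: choose a fresh symmetric session key and wrap it with the server's public key.
    session_key = Fernet.generate_key()
    wrapped_key = server_private.public_key().encrypt(session_key, oaep)

    # Server: unwrap with the private key; both sides now share the cheap symmetric key.
    recovered_key = server_private.decrypt(wrapped_key, oaep)
    reply = Fernet(recovered_key).encrypt(b"symmetric channel established")
    print(Fernet(session_key).decrypt(reply))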

An alternative model is to use a third-party server such as Kerberos. This process is shown as Model 2 in Figure 5. It relies on a trusted third party (the Kerberos server) holding a copy of each entity’s long-term secret key. When two or more entities want to communicate secretly, one of them asks the Kerberos server to create a session key they can all share just for this communication session. The Kerberos server creates the session key, makes a copy for each participant, and encodes each copy using that participant’s stored entity key. The encrypted copies are then distributed to the communicating entities. Each entity decrypts the Kerberos message using its own symmetric key to obtain the shared session key.


Figure 5: Kerberos Communication.
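The following toy sketch mimics that flow in Python (again assuming the third-party 'cryptography' package). It is a deliberate simplification of real Kerberos, which also involves tickets, timestamps and authentication steps; the entity names and keys here are made up for illustration.

    from cryptography.fernet import Fernet

    # Long-term entity keys that the trusted server shares with each registered party.
    entity_keys = {"alice": Fernet.generate_key(), "bob": Fernet.generate_key()}

    def issue_session_key(participants):
        """Create one fresh session key, wrapped under each participant's entity key."""
        session_key = Fernet.generate_key()
        return {name: Fernet(entity_keys[name]).encrypt(session_key) for name in participants}

    tickets = issue_session_key(["alice", "bob"])

    # Each entity unwraps its own copy using its long-term key...
    alice_session = Fernet(entity_keys["alice"]).decrypt(tickets["alice"])
    bob_session = Fernet(entity_keys["bob"]).decrypt(tickets["bob"])

    # ...and the parties can now communicate under the shared session key.
    message = Fernet(alice_session).encrypt(b"hello Bob, this stays between us")
    print(Fernet(bob_session).decrypt(message))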

In addition to the standard HTTP protocol, most web-servers (and browsers) also support the Secure Sockets Layer (SSL) protocol using HTTPS. 

SSL allows your browser and a web server to use public and private keys to establish a connection that then proceeds using faster symmetric encryption.
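For example, Python's standard 'ssl' module can open such a connection; the host name below is just a placeholder, and the default context checks the server's certificate against the local store of trusted certificate authorities, much as described above.

    import socket
    import ssl

    context = ssl.create_default_context()  # verifies the server's certificate chain

    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            # The handshake used public-key cryptography; the session itself is symmetric.
            print("negotiated protocol:", tls.version())
            print("server certificate subject:", tls.getpeercert().get("subject"))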



Bibliography


Neuman, B.C. & Ts'o, T. Kerberos: An Authentication Service for Computer Networks. USC/ISI Technical Report ISI/RS-94-399.



Potential Solutions to Technological Problems




“It has become fashionable to say that where science and technology have created problems, it is only more scientific understanding and better technology that can carry us past them. The cure for bad management is more management. The cure for specialised research is more costly interdisciplinary research […] The pooling of stores of information, the building up of a knowledge stock, the attempt to overwhelm present problems by the production of more science is the ultimate attempt to solve a crisis by escalation”.

Illich, I 1973 Tools for Conviviality, Fontana/Collins.


I start by looking at E.F. Schumacher on Technology for a Democratic Society (printed in McRobie, 1981), speaking about the Concorde project at that time (the late 1970s):

“The proper question to ask is, “Is it a very intelligent development in terms of the energy situation of the world? You have to give the answer, I don’t. Because that is a big problem. Is it a good thing in terms of environmental quality? It may be, of course; people may say the environment is greatly improved by the sonic boom. Is it appropriate technology in terms of fighting world poverty? Does it help the poor? Is it an appropriate technology from a democratic point of view? Perhaps getting a greater equality among people? You can take every single problem of this society, and you have to ask, is the technological development appropriate? Or is it some sort of little boy’s engineer dream? We can do it, so lets do it.”  (Pp 8-9) 

(In a modern context this may also bring to mind the motivation for Facebook as portrayed in the film ‘The Social Network’.)

Schumacher goes on to recommend an extension of the “technology assessment” conducted by organisations, which often looks at questions such as whether technologies are profitable. He proposes that a question be added along the lines of: “Is it relevant to the real problems of mankind?” Of course, it appears that these days such questions are even more difficult to answer, as adoption and use patterns are unpredictable (Harford, 2011). A technology might be useful but not take off. Or it might not be intended to address the problems of mankind, but end up being used in that way in any case (for example, Twitter’s role in the Arab spring, or YouTube as a means of documenting and spreading news of barbarous acts by governments; Doctorow, 2012). But even in these cases, once a beneficial use is identified, more might be done to assist in dealing with the “problems of mankind”, such as better protecting whistleblowers and those documenting atrocities from being tracked down using the histories and links provided by the technological tools they use (Doctorow, 2012).

Schumacher thinks real genius makes things (such as technology and production) simpler, not more complex (which, in his opinion, is easily done). It certainly seems that achieving simplicity is a rare skill. Apple’s success is arguably due to its ability to make technology simple to use. Facebook, when first released, was a remarkably basic tool with a very simple interface. It has of course become more complex over time, perhaps highlighting Schumacher’s point that anyone can make something more complex, and many are now doing this to Facebook. Schumacher also links complicated technology and processes with many other social problems. The need for large numbers of highly specialised experts tends to force work and development into large cities, stripping other areas of both opportunity and talent and contributing to congestion and related problems in cities (McRobie 1981). Of course, technology does offer the potential to telecommute. But telecommuting also has its critics, who attribute some problems, including major industrial disasters, at least in part to physical distance. Take for example the thoughts of Heffernan (2011):

"Technology can maintain relationships but it wont build them. Conference calls, with teams of executives huddled around speaker phones, fail to convey personality, mood and nuance. You may start to develop rapport with a person who speaks the most - or take an instant dislike to them. But you'll never know why. Nor will you perceive the silent critic scowling a thousand miles away. Video conferencing distracts all its participants, who spend too much time worrying about their hair and whether they are looking fat, uncomfortable at seeing themselves on screen. The nervous smalltalk about the weather - its snowing there? Its hot and sunny here - betrays the anxiety about the vast differences that the technology attempts to mask." (pg 220)

"Physical distance isn't easily bridged, no matter how refined the technology. Instead we delude ourselves that, because so many words are exchanged - email, notes and reports - somehow communication has taken place. But that requires, in the first instance that the words be read, that they are understood and that the recipient knows enough to read with discernment and empathy […] It's extremely hard to communicate well with people you don't really know whose concerns you cannot see." (pg 220)

Heffernan (2011) strengthens this point by drawing on recent psychological research that demonstrates how various distances negatively affect judgments and empathy. Heffernan (2011) continues to talk about the problems of managing our organisations more generally as follows:

"Why do we build institutions and corporations so large and so complex that we can't see how they work? In part, it is because we can. Human hubris makes us believe that if we can imagine something; and if we can build it, we can understand it. We are so delighted with our own ingenuity and intelligence it gives us a sense of mastery and power. But the power is problematic as it takes us further and further from the reality we have built. Like Daedalus, we build labyrinths of such cunning complexity that we cannot find our own way out. And we are blind to the blindness these complex structures necessarily confer. So we forget all about it." (pg 239).

Heffernan’s theme in the above seems to be that we have created something we can no longer control. Illich (1973) has a similar take, but frames the problem as one closely linked to technology:

 “The Hypothesis was that machines could replace slaves.  The evidence shows that, used for this purpose, machines enslave men.  Neither a dictatorial proletariat, nor a leisured mass can escape the dominion of constantly expanding industrial tools.” (pg 23)

Illich (1973) proposes a solution similar to that mooted by Schumacher (of “intermediate” technology):

“People need new tools that ‘work’ for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves.” (pg 23)

Illich (1973) continues on to relate this ‘enslavement’ to society more generally:

“People who have climbed up the ladder of schooling know where they have dropped out and how uneducated they are. Once they accept the authority of an agency to define and measure their level of knowledge, they easily go on to accept the level of appropriate health or mobility. It is difficult for them to identify the structural corruption of our major institutions. Just as they come to believe in the ‘knowledge stock’ they acquired in school, so they come to believe that higher speeds save time and that income levels define well-being, or, as an alternative, that the production of more services, rather than more goods increases the quality of life”. (pp 32-33).

Illich (1973)’s argument now starts to converge more closely to the final quote of Heffernan (2011) above:

“The institutional definition of values has made it difficult to focus our attention on the deep structure of social means. It is hard to imagine that the division of sciences, of labour and of professions has gone too far.  It is difficult to conceive of higher social effectiveness with lower industrial efficiency.” (pg 33)

Interestingly, Illich (1973) describes the types of tools that he thinks are ‘convivial’ (desirable) as follows:

“Tools foster conviviality to the extent to which they can be easily used, by anybody, as often or seldom as desired, for the accomplishment of a purpose chosen by the user. The use of such tools by one person does not restrain another from using them equally. They do not require previous certification of the user. Their existence does not impose any obligation to use them. They allow the user to express his meaning in action”. (pg 35)

This concept of a ‘convivial tool’, as one which is available to anyone as often or seldom as desired, is in direct contradiction to the concepts of property which typically apply to tools, such as exclusive use and ownership. Illich (1973) also links his discussion quite clearly to concerns about social organisation, and another author has more recently raised the same themes in relation to Open Source development. Weber (2004) opens his book on Open Source development with the following statement:

“This is a book about property and how it underpins the social organisation of cooperation and production in a digital era. I mean “property” in a broad sense – not only who owns what, but what it means to own something, what rights and responsibilities property confers and from where those ideas come from and how they spread.” (pg 1)

Having established that he is looking at Open Source as set of issues about property and social organisation more generally, Weber (2004) explains his notions of property as follows:

 “The conventional notion of property is, of course, the right to exclude you from using something that belongs to me. Property in open source is configured fundamentally around the right to distribute, not the right to exclude. If that sentence feels awkward on first reading, that is a testimony to just how deeply embedded in our intuitions and institutions the exclusion view of property really is.” (pg 1)

Weber (2004) elaborates his broad perspective on Open Source as follows:

“Open source is an experiment in building a political economy – that is a system of sustainable value creation and a set of governance mechanisms”

In relation to what insights open source offers on society, and on organisations in particular, Weber (2004) contrasts the established mindset with the challenges presented by the digital economy by describing some counter-intuitive phenomena around Open Source software development (e.g. Linux as "the impossible public good"; Smith and Kollock, 1999). He talks about how ideas on organising production are largely about ideology. The standard answer on how to organise large, complex projects is to arrange a large, centralised, hierarchical firm with formal structures of authority (i.e. a bureaucracy). However, such industrial-style organisation suffers from the expense and awkwardness of moving information and knowledge around, as well as of monitoring the actions of individuals and enforcing decisions. He then suggests that the software produced by such an organisation is buggy and expensive - more buggy and more expensive (to the final user) than many popular Open Source software products.

Weber (2004) goes on to compare the traditional firm with an "ideal type" of open source project and notes the following contrast: "Each person is free to choose what he wishes to work on or to contribute. There is no consciously organised or enforced division of labour" (pg 62). One interesting point here is that in practice there seems to be no one way of organising and running a successful, large-scale open source project. The organisational structures and decision-making processes vary considerably across projects: compare, for example, the organisation and process around Apache (the most widely deployed web-server on the internet) with those around Linux (the very popular and reliable operating system). The former is highly structured around a large group of leaders (the core); the Linux project is less structured and more organic, based around the leadership of Linus Torvalds, which is accepted voluntarily by the members of the project, although there is nothing to prevent anyone starting their own Linux project using what is already available. So to what, then, can the success of open source be attributed?

References:

Doctorow, C. 2012. ‘The internet is the best place for dissent to start’. The Guardian, 3 Jan. <http://www.guardian.co.uk/technology/blog/2012/jan/03/the-internet-best-dissent-start>

Harford, T. 2011. Adapt: Why Success Always Starts with Failure. Little, Brown.

Heffernan, M. 2011. Wilful Blindness: Why We Ignore the Obvious at Our Peril. Simon & Schuster.

Illich, I. 1973. Tools for Conviviality. Fontana/Collins.

McRobie, G. 1981. Small is Possible. Abacus.

Smith, M.A. & Kollock, P. (eds) 1999. Communities in Cyberspace. Routledge, London, p. 230.

Weber, S. 2004. The Success of Open Source. Harvard University Press.

Information and Meaning in a Social Context



Marcuse (1964) argued that in society - even back then - there was an emerging pattern of one-dimensional thought. This pattern of thought reduces ideas which transcend the established universe of discourse and action to the terms of that established universe, or else repels them. He suggests the trend to one-dimensional thought may be related to developments in scientific method, evident as operationalism in the physical sciences and behaviourism in the social sciences. The common feature of both is a total empiricism in the treatment of concepts; the meaning of concepts is restricted to the representation of particular operations and behaviour.

Marcuse provides a variety of examples of this idea. One is the use of acronyms such as NATO and UN. He suggests that such abbreviations may help to repress undesired questions:
“NATO does not suggest what North Atlantic Treaty Organisation says, namely a treaty among the nations on the North-Atlantic – in which case one may ask questions about the membership of Greece and Turkey […] UN dispenses with undue emphasis on ‘united’ […] and the AEC is just one administrative agency among many others. The abbreviations denote that and only that which is institutionalised in such a way that the transcending connotation is cut off. The meaning is fixed, doctored, loaded. Once it has become an official vocable, constantly repeated in general usage, ‘sanctioned’ by the intellectuals, it has lost all cognitive value and serves merely for recognition of an unquestionable fact.” (pg 84)

Marcuse (1964) now links this to his earlier idea that terms are defined by operations and behaviour:
“This style is of an overwhelming concreteness. The ‘thing identified with its function’ is more real than the thing distinguished from its function and this [….] creates a basic vocabulary and syntax which stand in the way of differentiation, separation and distinction. This language, which constantly imposes images, militates against the development and expression of concepts. In its immediacy and directness, it impedes conceptual thinking; thus it impedes thinking. For the concept does not identify the thing and its function […] the functionalised, abridged and unified language is the language of one-dimensional thought.” (pg 84-85)

The effects described by Marcuse above are subtle and difficult to appreciate at first glance (at least they were for me!). The following passage perhaps helps in understanding what he is aiming at, as it relates well to contemporary marketing language:

“[…] familiarity is established through personalised language, which plays a considerable role in advanced communication. It is ‘your’ congressman, ‘your’ highway, ‘your’ favourite drugstore, ‘your’ newspaper; it is brought ‘to you,’ it invites ‘you’ etc. In this manner, superimposed, standardised, and general things and functions are presented as ‘especially for you.’ It makes little difference whether or not the individuals thus addressed believe it. Its success indicates that it promotes the self-identification of the individuals with the functions which they and the others perform.” (pg 82)

In this sense we can easily relate the above to “MySpace”, “MyKi”, “MySchool”, “YouTube” and so on, which are so prevalent, particularly in the digital world. You might well ask what significance this has. Marcuse (1964) argues that: “such unified, functional language is an irreconcilably anti-critical and anti-dialectical language. In it, operational and behavioural rationality absorbs the transcendent, negative, oppositional elements of Reason.” (pg 86).

Marcuse (1964) continues to explain that such: “‘closed’ language does not demonstrate and explain – it communicates decision, dictum, command. Where it defines, the definition becomes “separation of good from evil”[1]; it establishes unquestionable rights and wrongs, and one value as a justification of another value. It moves in tautologies, but the tautologies are terribly effective ‘sentences’”. (pg 89).
In short, he argues that this and the other features he describes make such language similar in effect to Orwellian Newspeak. Marcuse quotes the equation Reason = Truth = Reality, which he claims “joins the subjective and the objective into one antagonistic entity” (pg 105). Apart from its topic of the use of information and meaning, this theme is poignantly relevant to this unit in the sense that Marcuse relates this manipulation of language to the technical apparatus of production, distribution and automation, arguing that such systems cannot be isolated from their social and political effects (see pg 13). In this sense, we can view the above as a claim about one social effect (there are no doubt many others) linked to the use of technology in society. Marcuse (1964) in fact identifies a number of social effects which he sees as outcomes of our development as a technological society. These include ‘free’ institutions and ‘democratic liberties’ that are actually used to restrict freedom, repress individuality, disguise exploitation and limit human experience. Interestingly, a very similar claim has recently been made, and clearly articulated, in Monbiot (2011):

“Modern libertarianism is the disguise adopted by those who wish to exploit without restraint. It pretends that only the state intrudes on our liberties. It ignores the role of banks, corporations and the rich in making us less free. […] By this means they have turned “freedom” into an instrument of oppression.”

Marcuse (1964) also laments the effects of positivism which he believes encompasses the following concepts: “(1) the validation of cognitive thought by experience of facts; (2) the orientation of cognitive thought to the physical sciences as a model of certainty and exactness; (3) belief that progress in knowledge depends on this orientation. Consequently, positivism is a struggle against all metaphysics, transcendentalisms, and idealisms as obscurantist and regressive modes of thought.” (pg 140).

He continues to say:

“Much of that which is still outside the instrumental world – unconquered blind nature – now appears within the reaches of scientific and technical progress. The metaphysical dimension, formerly a genuine field of rational thought, becomes irrational and unscientific.” (pg 141)

Thus it appears that, like Schumacher (1977) – whom we looked at in the first lesson – Marcuse (1964) is lamenting the loss of metaphysics and connecting it to an impoverished humanity, suggesting that this loss keeps man from “orienting himself in the given environment” (pg 141). He sums up as follows:
“when Hume debunked substances, he fought a powerful ideology, while his successors today provide an intellectual justification for that which society has long since accomplished – namely the defamation of alternative modes of thought which contradict the established universe of discourse”. (pg 141).  
Along the same lines, Jensen and Draffan (2004) quote the modern philosopher Stanley Aronowitz as follows:

“For some scientists, everything outside the box – defined by the rules of scientific discourse – must be ignored. And they often get very agitated when you call them on the game they are playing [which is] Religion. Teleology. Control. The desire for prediction, and ultimately the desire to control the natural world, has become the foundation of their methodology of knowing truth”. (pg 40)

Aronowitz also states:

“ [...] if you can convince people that science has a monopoly on truth, you may be able to get them to believe also that the knowledge generated through science is independent of politics, history, social influences, cultural bias, and so on.” (pg 25) Jensen concludes: “And in the bargain you get them to doubt their own experience”.

This accusation of having created a system that excludes certain ways of thinking and communicating is not limited to Marcuse and Aronowitz. Whereas Marcuse (1964) associates the manipulation of language with “neo-conservatism and neo-liberalism” (pg 89), Kozy (2012) makes a similar argument that the models used by neo-classical economists manipulate our understanding. Kozy (2012) argues that the models used by economists are abstractions. This is probably true of nearly all models of any complex system; however, his claim goes further, to an accusation of deliberate simplification of complex real-world scenarios in which complexities are purposefully omitted – along the lines of the simplifications of thought identified by Marcuse, but more brazen – in favour of a desired conclusion, which is then presented with a degree of certainty beyond what is possible in a complex, changing environment. Thus Kozy (2012) claims that the use of externalities and of specific circumstances (which are not taken into account in economists’ general theories) is in fact a process of deliberate subtraction of factors which could influence the model and the conclusions based on it:

“Economists build models by what they call ‘abstraction.’ But it's really subtraction. They look at a real world situation and subtract from it the characteristics they deem unessential. The result is a bare bones description consisting of what economists deem economically essential. Everything that is discarded (not taken into consideration in the model) is called an "externality." So the models only work when the externalities that were in effect before the models are implemented do not change afterward.”

This idea of simplification appears yet again in an argument by Jensen and Draffan (2004). Again the accusation echoes the criticisms of Marcuse (1964), who accuses our society of operating with a flawed concept of rationality; Jensen and Draffan’s (2004) view can be captured with the following quote:

“there does happen to be one definition under which our culture is as rational as it pretends to be, which is that rationalisation is the deliberate elimination of information unnecessary to achieving the immediate task […] to make this slightly more specific: If your goal is to maximise profits for a major corporation, all you need do is ignore all considerations other than that.  If your goal is to maximise gross national product (that is, the rate at which the world is converted into products), then all you need do is ignore everything that might stand in the way of production. As we see.” (pg 91)

This is a very similar argument to Kozy’s (2012):

“[…] employment alone is not a sufficient condition for prosperity; full employment can exist in an enslaved society along side abject poverty, and an increasing GNP does not mean that an economy is getting better.”

This is particularly interesting in relation to modern corporations which collect and process enormous amounts of data from their customers (for one example see here).  Such data collection and monitoring will also give a very narrow picture of the impacts of the organisation, particularly if this is the focus of the organisation’s efforts to determine its effects on the people and society around it (see the activity in relation to this).

We certainly have more means of collecting and processing data than ever before in human history. The most useful purpose (beyond automatically tagging people in Facebook photos) seems to be to assist police in tracking and finding criminals (based on the list in Moses, 2012). In a recent newspaper article (Moses, 2012) it was touted that the latest facial recognition software could identify 11 out of 12 “persons of interest” among “4000 passengers from all over the world.” Professor Brian Lovell from the University of Queensland is quoted as saying:

“What we specialise in is non-cooperative surveillance, that means the person doesn't have to be aware that they are being photographed to be recognised.”

Jensen and Draffan (2004) are particularly concerned about this type of surveillance. They quote Oscar Gandy’s 1990s description of the “panoptic sort” as follows:

“the complex technology that involves the collection, processing, and sharing of information about individuals and groups that is generated through their daily lives as citizens, employees and consumers and is used to coordinate and control their access to the goods and services that define life in the modern capitalistic economy” (pg 112)

Winner (1992) presents a number of suggestions in relation to the problems created by technology, the most pertinent of which appears to be the following:

“.. I could suggest a supremely important step – that we return to the original understanding of technology as a means that, like all other means available to us, must only be employed with a fully informed sense of what is appropriate. Here, the ancients knew, was the meeting point at which politics, ethics and technics came together.  If one lacks a clear and knowledgeable sense of which means are appropriate to the circumstances at hand, one’s choice of means can easily lead to excess and danger” (pg 327).   

Strangely, this is an almost identical recommendation to that given independently by Schumacher and Illich in the 1970s, as discussed in an earlier post.

References:

Jensen, D. & Draffan, G. 2004. Welcome to the Machine: Science, Surveillance and the Culture of Control. Chelsea Green Publishing.

Kozy, J. 2012. ‘Abstractions Versus the "Real World": Economic Models and the Apologetics of Greed’. Global Research, 13 Feb (available here).

Marcuse, H. 1964. One Dimensional Man. Sphere Books (abridged version available here).

Monbiot, G. 2011. ‘How Freedom Became Tyranny’. 19 Dec. <http://www.monbiot.com/2011/12/19/how-freedom-became-tyranny/>

 


[1] Noam Chomsky argues that official definitions of terror were revoked in the US as they could not be constructed such that they excluded US military action. See: Chomsky, N 2003, Hegemony or Survival, Allen and Unwin.