Autonomous Archives: 08 Matthew Fuller
Cinematographer: Nisha Vasudevan
Duration: 00:44:05; Aspect Ratio: 1.778:1; Hue: 20.460; Saturation: 0.106; Lightness: 0.364; Volume: 0.190; Words per Minute: 135.737
Summary: Properties of the Autonomous Archive, a 2-day event hosted by CAMP, was a gathering of key internet platforms, archival initiatives and related infrastructures.
The discussion was intended to focus on the qualities and powers of contemporary archives: including their stable or emergent properties, their performance and beauty, survival and capacity, and autonomy.
"In declaring their autonomy, archives seek to produce norms beyond normativity, and ethical claims beyond the law."
- excerpt from Pad.ma, 'Ten Theses on the Archive', no. 9
Day one was a day of presentations and discussions: "Show me your Properties!"
02 Jan Gerber and Sebastian Lutgert
: 'people annotate describe make add'
03 Kenneth Goldsmith
: 'If we had to ask permission, we wouldn't exist: a brief history of UbuWeb and the law'
04 SFG (Shared Footage Group)
: 'Its past and future'
05 Sundar and Gurung
: 'Archiving in the vernacular, experiences from Tamil and Nepali'
06 Rochelle Pinto
: 'The mundane state - historians in a state archive'
07 Peter S. - flattr
: 'Flattr, the need for alternative financial views'
08 Matthew Fuller
: 'Two evil media stratagems: Structured data & Know your sorts'
09 Liang and Lutgert - Leaks
: 'Privacy and Scandal: Radia tapes and Wikileaks'
Matthew Fuller (UK) is an artist, author and lecturer. He is a Reader at Goldsmiths College, University of London, and he teaches Media and Culture at other programmes such as Transmedia. Fuller's cultural investigations include Media Ecologies and Software Studies. http://www.spc.org/fuller/
Books authored by Matthew include:
'Behind the Blip: Essays on the Culture of Software' (Autonomedia, 2003)
'Media Ecologies: Materialist Energies in Art and Technoculture' (MIT Press, 2005)
'Softness, interrogability, general intellect, art methodologies in software' (Media Research Centre, Huddersfield, 2006)
NM: And now we have Matthew Fuller, who's with Goldsmiths, London, and I think we are going to move from our two favourite archival projects to 'Two Evil Media Stratagems'.
Max Mueller Bhavan, Mumbai
MF: Yeah, in a sense it kind of follows the last two presentations relatively well. What I'm trying to talk about is a kind of pragmatics, a subset of - more pieces of - what Rochelle was talking about in terms of mundane power. I'm interested in the question of the mundane power of the technical in these two presentations.
MF: So this is a presentation of two pieces of writing from a larger project which I'm doing with the philosopher and programmer Andy Goffey, gathered under the heading 'Evil Media'. We've written a number of texts along these lines. These two form the kind of cover version(?) of this.
MF: The perspective of the project is to take Google's maxim - 'Don't be Evil' - and think about what you would do if you were going to be evil in the present era. What would actually be different from what Google or most of the web companies do? Would you need to do anything more? Or would you just observe what they do in a certain way and draw out something about the kind of point of view of the contemporary era?
MF: So it's about a kind of pragmatics; it's about a kind of amorality as a form of evil in the present era.
MF: So we cover the grey media of information systems. We look at management, we look at audit systems, we look at psychology and the drugs used to treat psychology. We look at minor cases of affect, like irritation, rather than major affects like fear and love and so on. I'm interested in how formalisms, drawn generally from logic, are integrated into social and cultural processes such as archives in the present era.
MF: So it's about power taking on a kind of processuality. But it's also about power's obfuscatory nature, its tedium, and especially the powerful quality of stupidity in the present era - which seems to be the kind of dominant mode for this century.
MF: So... the literary material we draw upon is Schopenhauer's 'How to Win Arguments', which is a set of techniques for destroying your enemies in arguments - highly worth reading; Machiavelli's 'The Prince', of course; and Balthasar Gracian's work on the value of life. It also draws upon self-history, and then what is kind of ephemerally called the Jesuit code(?).
MF: I want to present two stratagems from this project. I will use the form of the stratagem because these are small, epigrammatic texts that are inherently rhetorical technologies in themselves, and they afford a particular kind of play with language.
MF: These two are particularly about the stuff of the archive in the present era - the era of software, and of computational and networked digital media. And we are trying to think about what kind of agency these things have. The two stratagems are: the first is Structured Data, which is about data structures; the second is Know Your Sorts, which is about sorting algorithms.
MF: Structured data - The concept of the algorithm has been pressed into service in recent years for thinking through processes that would not at first blush be considered to have any of the features that computer scientists would associate with their core area of expertise: the dynamics of particular kinds of cultural forms, social networking or brand behaviour, for example.
MF: Extended in this way, the algorithm, as an idea, thus becomes an "analyzer" of social relations, particularly for processes that don't appear to have any obvious, explicit implementation as such.
MF: Of course, the algorithmic policies of particular kinds of social relations, grey media objects and the interfaces they have with the everyday world are readily evident in a number of fields - the filtering of social housing, night club bouncers operating a door policy, putting together a burger, sorting a mate (on a dating site): all attest to the algorithmicity of social life, as do the routine workarounds used to deal with poor software. (That is the kind of social engineering of archives that Rochelle was talking about.)
MF: And the algorithm has a particularly seductive quality: the effectivity of a finite set of steps for solving problems, and the tacit assumption that the specification of an algorithm can change, points to a certain formal level of control.
MF: Considerably less attention has been paid by sociologists, media and cultural theorists and the like to the near-indispensable correlate of the algorithm, that without which - actual, explicit and formal, or virtual, implicit and informal - an algorithm would have nothing with which to work: the data structure.
MF: Part of the interest in considering things algorithmically has to do with their imitability, their replicability (in the sense that an algorithm extracts an identical, often apparently optimised set of steps from a process) and the inexorability of their finite progress towards a termination point.
MF: Imitate. Replicate. Terminate. Optimally. An algorithm is sometimes considered the sum of logic and control. Equally, a program may be considered the sum of algorithms and data structures. (Both of these are famous formulations in computer science.)
MF: But if a controllable universe is a sum of programs, (a summa programmae,) then data structures are essential to the effective operation of processes of control, and as such are an equivalently important stratagematic "analyzer" of social relations.
MF: Other stratagems we offer in the Evil Media project have drawn on the complex interplay of the formal and the empirical evident with the diffusion of programmable mediation throughout culture, and attended in particular to the tacitly prescriptive nature of specific media forms (e.g. the process of regularizing expression, which forms a way of ensuring the existence of parseable, structured pattern).
MF: The requirement that data be structured extends this into the more general abstract architecture of social and cultural practices organized by, for and as information. Here, questions of mnemonic practices such as archives are core. If algorithms work most efficiently with well-designed data structures, then the organization of people and things in a manner commensurate with these structural forms will facilitate the smooth flow and control of data.
MF: Let's consider the most basic operation of an algorithm, one for sorting, say: as discussed in (what I'll talk about next) the stratagem 'know your sorts'. Sorting algorithms are typically used to arrange lists of data such as numbers, names, credit codes or other intimacies. Many programming languages natively provide the array data structure, which allows you to store collections of elements in one contiguous set of spaces in memory (so you might declare an array of type integer or float, for example).
MF: If your sorting algorithm was for sorting whole numbers, you might pass it an array of integers to work with. However, the array might not be the best data structure to use in a specific case. Arrays allocate memory in advance, so you need to be able to say how big your list will be. What if you don't know how big the list is going to be, or if its size will change? It would be like renting a warehouse for an archive without any clear idea of the amount of stock you need to hold.
MF: Too big and you have excessive overheads. Too small and your precious documents have no home. Instead, you might choose to use a linked list, which is a bit like an array except that (since each item contains a link to the next) it can grow or shrink as needs require, and doesn't require the advance allocation of contiguous blocks of memory.
MF: The point is that data structures help optimize the operations of algorithms by providing a way of organizing memory - computational, living, or material memory - so as to extract the data located in it as rapidly and effectively as possible. (For direct access, arrays are quicker than linked lists, for example, in which each item must be reached by following the link from the one before it.) Different kinds of data structure provide different ways of organizing that memory, ways that are more or less appropriate to the task in question. This is as true of a task performed by humans as by machines.
MF: Conversely, restructuring an environment without consideration for the underlying implementation of data extraction processes thwarts productivity: virtualise an office into a paperless universe without due consideration for the operations dependent on visually easy to identify stacks of papers and files or face to face conversations (now distributed across an intranet, say) and the speed of data retrieval will plummet.
MF: Other forms, more suited to such kinds of materialization need to be invented and scrutinized. As quasi-cybernetic computational entities, formally and informally specified data structures and the operations that can be performed on them, become critical mediators, and the space of lived experience is riddled through with a complex mix of virtual and actual affordances and opportunities.
MF: So if we think of the archive, we think of the index card - the index as a kind of classic form of data structure; what I'm trying to do is draw out a politics of the data structures that are circulated in digital systems. One example, at a higher level of abstraction, is the data structure of the user, the reader, or the payments broker.
MF: A data structure forms a sort of intermediate level, an abstraction mechanism, in the process of addressing machine memory. An operating system will ultimately take responsibility for how memory is allocated, but a data structure offers a way of abstracting from things so as to organize them for processing most effectively, without the programmer needing to know the specific details of how a particular type of machine organizes memory allocation.
MF: This process of abstraction from the specifics of machine-addressable memory is extended by the notion of the abstract data type, which defines purely theoretical kinds of data structure and, specifically, the operations that can be performed on them: containers, double-ended queues, multimaps, priority queues, heaps and trees are all forms of abstract data type (ADT).
MF: The implementation of an abstract data type entails a form of black boxing, in the sense that nested within the ADT there may be a range of different data structures operating, the details of which will be hidden. Equally, different implementations of the same abstract type could entail different data structures. I use a diary, you use a personal organizer. I pile my papers on the desk. You put yours in a filing cabinet. I use memory sticks, you use a networked drive, but we are both working.
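The black-boxing described here can be sketched as two interchangeable implementations of the same abstract queue type. (A minimal Python sketch; the class names and queued items are hypothetical.) A caller uses only the ADT's operations, while the nested data structure, and its costs, stay hidden:

```python
from collections import deque

class ListQueue:
    """FIFO queue backed by a plain list: dequeue shifts every element, O(n)."""
    def __init__(self):
        self._items = []
    def enqueue(self, item):
        self._items.append(item)
    def dequeue(self):
        return self._items.pop(0)

class DequeQueue:
    """Same abstract type, different data structure: popleft is O(1)."""
    def __init__(self):
        self._items = deque()
    def enqueue(self, item):
        self._items.append(item)
    def dequeue(self):
        return self._items.popleft()

# A caller sees identical behaviour from either implementation:
# diary or personal organizer, desk pile or filing cabinet.
for q in (ListQueue(), DequeQueue()):
    for doc in ["memo", "file", "letter"]:
        q.enqueue(doc)
    print(q.dequeue())  # "memo" both times
```

This is exactly the "I use a diary, you use a personal organizer" point: the interface is shared, the implementation details are the hidden part.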
MF: The development of thinking about data and data structures, about abstraction and abstraction mechanisms in computing science offers precious indicators for the well-crafted development of control.
MF: Generally, the movement is in the direction of the creation of ever newer layers of abstraction – greater degrees of distance from concrete "machine addressable memory" - and hence creative of forms of mediation. These layers of abstraction have their own operatives and experts to be nurtured – management consultants, facilitators and knowledge engineers having a particular interest in the shaping of new abstract data types. (We can say that archivists also operate in this kind of domain.)
MF: But as abstraction might be understood as a kind of selective forgetting, data structuring can produce difficulties: a particular kind of ignorance about implementation details, which leads humans to a kind of infernal alternative: refashion yourself to meet the imperious demands of more and different types of data, or consign yourself to data oblivion.
MF: Alternately, life may be enhanced as a set of more or less well-linked data structures, so that with each synchronizing beat of the ever more rapid CPU clock, problems can be processed, sorted, filtered and queued at ever improved rates. (We can think of examples of American positive-thinking techniques where you try to reformat the human psyche to be closer to a data set - 'Getting Things Done', for example.)
MF: It is not difficult to realize how important it is to insinuate oneself into the virtual abstract data type of the person: try to work with the glut of email that is a characteristic feature of many lives without appropriate configuration (file hierarchies and so on), and life quickly sinks under a welter of unread and unknown obligations. Here, hell is not - as Sartre thought - other people; it is an unstructured inbox, or an unmaintainable repository. The ongoing exteriorisation of memory, initiative, structure and spontaneity in gadgets, applications, devices and networks requires and induces this structuring of data, both in the explicit computational sense and in the implicit architectures of everyday life.
MF: Now we're moving on to the 'Know your Sorts' text.
MF: Sorting takes a sequence of entities and permutes that sequence in order to arrive at a result which renders it more useful. As a stratagematic force in itself, therefore, sorting should be understood both as something that yields results, in the form of a ranking, and as something that generates its own terms of composition, shifting relations between the things that are sorted in ways that imply multiple kinds of use and attention.
MF: Amongst other qualities, permutation, the process of the shifting and sifting of the order of things, has its own aesthetics, which renders it distinct from the conceptual lock-down of nominally Platonic essentialism favoured in certain kinds of mathematically grounded accounts of software. (By this we're trying to shift the emphasis in the aesthetics of software towards thinking a bit more processually, rather than in the kind of (?) Platonic mould in which computer science tends to see aesthetics.)
MF: Such an aesthetics establishes a vivid dynamic of interplay between algorithms, the machinic context of hardware and software resources, and the data being handled, each of which makes demands on the others, and which combine to render each permutational process individual.
MF: Further iterations and enfoldings of sorting in other media, such as social processes, make it particularly interesting. In such a context, knowing your sorts, gaining a sense of the aesthetic dimensions of ordering is crucial. But aside from the way in which it engages the sensorial aspect of being, sorting has a profound and intricate relationship to systems of ordering.
MF: Amongst these, sorting is something distinct from categorization, to which it is naturally affiliated, and from enumeration. Categorisation may be the result of a sort, and categories may also be sorted (if we think of the Gujarati census data, this question of how sorting algorithms work on that material becomes crucial), but it is the permutational moment, and the kinds of power it produces and invokes, that we are concerned with here.
MF: As a distinct field of thought, computer science usefully maintains an intellectual and technical reserve towards its application, its wider place in the world. As such it maintains relations of pretended universality, in that everything finds its place in computation, but from which computation also establishes a separateness.
MF: In such a context of autonomy, sorts are evaluated in terms of the optimal use of resources in both processing code and handling data during runtime, and in the speed of execution in relation to different sorting problems. In the field, such questions of optimality may be complicated by those of the efficiencies of management, but since most sorting algorithms are readily available within standard libraries such forms of interference tend not to coincide.
MF: Within computing's articulation of sorts, material is generally handled via an alphanumeric key which maintains a relation to the records, numbers or other single or clustered entities that it in turn is able to treat as satellites. (So, just as we see a distinction between logistic and semantic markup when we're talking about markup languages, this is another example: material is handled by tokens or keys, which are then sorted.)
MF: What is sorted then is not at first the 'primary' data, such as a record or file, or what it may refer to, such as an event or a person. What sorting first acts upon is the numeric values by which they are handled. Once these are organised, sorting can concatenate out. As such, a general literacy of sorting is to be recommended.
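A minimal Python sketch of this key-and-satellite arrangement (the records and their values are hypothetical): the sort compares only the numeric key, never the 'primary' data, and the records travel with their keys.

```python
# Hypothetical records: the 'satellite' data that travels with its sort key.
records = [
    {"id": 204, "name": "Pinto"},
    {"id": 101, "name": "Fuller"},
    {"id": 150, "name": "Goldsmith"},
]

# The sort never inspects the names themselves, only the numeric key;
# once the keys are ordered, the satellite data concatenates out with them.
by_key = sorted(records, key=lambda r: r["id"])
print([r["name"] for r in by_key])  # ['Fuller', 'Goldsmith', 'Pinto']
```

The `key=` parameter makes the separation explicit: what is compared is the handler, and whatever the record refers to in the world is reordered only as a consequence.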
MF: The arguments against instrumental reason (such as those of Horkheimer and so on), averring that it is one more form of knowledge which subordinates means to ends, are usefully transformed by other forms of sorting in which not the numeric handler but rather the data itself is understood to have an intrinsic and indexical relation to things in the world.
MF: "Social sorting", as it is termed by scholars in critical surveillance studies (and as it is implemented by the British police - stop-and-search on suspicion, and so on), adopts a mode of sorting in which mechanisms for the management of entitlement, control and protection are deployed to maximise efficiency, convenience and speed.
MF: Opponents of sorting tend not to concern themselves with the underlying logic of such rules. (So when the police in Bombay say that such and such an area is a VIP area and only certain people are allowed to enter, the particular mechanisms by which they do that sorting aren't seen as significant - just the act of sorting itself. What we aim to argue is that the process of being sorted is itself a kind of stratagem.)
MF: Their concern is rather with those moments at which such rules become inefficient, inconvenient, slow or unjust. They may also attend to the way in which vague social classifiers, such as race, are mobilized to provide surety and the opportunity for the randomized, unjust or unaccountable exercise of power. The likelihood of racial category providing insight into someone's level of criminality is about equal to that of analysis on the basis of a shared name; and that this is so does not preclude either association being made. (For example, small children of 5 or 8 who share a name with people on US terror no-fly lists may not be allowed to board flights, because border staff have been told by their database that these people are terrorists, even though they are barely out of diapers.)
MF: The primary method of sorting in computing is sorting by comparison. In such cases, data is sorted in relation to other data, for instance whether it has a lesser or greater numeric value. Sorting by comparison implies that the range of data to be sorted is generally not known in advance, or does not need to be. In cases where it is known, algorithms such as bucket sort, radix sort and pigeonhole sort amongst others work with effective addressing schemas to allocate results. Differences between these can be accounted for at the level of speed, for instance when using a search engine to query for a common search term whose results are pre-ranked, compared to those that are rarer and thus need to be generated on the fly.
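The distinction can be sketched in Python (the data set is hypothetical): a pigeonhole sort allocates one hole per possible value and drops each item straight into place, so it makes no pairwise comparisons at all, but it only works when the range of values is known in advance, exactly as described above.

```python
def pigeonhole_sort(values, lo, hi):
    """Non-comparison sort, usable only when the range [lo, hi] is known
    in advance: each value goes straight to its pre-allocated hole."""
    holes = [0] * (hi - lo + 1)      # one counter per possible value
    for v in values:
        holes[v - lo] += 1           # addressing, not comparing
    result = []
    for offset, count in enumerate(holes):
        result.extend([lo + offset] * count)
    return result

data = [8, 3, 2, 7, 4, 6, 8]
print(pigeonhole_sort(data, 2, 8))   # [2, 3, 4, 6, 7, 8, 8]
print(sorted(data))                  # comparison-based sort, same order
```

The speed difference Fuller mentions for pre-ranked search results has the same shape: when the addressing schema is known, results can be allocated directly rather than generated by comparison on the fly.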
MF: Comparison essentially involves the allocation of a position on the basis of a greater than or less than calculation. Whilst it is tempting to make the assumption that simply because something is sorted by comparison it is reduced to a place within a schema of greater or lesser rank, this would be to over-estimate its effects and possibly to misrecognise the importance of the process of being sorted as significant in itself before a place in such an array is determined. Ranking can be an extremely useful effect in combination with a queuing system or resource allocation process as a way of entraining what is ranked. Ranking regimes that are active throughout the differential ranking of interacting entities of different scales are inherently interesting.
MF: As an example, the ranking of academics by numerous interacting rank-based mechanisms (such as those of scholars, departments, institutions, articles, citations and journals) confirms the benefit of such approaches in terms of the simplification of the evaluation of research into a quantifiable metric. The ease with which such a system can be interpreted and summarized allows all positions within it to adapt to, and canalize, the required behaviour. Fine-tuning of results can be achieved by more obscure means of handling, such as those evinced by social networks.
MF: (We have yet to see the ranking of friendships on Facebook, but this is definitely something that would occur, given the generalised form of ordering(?) that is the main means of social interaction there.)
MF: All forms of sorting require the use of resources. In resource-constrained environments - in choosing which sort to adopt, in testing which sort is being applied or to which one is subject, or in estimating whether the employment of a sort of any kind is useful - it is advisable to evaluate its implications in terms of calculation and processing. Because sorts imply such costs, they are often identified as implying a deliberate sacrifice of resources, especially time, at the altar of rather obscure gods.
MF: As an important dimension of the experience of sorting, this is itself something to take into account. Here, the deployment of sorts can act usefully as a form of immobilisation, an occlusion of the identification of the beneficiaries of the sorting process, or as a means of generating support for new resource requirements. (So if you look at manuals of operations research for management purposes, queueing systems are often used specifically to slow customers down, to slow the movement of people through a building.)
MF: Whilst in some cases the least number of comparisons should be aimed at for the sake of efficiency, each opportunity for ordering is also one that should be taken as a test of the worth of ordering in itself, and should therefore be evaluated carefully.
MF: An example of the unpredictable results of the introduction of sorting practices is to be found in the work of postal delivery. (If we think of the postal system as a form of temporary mass archive - of things that only exist in the archive for a few days - we can see its relationship to the question of the archive.) By its very nature the work requires numerous stages of sorting. In the UK at least, the number and range of addresses to be delivered to is generally known, and fixed into the "frame" (a metal frame which has a range of addresses attached to it) used to position the run of things to be delivered during their preparation in the sorting office.
MF: One exemplary factor that complicates the work is often the uneven physical distribution of the addresses. (Addresses aren't made out in a physically uniform state.) A street may be laid out in a higgledy-piggledy fashion, various plots of land having perhaps been developed at different times, being of different sizes, or arrayed non-uniformly due to natural features (such as hills, etc). Working out the optimal route to take may be further complicated by many factors (such as the slope of the ground or the presence of parcels in the delivery load).
MF: Thus, every person delivering post experiences their own daily version of the Travelling Salesman Problem as they move from one delivery point to another. (The Travelling Salesman Problem is the classic problem in computer science in which the shortest path through several different points is to be calculated.)
MF: Abstractly, this problem is usually understood to be resolvable only in exponential time, but it is solved by the tacit knowledge and the labour of the postal worker who knows and sorts the route. Attempts to automate the process of sorting and route planning in ways that marginalize or contradict this local and habitual knowledge raise a number of problems that are exploitable as stratagems, but that militate against an optimal postal service. (So for the last 5 years the UK postal system has attempted to introduce a system called Pegasus, which automates postal delivery and deskills the postal worker. Each time they introduce it, it fails - basically because the streets are laid out in such a manner that such automatic processing is not possible.)
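The brute-force version of the problem the postal worker solves tacitly can be sketched in Python (the delivery coordinates are hypothetical): every permutation of the remaining points is tried, so the cost grows factorially with the number of addresses, which is why automation struggles where habitual knowledge does not.

```python
from itertools import permutations
from math import dist, inf

# Hypothetical delivery points on a street laid out non-uniformly.
addresses = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]

def route_length(route):
    """Total distance of a closed route, returning to the starting point."""
    return sum(dist(route[i], route[(i + 1) % len(route)])
               for i in range(len(route)))

def brute_force_tsp(points):
    """Try every ordering of the remaining points: (n-1)! candidate routes,
    hence the exponential cost the postal worker's tacit knowledge sidesteps."""
    start, rest = points[0], points[1:]
    best_route, best_len = None, inf
    for perm in permutations(rest):
        route = (start,) + perm
        length = route_length(route)
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len

route, length = brute_force_tsp(addresses)
print(route, round(length, 2))
```

With five points this tries only 24 routes; at thirty delivery points the count already exceeds 10^30, which is the scale of the gap between the abstract problem and the sorted habitual route.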
MF: The case of the trickiness of the postal sort reminds us that the virtue of a stratagem must not be mistaken for an illusory efficiency. (Efficiency is often only a secondary result of effective operations.) It also makes evident, however, that certain problems of sorting can be offloaded by such means. (The problem of coping with the inefficiency gets shifted onto others(?), away from the machine.) The efficient circulation of an illusion is something to be appreciated.
MF: So, in terms of how these relate to the question of the archive: how do data structures and sorting algorithms become active members, active agents, in archive systems? What we're trying to suggest is that not only should the material form of the software be taken seriously, but there is also a kind of gaming of the system to be taken part in. There's also a kind of manic dimension, an absurdist dimension, of the archive - of the systems that we introduce to bring order to archives, to make efficient the questions of storage and ordering.
MF: There's also a sense in which the question of the archive operates at many different scales - the question of content, of logical structure, of semantic structure, of what counts as inside or outside - and these are often handed over to formalised systems. Many projects here are trying to avoid these kinds of issues, tending to go towards lower-tech and more human-interfaced systems.
MF: There are many other ways in which archives - search engines in particular - absolutely have to formalise their relationships to what they index and what they archive. And here the questions of data structures and sorting algorithms have a particular kind of valence(?).
MF: And thirdly, we wanted to point out how formalisms are not simply those of category and enumeration; they are also creative, and they intersect with entities that enter into their domains in new ways, producing new forms of dissonance, new forms of generativity, new forms of problem - new forms of breakdown that we're interested in working hard to explore.
MF: Okay, thanks.
MF: Okay, it's pretty geeky stuff, so if you did get a chance to sleep during that, that's very good; otherwise, if you have something ...
Q(ZC): Are we running out of time?
- No no -
Q(ZC): Do you think the way out is to make the process and method more transparent? Because in a way this is true of the individual sorter as much as the computer sorter. Do you agree?
MF: I think transparency is one of the stratagems we need to be suspicious of. It's something that has become one of those words that one immediately doubts, because it is said with such good intention that everyone from governments to hackers is committed to transparency. And when this is (?)... about openness, about transparency, one immediately becomes suspicious. The question of openness and transparency possibly needs to be replied to with something like a questioning or a more (?) sensibility, which is also about trying to look behind, at what is included behind transparency. What is transparency a means of...
MF: For instance, in the UK government there's enormous use of transparency as a way of putting information out in order to obscure other information. So there's a use of leaks, there's a use of off-the-record briefings... there are stratagems for using media; there's a whole use of access and openness which is there to block actual methods(?). So I think the idea of enlightenment...
Q(ZC): Its not just listing your sources though, is it? (?)....data structures...
MF: I think that's one layer. Honestly, I think the more that's available to interrogate - that kind of interrogability - is perhaps more useful... so listing your sources is good academic practice. But when it comes to the exercise of power, one always has to be suspicious of it.
NM: There's an interesting project on transparency in the Indian context, trying to analyse what it might mean in terms of e-governance initiatives, which someone is doing about the Bombay municipality; it's available online. Maybe that would be of interest.
SB: Yeah, kind of related - a similar kind of transparency: the UK government has a bunch of open data - municipal statistics, population statistics. I've looked into some visualisation... There is a movement towards providing open APIs for government data, whether that's the government providing it, or citizens' initiatives - a bunch of people working on a project to provide structured data for...
Q(SB): In India we have the problem that government websites tend to be really bad. There is a lot of information out there, but either it only works in Internet Explorer or something; and also the data structures are really weird, in terms of just the organisation of the site - so you just provide an API that can be accessed. I don't know if there's a difference between the data and the data structure itself... if there is such a thing as raw data as opposed to a data structure, if that makes any sense?
SB: Of course data has to be in a structure, but just your opinion on this idea of government APIs, and this idea of transparency - whether that's something you see as possible or not?
MF: I think it's interesting. I'm working on a project at the moment... well, two projects: one dealing with medical records and the other with municipal records from a particular city council - how to make those tractable to non-numerically-literate members of the population.
MF: One of the things that becomes clear is that most local government health officials are not really analysing the data they are putting out themselves. They are not quite aware of what's there. And there is a problem with, for instance, health data, if you put it out, in terms of its relationship to the other structures it has to deal with. So the problem is not so much with the data becoming publicly available; it's that the democratic cycle in relationship to health is very short.
MF: So if you have a government initiative that attempts to reduce (?)tax by 20%, the optimal moment to intervene is when people are very young, followed through from their teenage years to when they are in their 50s, 60s or 70s. But because the cycle of government is only 4-5 years, the initiatives a government produces in terms of public health policy can only be enacted and in effect in 4-5 year cycles. That effectively cuts it down to about 3 years in which a particular policy has to be devised, implemented and shown to be effective. It actually means there's no possibility of producing health measures that are useful over those longer cycles.
MF: So what happens is that the data that is produced can only correspond with the life-cycle of public health measures, which in turn have to correspond to the life-cycle of governments. So in effect, whether the data is tractable or not, whether it's transparent or not, hardly matters, given its limited value.
Q(AS): There was an idea about accidents in... I'm just wondering whether there is a... within a sorting regime, are there any special kinds of accidents that happen which make things leave that logic - that you've noticed in your work with clinical records? Or if there are these (?) breaks in the system...?
MF: The terror flight stoplist is one example - they do this on the basis of name. So we suggested that people could actually give their children names that can be reproduced, can be doubled... whereas using a passport number or an ID number instead of a name would be much more efficient in stopping that kind of problem. There are senses in which... each new technology introduces some kind of argument, some kind of accident.
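[Fuller's point about name-based stoplists can be sketched in a few lines: matching on names over-matches, because a name can be shared or deliberately duplicated, while matching on a unique identifier flags exactly one record. The names and numbers below are invented for illustration:]

```python
# Two different travellers who happen to share a name.
travellers = [
    {"name": "A. Kumar", "passport": "P1234567"},
    {"name": "A. Kumar", "passport": "P7654321"},
]

stoplist_by_name = {"A. Kumar"}
stoplist_by_id = {"P1234567"}

# Name matching flags both travellers: a false positive for one of them.
flagged_by_name = [t for t in travellers if t["name"] in stoplist_by_name]

# ID matching flags exactly the intended person.
flagged_by_id = [t for t in travellers if t["passport"] in stoplist_by_id]

assert len(flagged_by_name) == 2   # over-matching: the "doubled name" accident
assert len(flagged_by_id) == 1     # exact match
```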
Q(AS): And then the nature of chance changes. I guess the question is about how you will encounter that in your ... what kinds of (?) ?
MF: One of the things we tried to talk about, with the current data structure and sort algorithm, is that each sort doesn't happen - each algorithm doesn't happen - in a clean space. It happens in a context: on particular software, on an operating system, with different sizes of data to be sorted; many other factors are operative at the same time, making each iteration of the software unique.
MF: Especially when you get into more processes(?) - on the same computer at the same time - the kinds of chance, the modes of accident, are ramified. So there's one kind of classic form. But error is always something that's seen as humanising technology, by artists for instance. It makes it more like humans, it makes it more cosy... as if, if we could embrace error and dysfunction as an aesthetic form, then we could humanise the machine. What we argue instead is that, rather than humanising it and making it more cosy, we look at its potential to be exploited by power.
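[The claim that no sort happens in a "clean space" can be illustrated in miniature. Even a deterministic algorithm like Python's built-in Timsort is sensitive to the prior arrangement of its input: because the sort is stable, records with equal keys keep their incoming order, so the output depends on the history of the list, not only on its contents. A small sketch:]

```python
# Two records that compare equal on the sort key "score".
a = {"name": "x", "score": 1}
b = {"name": "y", "score": 1}

# The same multiset of records, arriving in two different prior orderings.
history1 = [a, b]
history2 = [b, a]

key = lambda r: r["score"]

# Python's sort is stable (Timsort): ties keep their input order,
# so the "same sort" yields a different output for each history.
out1 = sorted(history1, key=key)
out2 = sorted(history2, key=key)

assert out1 == [a, b]
assert out2 == [b, a]
assert out1 != out2
```

[This is only the tamest version of the point: with concurrent processes, memory pressure and input size, the context-dependence multiplies.]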
Q(RP): There seem to be both - the intentional and the unintentional state - (when you were responding to questions)... something that's obfuscatory would be using (?) obfuscatory power. But since you came out with the idea of the state's (?) knowledge, I was wondering how you worked that out. In response to questions you were saying you can present transparency as a way to avoid giving out all kinds of information. Do you see that as something built into (?)?
MF: I think that's the classic model: the state obscures the truth. One thing we find now, particularly with WikiLeaks as an exemplary case, is that the state obscures its knowledge in order to obscure the fact that it knows nothing. It is fundamentally ignorant; it has stupidity and suspicion as its basic core mechanisms. And going to other areas, there's a lot of research done on ignorance as a working method for (?).
MF: If you look at the kind of tablets and stuff my doctor gave me to come to Mumbai with... (he is from Bombay, so I think he's out there having a joke at my expense). But these kinds of tablets, which he said 'you may have these results, these side-effects; you have only a one in a hundred(?) chance of getting this, a one in a thousand chance of getting this, and a one in ten thousand chance of dying immediately on touching them(?)'. You know, clearly this (?) is a very probabilistic set of operations, and it's all kind of off-loaded onto the user, which is where the (?) idea of risk comes in. And how knowledge and (inaudible)...
RB: New complicities at work between users and the (?), these apparatuses of assumed knowledge, which are not knowledge but apparatuses of ignorance. That's like a placebo, where you know this is not going to help you, but you take it. In a sense, you have been complicit in what your doctor has (?). So it's a tricky thing, and this whole business between the old demarcations of exploitation and procreation(?) - they blur... because we're partaking of this ignorance, thinking it's going to be good for us.
MF: Yeah, one of the reasons why we look at management theory so much, is this form in which the economy of stupidity takes place.