AUSTRIAN ACADEMIC: Conjectures about "Y2k-effects turned out to be mostly misguided"

greenspun.com : LUSENET : Poole's Roost II : One Thread

AUSTRIAN ACADEMIC WHO DEMOLISHED GWU's so-called "ideas".

LINK

http://www.gwu.edu/~y2k/categories/doublebind.doc

Y2k-effects turned out to be mostly misguided

Y2K - “TEN MONTHS AFTER”: TWO “DOUBLE BINDS” AND A “DOUBLE BLINDNESS”



Karl H. Müller

 

Head of the Departments for Political Science and Sociology

Institute for Advanced Studies, Stumpergasse 56,
A-1060 Vienna, Austria

 

 

OVERVIEW

 

 

This paper is devoted to a detailed ex post analysis of the y2k-problem as well as to new ways of risk-assessment for contemporary knowledge societies in general. Basically, the paper tries to establish three main points. First, a large number of essential y2k-propositions, put forward by the author throughout the period from 1997 to 1999, can be validated by their ex post outcomes. Most notably, the core characterizations of y2k as the first “global challenge” of a new type of “knowledge society”, exhibiting new forms of “co-ordination problems” and entirely new “knowledge-based” “risk potentials”, can and should be upheld despite the almost universal marginality of y2k-induced disruptions and discontinuities. Second, two important groups of ex ante expectations on potential y2k-effects turned out to be mostly misguided. In both cases, however, the reason for the misjudgment lay in a homogeneous set of “alarmist” empirical assessments with respect to the high potential for regional y2k-failures and their subsequent global “repercussions”. Third, the scientific “y2k-learning curve”, in sharp contrast to the societal “learning curves”, has followed a constant path along the “bottom line”. It will be argued that the ex ante as well as the ex post attitude on the part of the scientific system was an almost unanimous lack of interest which, by itself, counts as a new risk potential inherent in today’s “knowledge societies”.

 

 

DIMENSIONS OF THE Y2K-PROBLEM: TEN ASSESSMENTS ex ante and ex post
 

At the outset, ten basic propositions, both from an ex ante and an ex post perspective, can be put forward which characterize essential dimensions of y2k-problems[1] and their impact on societal development in general before and after the roll-over date. Table 1 summarizes the central assertions, which additionally incorporate new types of societal risk assessments beyond the y2k-problem.

 

Table 1: Ten Basic y2k-Assessments (ex ante and ex post)

 

Validated ex ante-Assessments

 

(1)           The y2k problem can be considered as the first major challenge of modern knowledge societies or, alternatively, of contemporary “Turing societies”.[2] The challenge has been global and has run throughout all “Turing societies” around the world. Moreover, the challenge was universal and affected industrial enterprises, the service sector, utilities and infrastructure, private households, and local and state administrations. Thus, y2k should be viewed as the first universal and global coordination problem for “Turing societies”.

(2)                 The challenge posed a new type of societal coordination problem which is characteristic for “Turing societies” and which has not been encountered in previous societal formations.

(3)           The y2k-problem resulted from an erroneous “encoding” or “embedding” of time-measurements and time-coordination into the basic architecture of “Turing societies”. More specifically, y2k was the outcome of codifying time as a relatively short ”cycle” within the new machine code bases.

(4)           The y2k-failure was a self-inflicted and self-propagated ”error” in the machine code. This ”error” can be qualified as a typical ”frame problem error” in which available knowledge components with respect to the “trivial” operations of Turing programs and Turing machines have been completely “discounted”. 

(5)           The y2k-challenge belongs to the class of most complex and most densely coupled socio-technological problems. It affects the machine code bases and their embedded hardware components, i.e., chips throughout the socio-technical systems and infrastructures of contemporary Turing societies. In this sense, y2k must be considered as a rare challenge across the actor networks and the knowledge bases of contemporary “Turing societies”.

 

Non-Validated ex ante-Assessments

 

(6)           Evaluated in terms of risk potentials, y2k has been viewed by most empirical ex ante accounts as the first coordination problem of the type SP[t < t(s)] < RP[t < t(s)]. This inequality states, quite generally, that the available societal substitution power (SP) over a very short (one month) or short (one year) time horizon t < t(s) is smaller than the y2k-induced risk potential (RP) or, alternatively, the expected risk incidence. As it turned out, this inequality has to be completely reversed into SP[t < t(s)] > RP[t < t(s)] for all points in time prior to December 31, 1999.

(7)           The y2k-failure has been qualified as potentially "central" both to the exchanges and to the transfers of actor networks and of knowledge pools within contemporary “Turing societies”. Moreover, due to the shortage of time left, the potentially “central error” has been characterized as ” intractable”. While y2k was an “intractable” problem, the y2k-induced damages were sufficiently far away from becoming a potentially “central problem” for contemporary “Turing societies”.

 

Ex post Assessments of the Science-System:

 

(8)           The scientific system in general has acted ex ante in a perfect “double bind”. Since y2k did not appear as a major societal “risk factor”, it would have been necessary to deal with it on a new platform or in an inter- or transdisciplinary fashion. But there was no ex ante need for a comprehensive inter- or transdisciplinary analysis since y2k did not show up as a major “risk element” within the established disciplinary boundaries.

(9)           For the ex post period, the scientific system has followed another “double bind”, this time with a seemingly better empirical justification. Since y2k did not appear as a major societal “risk factor” within the established “disciplinary matrices” after January 1, 2000, it would have been necessary to deal with it in an inter- or transdisciplinary fashion. But there was no ex post need for an inter- or transdisciplinary analysis since y2k did not exhibit even minor disruptions within the “metabolism” of a globalized economy and a globalized “world society”. 

(10)         Contemporary “Turing societies” must be considered as exceptionally weak in terms of the scientific monitoring and the scientific observation of the new knowledge infrastructures. The relevant empirical ex ante data for the international or the global y2k-status have been collected by non-scientific bodies and turned out to be misleading in at least two core aspects, namely with respect to their potential impact assessments and with respect to their rankings and hierarchies. Ex post, the almost perfect “roll-over” as well as the almost perfect transition from February 29 to March 1, 2000 has effectively stopped any data production on y2k-induced failures both from the scientific and from the non-scientific world.

 

Within the next pages, these ten crucial y2k-propositions will be further substantiated and justified. In doing so, a new type of societal risk analysis will gradually emerge which should be implemented within the next years in order to lift the “veil of ignorance” (John Rawls) from the actual risk profiles of the knowledge infrastructure and of the information and communication technology within today’s “Turing societies”.

 

 

Five Validated Ex ante-Assessments
 

The starting point for the first part lies in a short re-assessment of major y2k-propositions which can be upheld despite a spectacularly unspectacular “roll over” from December 31, 1999 to January 1, 2000 and from January 1 up to the present day.

At the outset, the first proposition is devoted to the scope and to the dimensions of the y2k problem.[3] Y2k has had its origins in the machine codes or, alternatively, in “Turing programs”[4]. Due to the embeddedness of Turing programs in steering and electronic control processes, the y2k challenge had been situated at the hardware level as well. Moreover, due to the high degree of diffusion of Turing programs and embedded chips across the socio-technical systems in agriculture, industry and services around the world, the y2k problem had affected the fundamental metabolic exchanges and transformations within global market networks and other global societal network formations. (Assessment 1) 

Second, y2k must still be viewed as a new type of societal coordination problem which combines three previously separate features, namely complete predictability, a necessity for effective problem-solutions and a universal and global threat or involution potential. In a morphological manner, Table 2 highlights different groups of societal coordination problems.

From Table 2, the notion of ”transferability in time” requires some additional comments. In general, a societal coordination problem is to be qualified as “time-transferable” if it can be delayed or reproduced in time without specific temporal boundary conditions or limits. Take unemployment as a reference case: a substantial reduction of unemployment is only one trajectory among many possible national pathways. In principle, unemployment may persist in time indefinitely, sometimes at high levels, sometimes at lower ones, sometimes rising, sometimes falling. In this manner, consumption of heavy drugs, fatal traffic accidents, violent crimes and many other societal phenomena are to be qualified as time-transferable coordination problems since they are reproduced anew from year to year without any temporal limit imposed on their effective reduction or abolition. Consequently, y2k has belonged to the rare occurrences of non-transferable coordination problems, having an exact and insurmountable ”expiration date”, namely the time interval from 23:59 on December 31, 1999 to 0:00 on January 1, 2000. 

 

Table 2: Major Types of Societal Coordination Problems

 

                                               Predictability
                               Yes                              No
                       Local         Global             Local          Global

Transferable
in Time              Problem I     Problem II         Problem III    Problem IV

Non-Transferable
in Time              Problem V     Problem VI         Problem VII    Problem VIII

 

Moreover, y2k-solutions had to be of an effective nature, too. An ”effective problem solution” is to be understood as a substantial reduction or dissolution of a specific problem. To be more concrete, an effective solution of unemployment implies a radical reduction to frictional unemployment or even a dissolution of the number of involuntarily unemployed persons. An effective solution of heavy traffic accidents lies in the radical reduction of accidents below a marginal and irreducible threshold value. In this sense, y2k required effective problem solutions for each network actor which had to be in operation prior to a non-transferable point in time.

Likewise, non-transferable societal coordination problems with a threatening global impact had been, so far, of an unpredictable nature only. Take fatal high-technology accidents, earthquakes, floods or other catastrophic events as ”paradigmatic cases”: in all these instances the element of non-predictability plays a significant role. An unpredictable high-technology disaster like Seveso or Three Mile Island imposes a large number of immediate and non-transferable coordination problems like rescue operations, safeguarding the social and natural environment and the like. In fact, advanced societies are equipped with sufficiently developed protective capacities which safeguard their normal functioning in the case of minor or even medium disruptions.

Viewed in this light, y2k still must be considered as an entirely new type of coordination problem, being totally predictable, requiring effective solutions and being non-transferable at the same time. Additionally, due to its machine code base, y2k has posed a new coordination problem both for the actor networks and for the knowledge bases of contemporary “Turing societies”. (Assessment 2)

Furthermore, the core of the y2k-problem has consisted of a highly revealing inversion of the traditional modes of time-encoding and time-measurement. More specifically, towards the end of the so-called “Piaget societies”[5], time had been structured or, alternatively, ”structurated” (Anthony Giddens) around circles of minutes (60 seconds), hours (60 minutes), days (24 hours) and years (365 days), together with a linear ordering of years, relying on a scale with a strange reference point (the transition between 1 B.C. and A.D. 1). Despite this heterogeneous set of counting devices, with their origins in Egyptian, Mesopotamian, Greek and Roman time cultures, the basic units of time measurement have been set, towards the transition from Piaget to Turing societies, in an exact and uniform manner. The second was defined as "9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" (Barnett 1998:157), and the day was defined "bottom up" as 86,400 atomic seconds. In this way, a sound and, above all, failure-free basis for measuring time was established. Moreover, the introduction of radioactive clocks made it possible to determine the age of the earth itself in a sufficiently precise and consistent manner, in the magnitude of 4.5 billion years (± 100 million years). In this way, a heterogeneous mix of circular-linear encodings, together with administrative synchronizations like the global agreement on 24 time zones[6], led to a uniform and successfully embedded "world-time" for Piaget societies.

While, by the end of the 1960's, time had thus been successfully encoded in a circular-linear fashion, the machine-code programs started to implement either a relatively short two-digit (one-hundred-year) or a relatively long four-digit (ten-thousand-year) linear-circular version. Thus, the encoding of "real time clocks" within the Turing program bases has been undertaken, in both the long and the short version, as a linear sequence of seconds/minutes/hours/days/months/years, either within a two-digit year counter and, thus, a one-hundred-year circle (the short version with the imminent y2k-problem) or within a four-digit year counter and, thus, a ten-thousand-year circle (the long version with a far-away y10k-problem). In both cases, the circle repeats itself indefinitely into the future. In addition, the built-in temporal machine code circle was strictly ”memory-free”, having no additional ”counter” at its disposal for the number of completed circles. Thus, time differences within a single circle were recorded in the traditional and well-established ways of Piaget-societies, while time differences between two circles posed all kinds of anomalies. The single-second jump from 23:59:59 on December 31, 99 (circle I) to 00:00:00 on January 1, 00 (circle II) became the maximum time interval for this type of temporal encoding, while the long interval between 00:00:00 on January 1, 00 (circle I) and 00:00:01 on January 1, 00 (circle II) was recorded as a single-second jump only. It must be added that the y2k-paradoxes with respect to time differences were structurally similar to the ”Goodman paradox” on induction, which had been generated via the introduction of new time-dependent predicates.[7] (Assessment 3)
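To make the ”memory-free” two-digit anomaly concrete, the following small Python sketch (an illustrative addition, not part of the original paper; the function names and the 1900-1999 interpretation window are assumptions made purely for illustration) re-computes the two intervals described above as a two-digit counter would see them.

    from datetime import datetime

    def two_digit_diff_seconds(t1, t2):
        """Difference t2 - t1 in seconds as seen by a 'memory-free' two-digit
        year counter: both timestamps are re-interpreted inside a single
        1900-1999 circle before subtracting."""
        def to_circle(t):
            return t.replace(year=1900 + t.year % 100)
        return (to_circle(t2) - to_circle(t1)).total_seconds()

    before = datetime(1999, 12, 31, 23, 59, 59)   # 23:59:59 on December 31, 99 (circle I)
    after  = datetime(2000, 1, 1, 0, 0, 0)        # 00:00:00 on January 1, 00 (circle II)

    # Real elapsed time: one second.
    print((after - before).total_seconds())             # 1.0
    # Two-digit view: the one-second jump spans almost a full circle backwards.
    print(two_digit_diff_seconds(before, after))         # about -3.16e9 seconds
    # Conversely, one hundred years plus one second collapse into a single second.
    century_later = datetime(2100, 1, 1, 0, 0, 1)
    print(two_digit_diff_seconds(after, century_later))  # 1.0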

With respect to the fourth assessment, the type of error underlying y2k, qualified as a ”frame problem”, requires a substantial amount of justification and comment. ”Frame problems” (see e.g., Dennett 1986, Lormand 1991) are encountered, very generally speaking, in all those instances where two ”knowledge domains” K1 and K2 are both relevant for a decision configuration D. By a cognitive integration failure, K1 is used for the actual decision procedure, neglecting K2 entirely either by "forgetting" it or by "discounting" it as irrelevant. To provide a concrete example of the "forgetful" variety, suppose someone wants to go shopping either tomorrow (A1) or the day after (A2). One thinks about the possible advantages and disadvantages associated with these two alternatives, utilizing the relevant knowledge components K1. Finally, a decision (D) is made to choose the latter alternative (D(A2)). For the selection of (D(A2)), another knowledge component (K2) would have been highly relevant, namely the fact that there is a holiday in two days from now and that shops will be closed. In arriving at (D(A2)), however, (K2) did not enter into the decision configuration and was ”forgotten” and ”left out”. Quite generally, ”frame problems” arise out of an insufficient or an erroneous integration of knowledge components.

As such, y2k must still be qualified as a ”frame problem” of the "conscious" type, generated through an insufficient integration of future time horizons and, more generally, of time and time-embeddedness into the present decision configuration. During the 1960’s and 1970’s, the ”knowledge” (K2) of the four-digit change in dates at the end of 1999 was trivially available and distributed throughout the entire community of programmers, technicians, business managers and the like. Nevertheless, immediate restrictions in computer storage capacities, cost advantages and traditional cultural practices (K1) generated such a large momentum that K2 was somehow ”left out” and was apparently considered as irrelevant for the time being. Moreover, y2k still reveals an astonishing insight into the time horizons of human decision procedures, since even knowledge held with certainty about the future, like the coming four-digit change in dates, could be discounted in 1995, 1996, 1997 and even in 1998 either as irrelevant or, more to the point, treated as time-transferable. Within the concrete programming settings of the 1960’s and 1970’s, y2k was considered a time-transferable ”error” or ”shortcut” whose solution, due to the triviality of the y2k-conversion as an isolated problem, was postponed to the period close to the millennium change. (Assessment 4)
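As a toy illustration of this ”frame problem” reading of y2k (a hypothetical Python sketch added here, with invented cost figures; it is not taken from the paper), a decision procedure that only integrates the immediately salient knowledge K1 selects the two-digit encoding, while the same procedure selects the four-digit encoding as soon as the equally available knowledge K2 is brought into the decision configuration D.

    # K1: immediately salient knowledge (storage cost per record, in bytes).
    K1 = {"storage_cost": {"two_digit": 2, "four_digit": 4}}
    # K2: equally available but "discounted" knowledge (cost of the certain
    # roll-over failure, in arbitrary units; an invented figure).
    K2 = {"rollover_failure_cost": {"two_digit": 10**6, "four_digit": 0}}

    def decide(knowledge_components):
        """Choose the encoding with the lowest total cost over the knowledge
        components actually brought into the decision configuration D."""
        options = ("two_digit", "four_digit")
        def total_cost(option):
            return sum(costs[option]
                       for component in knowledge_components
                       for costs in component.values())
        return min(options, key=total_cost)

    print(decide([K1]))        # 'two_digit'  - K2 forgotten or discounted
    print(decide([K1, K2]))    # 'four_digit' - K2 integrated into D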

To continue the ex post validation of relevant ex ante y2k-propositions, one can shift to Table 3 which exhibits, following basically a taxonomy developed by Charles Perrow (1984), two basic dimensions for evaluating and for distributing socio-technical systems. According to Table 3, one is invited to distinguish between four different clusters of socio-technological ensembles, namely between linear/loose, linear/tight, complex/loose and complex/tight systems. Moreover, each of the attributes can be scaled according to different degrees so that one is confronted with a continuum ranging from minimally loose to maximally tight on the one hand and from minimally linear to maximally complex on the other hand. (Perrow 1984:97)

 



-- Anonymous, January 07, 2001

Answers


 

Table 3: Two Dimensions for Socio-Technological Systems

 

                         Horizontal Dimension: Complexity (Number of Components)
                               Linear                        Complex

Coupling        Tight          Dams                          Nuclear Power Plant,
(Vertical                                                    y2k-Area
Dimension)
                Loose          Most Manufacturing            Multi-Goal Agencies,
                                                             Universities, R&D Firms

 

Using these two dimensions of coupling and complexity, the y2k problem still must be qualified as the most complex and most tightly coupled technology problem for a very simple reason. On the one hand, y2k affected all possible combinations of technological coupling and complexity and extended over the whole range of very complex and tight ensembles like nuclear plants or nuclear weapons, of linear and tight socio-technological configurations like dams or continuous processing, of loose and complex units like R&D firms or multi-goal government agencies and, finally, of loose and linear assemblies like most manufacturing. (Perrow 1984:97) In this sense, y2k has been “distributed” throughout the entire field of Table 3. On the other hand, y2k had a direct impact on the connections between these four possible socio-technological configurations as well. In this sense, y2k belongs to the special or "singular" class of most complex and most tightly or densely linked socio-technological problems. (Assessment 5)

 

 

Two Non-Validated Groups of ex ante Assessments

 

Within the present section, two groups of ex ante assessments will be presented which, due to the events after the roll-over date, have become obsolete. It should be added, though, that these two expectations stood at the core of potential “y2k-effects” and must, thus, be considered of vital importance. Nevertheless, neither expected effect produced any tangible traces after January 1, 2000.

The first group of misleading or false ex ante propositions is linked with the scope and the dimensions of the y2k-risk potentials within contemporary “Turing-societies”. Following the available empirical assessments, it was relatively straightforward to demonstrate that problems of the y2k-type should assume a large number of new and inverted relations between societal risk incidences, societal risk potentials and societal substitution powers. Following the second y2k-proposition from Table 1 as well as some plausible definitions for risk incidence (RI) and substitution power (SP), one could “demonstrate” that the substitution and, thus, the coordination efforts must be operating under the following inequalities.

Traditionally, one could postulate the following inequalities for disasters in socio-technological systems (breakdowns of nuclear power plants, explosions in chemical plants, refineries, etc.), for “natural catastrophes” (earthquakes, floods, storms, etc.) and for socio-ecological disruptions (famine, epidemics, etc.).[8]

 

Table 4: Basic Inequalities for "Normal Accidents" in Piaget-Societies in the 19th and 20th Century

 

                                               Temporal Dimension
                     Very Short Term            Short Term           Medium Term
                     (days/weeks/months)        (< one year)         (< three years)

Local                RI > SP                    RI < SP or           RI < SP or
                                                RI > SP              RI > SP (seldom)

Global               RI < SP                    RI < SP              RI < SP

 

From Table 4, one obtains basically the same result which can already be found in Table 2 on the different types of coordination problems. Local catastrophes or disasters like large earthquakes, floods and the like have had a considerable local impact for (very) short, short or, although seldom, long-term periods, but they did not exert any significant global effect on the overall performance of actor networks world-wide. In this sense, Piaget-societies, which in the course of the last five hundred years had been undergoing a self-organized process towards "globalization", were exposed, under conditions of "normal accidents", to local, regional, national or restricted international crises only.[9] Towards the end of the “Piaget societies”, one finds an obvious exception to the general inequalities in Table 4, namely the military build-up and the military destruction potential which, in the case of World War II and the new generation of nuclear weapons after 1945, have effectively transcended the inequalities of Table 4.

The traditional picture of regionally contained risks changed substantially, according to the ex ante y2k-proposition, for “normal accidents” within contemporary “Turing societies”. With reference to y2k, the basic relations and inequalities in contemporary Turing societies were assumed to be of a different format.



-- Anonymous, January 07, 2001

 

Table 5: Basic Inequalities for "y2k-Accidents" in Turing Societies for the Period from 2000 to 2003

                                               Temporal Dimension
                     Very Short Term            Short Term           Medium Term
                     (days/weeks/months)        (< one year)         (< three years)

Local                RI > SPW or                RI < SPW or          RI < SPW or
                     RI < SPW                   RI > SPW             RI > SPW

Global               RI > SPW                   RI > SPW             RI < SPW or
                                                                     RI > SPW

 

The first obvious inversion between Table 5 and Table 4 lay in the new relations between local and global. While y2k might assume different patterns at the local levels, it should be qualified, by necessity, as a global challenge and a global crisis. This part of the ex ante y2k-assessment can still be upheld “seven months after”. But the other consequences resulting from Table 5 were far removed from the ex post chains of events. After January 1, 2000, the occurring risk incidences never exceeded the available global substitution powers. According to global estimates, partly from government agencies (Jaqueline Williams-Bridgers for the US State Department 1999) and partly from international consultants (Louis Maroccio 1999 (Gartner Group)), there should have been a substantial number of very short-term risk incidences exactly in those areas for which only low or minimal substitution processes could take place, namely in the domain of non-market networks for infrastructure and in the domain of government networks.

Moreover, the most "risky" inequality in Table 5 was the second relation, in the global/short-term field, for the entire period of the year 2000, where the risk incidence had been assumed to be higher than the actual global substitution power. The main reasons for postulating this inequality were partly empirical, partly theoretical. From an empirical point of view, a US State Department report had identified 88 countries with a medium or high risk potential in one of five vital infrastructure areas (energy, finance, transport, water, telecommunication).[10] The two theoretical reasons given for the global short-term inequality in Table 5 are still valid ones, but they depended crucially on a sufficiently adequate empirical assessment. On the one hand, the distribution of failures was assumed not to be uniform over time, but centered around a clearly recognizable peak in the vicinity of the "rollover date". This, in turn, implied that there had to be, by necessity, a peak period for which the inequality would have reached its first clearly recognizable negative “maximum”. From general network theory, one could infer, additionally, that for configurations of this type "downward oscillations" should become the most likely trajectory of the overall network performance. On the other hand, a massive failure peak should bring with it negative or "risky" secondary, tertiary, quaternary, ..., n-ary effects.[11] Thus, a failure in energy transmission between a utility company and a firm will have a negative secondary effect if and only if the output relations of the firm with other firms or with private customers are hampered. Likewise, tertiary effects could be defined in a similarly recursive manner, so that it was relatively clear that y2k-problems should have sent a large number of n-ary "shocks" throughout the global market networks for goods, services and infrastructure, the non-market networks for infrastructure and, finally, the government services. All three reasons combined offered a basic justification for the global short-term inequality RI > SPW. But, since the ex ante data base turned out to be highly misleading, the market networks and the infrastructure nets, both nationally and globally, never experienced any secondary, let alone tertiary or n-ary, consequences.
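The recursive definition of secondary, tertiary and n-ary effects can be illustrated with a short Python sketch (an assumption-laden toy model added here, not the paper's own formalism; the dependency network and the failure rule are invented for illustration): a node fails as an (n+1)-ary effect as soon as one of its suppliers has failed as an n-ary effect.

    from collections import deque

    # Toy dependency network: each node maps to the nodes relying on its output.
    supplies = {
        "utility":    ["firm_a", "firm_b"],
        "firm_a":     ["firm_c", "households"],
        "firm_b":     ["households"],
        "firm_c":     [],
        "households": [],
    }

    def propagate(primary_failure):
        """Breadth-first propagation of a failure: the returned dictionary maps
        each affected node to the order of the effect (1 = primary, 2 = secondary, ...)."""
        order = {primary_failure: 1}
        queue = deque([primary_failure])
        while queue:
            failed = queue.popleft()
            for dependent in supplies.get(failed, []):
                if dependent not in order:
                    order[dependent] = order[failed] + 1
                    queue.append(dependent)
        return order

    print(propagate("utility"))
    # {'utility': 1, 'firm_a': 2, 'firm_b': 2, 'firm_c': 3, 'households': 3}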

Moreover, the effects of y2k-induced damages were described as a "global lottery" with a large number of strange features. First, the participants in the global lottery were assumed to be regions.[12] Additionally, the lottery was assumed to be operative for a 12-month period, starting on January 1, 2000 and ending on December 31, 2000. Moreover, the lottery had a peculiar distribution of gains and losses, with a very small number of "lottery gains" and a large number of "lottery losses". The basic justification for this distribution lay, once again, in the short-term global inequality in Table 5, which implied, inter alia, that the overall global network performance for goods and services for the year 2000 should fall well below the expected or predicted growth values. Once again, well-defined "global lotteries" had not been encountered throughout the entire period of Piaget societies and had, thus, to be qualified as a typical new feature of contemporary Turing societies, their new risk incidences and their new risk potentials. Finally, the strangest feature of the "global lottery" was assumed to lie in the fact that the outcomes of the y2k-lottery should be linked only indirectly to the degree and to the amount of substitution efforts prior to January 1, 2000. While one could establish, on a priori grounds, a significantly negative correlation between the degree of substitution efforts and the subsequent y2k-induced damages, the correlations were assumed to be far from perfect or even far from highly significant. Once again, it was believed that there would be a substantial number of regions with the combinations "high substitution effort/low performance" and "low substitution effort/high performance". Without going into the detailed and still valid justifications for this point, the effective outcomes of the "global lottery" turned out to be significantly different from the ex ante propositions. The first “global lottery” had a unique distribution with not a single loser and, thus, does not qualify as a “lottery” in any meaningful sense of the word. The interesting element, though, was the peculiar fact that the “all win” configuration had been reached largely independently of the amount of y2k-preparations and y2k-adaptations. In this residual sense, terms like "chaotic expectations" or “global time-quake”, with very low values on the damage scale though, still offer an interesting and “deep” summary of the global y2k-conversion processes. (Assessment 6)

The second group of non-validated ex ante y2k-propositions was centered on a relatively new complex framework which could and can be applied both to the actor-networks and to the knowledge pools of contemporary Turing societies. This new multi-component framework for the ”Great Transformations” within modern societies[13] was characterized by two main attributes, namely by metabolic transformations and by maintenance/repair processes. The basic result lay in a theorem which remains, quite obviously, valid irrespective of the actual y2k-outcomes. This theorem states, very generally, that all types of metabolic-repair networks (MR-networks) are faced with two different types of “network failures”. On the one hand, MR-networks can be distributed in a largely independent manner, which implies that the substitution capacity for small network failures is low and that the probability of a complete network collapse is close to zero. On the other hand, highly connected MR-networks exhibit high degrees of substitution capacity and, thus, a low probability of regional network failures. But densely coupled MR-networks show a peculiar feature, namely a so-called “central component” which, due to its non-re-establishable character, has the potential of “crippling” the entire network. Thus, the notion of a potentially ”central error”, having become ”intractable” in the course of the second half of the 1990’s, had been established as an additional ex ante proposition.
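In graph-theoretical terms, the notion of a “central component” can be read, very roughly, as a node whose removal disconnects the remaining network so that no substitution path is left. The following Python sketch (an illustrative simplification added here; the small star-shaped network and all names are assumptions, and the paper's metabolism-repair formalism is much richer) identifies such nodes.

    def is_connected(nodes, edges):
        """Check whether the undirected graph induced by 'nodes' is connected."""
        nodes = set(nodes)
        if not nodes:
            return True
        adjacency = {n: set() for n in nodes}
        for a, b in edges:
            if a in nodes and b in nodes:
                adjacency[a].add(b)
                adjacency[b].add(a)
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adjacency[n] - seen)
        return seen == nodes

    def central_components(nodes, edges):
        """Nodes whose removal leaves the rest of the network disconnected."""
        return [n for n in nodes
                if not is_connected([m for m in nodes if m != n],
                                    [(a, b) for a, b in edges if n not in (a, b)])]

    # A star-shaped infrastructure net: every exchange runs through the hub.
    nodes = ["hub", "energy", "finance", "transport", "telecom"]
    edges = [("hub", "energy"), ("hub", "finance"),
             ("hub", "transport"), ("hub", "telecom")]
    print(central_components(nodes, edges))   # ['hub']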

Seven months after the roll-over, it is quite obvious that the danger of a y2k-induced “central error” has been practically zero for any point in time prior to January 1, 2000. Both the linkage structure within the global MR-network and the risk-potential for major infra-structural network components did not even come close to the critical “threshold values” relevant for a “central error” to occur. The actor network formations as well as the knowledge pools of contemporary Turing societies, both conceptualized as metabolism-repair configurations, were never so densely interwoven as to become non-robust. (Assessment 7)

 

 

Three ex post Assessments

 

The final section of this paper will focus mainly on the science system in general, which has acted and reacted towards the y2k-problem in an almost perfect “lock-in mode”, being stuck within two very similar "double bind" configurations. It will be the main task of the final assessment part to stress three major “science failures” with respect to the y2k-problems. First, the scientific inactivity prior to the roll-over date, while understandable in terms of a “double bind-configuration”, had no cognitive or rational justification and must be qualified as both an irrational and an irresponsible “high risk-strategy”. Second, the scientific inactivity after the roll-over, while at first sight a seemingly rational strategy, can be characterized as questionable, to say the least. Finally, the scientific system as a whole has failed to establish sufficiently reliable mechanisms for observing and monitoring the core “knowledge-based” processes and their ICT-infrastructures. Thus, the science system was and still is confronted with a “double blindness” with respect to a data-based assessment of the y2k-risk potentials within contemporary “Turing societies” both ex ante and ex post.

Before entering into a detailed discussion of the final three assessments, the notion of a “double bind” has to be laid out with sufficient clarity. In logical terms, a “double bind-configuration” is present whenever two conditions are fulfilled simultaneously. Starting from an initial configuration I and from a specific action or action-sequence A, one is led to a special domain D, which must be implicitly or explicitly “accepted” in order to be able to follow A (Condition 1). By “selecting” D, however, a new decision has to be made which, by necessity, leads back to the initial configuration I (Condition 2). Aside from the well-known “Catch-22” episode, typical “double bind-situations” consist in “self-destroying” imperatives like “be spontaneous”, “act autonomously”, “laugh freely”, “withstand orders”, etc. Here, the acceptance of “following an order” leads immediately back to the initial configuration, since “following an order” is incompatible with “spontaneous” or “autonomous” actions, with “free laughing”, with “withstanding orders”, etc.[14]

With respect to y2k, both the ex ante and the ex post double bind configuration assume the same underlying “deep structure”. The initial situation I was characterized by a “big sleep” and by a predominant inactivity within the different segments of the science system. Within the established disciplinary boundaries, y2k did not constitute a challenging scientific problem, a vital threat to the science system in general or a major societal risk factor. Thus, it would have been necessary to deal with the scientific side of the y2k-problem by establishing new inter- or transdisciplinary platforms (D) suited for the purpose of y2k-investigations. But any such initiative was bound to fail since y2k did not constitute, within the boundaries of the available “disciplinary matrices”, a challenging scientific problem, a vital threat to the science system in general or a major societal risk factor.

In this manner, a stable pre-roll-over "y2k-double bind" emerged for the science system as a whole. At this point, it seems worthwhile to point out, first, that the novelty of the y2k challenges “transcended” the existing disciplinary boundaries completely. Viewed in terms of core competencies, no scientific discipline was equipped to deal with the complex ramifications of y2k issues. For the domain of information or computer sciences, y2k was a highly trivial problem to be solved by the end-users of computer hardware and software. For “science studies”, y2k was an ill-defined problem at best since it dealt mainly with programs, codes and program-errors. For technology assessment, it was uncommon to investigate the societal fate of trivial program errors. For economists, the neoclassical background offered no suitable link between model constructions and program errors in time measurement. Social scientists were very slow to adapt to the new features of contemporary “knowledge societies” in general and to the machine code domains in particular. A complete enumeration by disciplines would reveal very clearly that problems of the y2k-type, due to their novelty and due to their “all-inclusive” character, were not part of the established disciplinary core competencies.




Moreover, the available data on the national as well as on the global scene were highly alarming and did not lend themselves to scientific inactivity. Surprisingly, the international and global data published during 1998 and 1999 showed, almost unanimously, a bleak picture. These data differentiated very clearly between different nation groups and between national, regional or sectoral degrees of potential y2k-damages. Except for an “optimistic” global financial data base, most empirical assessments gave the impression that the on-going y2k-adaptation work was too small, too fragmented and too late for a secure roll-over process. Nevertheless, the “sounds of scientific silence” remained the only tune during the entire pre-roll-over period.

Probably the most important cognitive reason for the ex ante “double bind” lay in the apparent triviality of the y2k-conversion problem. More concretely, the y2k problem revealed a fascinating “mimicry” which was partly responsible for the slow societal reaction and largely accountable for the non-reaction of the scientific system. As an isolated problem of program conversion, y2k must be qualified as highly trivial and as effectively solvable. Given a well-defined small program using two-digit year codes, it was a matter of utmost simplicity to transform the program into a four-digit version. Even for large programs, y2k modification meant a tedious and year-long search for two-digit time codes. Nevertheless, the success of this search process could be effectively tested, and the search and substitution processes by themselves did not constitute an exciting scientific issue at all. In this sense, the y2k-problem appeared, at first sight, as a linear and only loosely coupled technology issue, clearly situated in the lower left domain (loosely coupled/linear) of Table 3. But the first assessment part has clearly shown that, beyond the y2k-mimicry, y2k-problems were not confined to isolated conversion routines but had become a highly embedded and widely distributed societal failure of time-coordination. Two-digit codes had been used, according to proposition one, in a vast number of embedded chips for electronic control and steering. Likewise, y2k conversion problems appeared, quite naturally, at the level of the machine codes and, thus, at the program level as well. Consequently, y2k posed the rare occasion of a dual-level technology problem, distributed both across actor-networks and across the knowledge bases.[15] But due to the “y2k-mimicry”, there was, at least at first sight, no sufficient reason to transform y2k-problems into a new "disciplinary matrix" or to investigate y2k in an inter- or transdisciplinary fashion. In a slight variation on “Occam’s razor”, trivial problems should not be multiplied within the scientific domain beyond necessity. And y2k seemed to be a clear case falling well short of any such necessity for problem transformation. (Assessment 8)
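
For readers who did not live through the remediation work, the “trivial” core of the conversion can be shown in a few lines. The sketch below is only an illustration with assumed names and an assumed pivot value; it shows “windowing”, one of the widely used shortcuts alongside the full four-digit field expansion described above. As the paragraph argues, the effort lay not in this local fix but in finding every two-digit field across programs and embedded chips.

# Illustrative sketch (assumed names and pivot value): expanding a two-digit
# year field with a "pivot window", a common shortcut beside full field expansion.
PIVOT = 50  # assumption: 00-49 are read as 20xx, 50-99 as 19xx

def expand_year(two_digit, pivot=PIVOT):
    """Map a two-digit year field to a four-digit year using a pivot window."""
    if not 0 <= two_digit <= 99:
        raise ValueError("expected a two-digit year field")
    return 2000 + two_digit if two_digit < pivot else 1900 + two_digit

# The isolated fix is trivial and easily tested ...
assert expand_year(99) == 1999
assert expand_year(49) == 2049
assert expand_year(0) == 2000
# ... whereas the y2k effort proper was the distributed search for every such
# field across legacy programs and embedded control chips.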

The "ex post-double bind" turned out to be of a similar nature. Y2k did not constitute a major scientific problem, since it did not appear as a major "societal risk factor" within the established "disciplinary matrices". Moreover, due to the triviality of the y2k-conversion-problems after January 1, 2000, it was not worthwhile to transform y2k into a new "disciplinary matrix". Given the five valid ex ante assessments on the “nature” and on the “scope” of y2k-problems, the ex post double-bind is impossible to sustain since y2k constituted a 400 to 500 billion dollar challenge of the new knowledge and information societies or “Turing societies” worldwide. Moreover, given the financial resources, the communication processes at regional, national and global levels, y2k emerged as a huge and unprecedented “learning experiment” for today’s Turing societies. Finally, y2k became a paradigmatic example for the new power and the new potential for self-organizing processes from the community levels up to the global level. Millions of y2k-links, y2k-websites, chats and other new forms of communication have been produced which helped to adapt to the y2k-challenges. Thus, the ex post double-bind configuration qualifies, once again, as a major “science failure” and the scientific ex post inactivity is by itself ill-founded. A thorough ex post analysis on the basic dynamics and on the fundamental reasons for the global “success story” in societal self- organization would be a far more rational path to follow for events of the y2k- magnitude. Nevertheless, the science system in general continues to keep its y2k- spirits remarkably low. (Assessment 9)

Finally, the term "double blindness" refers to the peculiar situation that the data generation processes ex ante and ex post have created and still are producing almost totally "blind dates" or “purely white noise”. The empirical knowledge on the global status of y2k has been, on a familiar scale from 0 (total ignorance) to 10 (perfect information), close to the zero-region throughout the pre-roll-over and the post-roll-over period. While it must be stated very explicitly that the available published data and empirical guesses on part of non-scientific bodies were, in retrospect, highly misleading, the science system in general did not produce any reliable data or data guesses at all. It acted before and after the roll-over in “total blindness” and “ignorance”. More generally, the term “double blindness” should be viewed as an accurate description of the “permanent ignorance” of the science system vis a vis ongoing core processes within the knowledge bases of today’s “Turing societies” and vis a vis the risk potentials inherent in them.(Assessment 10)

In this manner, the y2k-evaluation “seven months after” has come to its necessary final point. By now, it should be obvious that y2k offers a tremendous learning potential with respect to the basic dynamics of today’s “Turing societies” and especially their rapid self-organizing capacities. Moreover, the potential gains from an “upward” scientific “y2k-learning curve” should far outweigh the effort of overcoming both the existing “double binds” and the prevalent “double blindness”.

 



-- Anonymous, January 07, 2001
