Special Characteristics of Computers
The computer itself
is merely an array of electronic circuits that store data in the form of
switches that are either on or off and that change their status as directed
by a program or user.
As a user I
put in data. The computer translates the characters I type (or the words I
speak) into patterns of switches either on or off. I direct the computer to
perform a function on the data, and the computer responds by changing its
switches. I receive the result on the screen, a paper print-out, or a
computer disk, and that's all there is to it.
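To make this concrete, here is a minimal sketch, in Python and purely for illustration, of the translation just described: each typed character becomes a fixed pattern of ons and offs.

    # Each typed character is stored as a fixed pattern of on/off
    # switches; here, the 8-bit ASCII pattern for each letter.
    for ch in "Hi":
        print(ch, "->", format(ord(ch), "08b"))
    # H -> 01001000
    # i -> 01101001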
What is
remarkable about a computer is the fantastic speed with which it can
manipulate vast amounts of data and diverse types of information--especially
when we consider that all that really happens is that switches turn on and
off in changing sequences.
If we want to
store a number, the computer must translate it into a pattern of ons and offs
in a particular set of switches. Once upon a time the operator typed in very
detailed instructions that listed the names of the switches and told them
what to do in a highly arcane code. Today, though codes like this are still
the language that ultimately instructs the computer, operators use familiar
words and phrases which the computer translates step by step into its own
"machine language."
In any case,
if we want to add two numbers that we have stored, we must provide the
computer with instructions that, when turned on by a user, will combine the
two sets of switches representing the two numbers to create a new pattern of
ons and offs. The new pattern represents the sum of the numbers. While we can
do this just by saying, "Add 17 and 483," what the computer does in
response is even more tedious than it appears.
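As a hedged sketch of what "combining the two sets of switches" involves, the ripple-carry addition below works through 17 and 483 one switch position at a time, much as an adder circuit does (the 16-position width is an arbitrary choice for the example).

    def add_bits(a, b):
        # Add two nonnegative numbers one bit (switch) at a time.
        result, carry = 0, 0
        for i in range(16):                  # 16 switch positions suffice here
            total = ((a >> i) & 1) + ((b >> i) & 1) + carry
            result |= (total & 1) << i       # set this position's new switch
            carry = total >> 1               # carry over to the next position
        return result

    print(format(17, "016b"))     # 0000000000010001
    print(format(483, "016b"))    # 0000000111100011
    print(add_bits(17, 483))      # 500
    print(format(500, "016b"))    # 0000000111110100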
For example,
imagine the endless sequences of switches involved in solving a difficult
equation predicting stress lines for a bridge design or the flight trajectory
of a missile. We are talking about millions upon millions of minuscule
electronic switches changing every second. The number of steps needed to get
a result makes an abacus look high-tech. But in speed per step, computers are
unparalleled.
The same
on-and-off patterns can represent letters, spaces, and punctuation marks that
the computer manipulates when an operator punches combinations of typewriter
keys labeled with such commands as "erase a line" or "move a
paragraph." Again, all the patterns of ons and offs must be meticulously
manipulated to create new patterns representing the newly processed text. In
addition, the computer must be able to tell a screen or printer to display
recognizable letters based on these switch patterns. Again, it is all
unrelentingly tedious.
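As an illustration, and not a claim about how any particular word processor is built, the sketch below treats a document as a list of lines held in memory; "erase a line" and "move" are then just rearrangements of that stored pattern.

    lines = ["First line.", "Second line.", "Third line."]
    del lines[1]                  # "erase a line"
    lines.insert(0, lines.pop())  # "move" the last line to the top
    print("\n".join(lines))
    # Third line.
    # First line.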
If people did
mathematical manipulations or text processing by mimicking the steps
undertaken by computers, they would never get anything done. The computer's
approach involves far too many steps for each operation. But computers are so
fast that they can manipulate numbers millions of times faster than the
fastest mathematician. And this speed is so amazing that it often looks as if
computers actually think better than we do.
If you ask an
astute student to add all the odd numbers that aren't divisible by three and
are bigger than 11 but smaller than 201, he or she will take at least a few
minutes to do the job, thinking all the while. But a properly programmed computer
will take the smallest part of a second to do the same task. Or if you ask a
writer to count the number of words in a book and then list them
alphabetically, including the number of times each is used; well, you can
imagine how long that would take. Yet even a contemporary desktop computer
can do this in minutes, including looking up all the words in a dictionary to
find typos and misspellings. Does it think that much better than we do?
No. In fact,
the computer doesn't think at all, ever. The computer plods. It electrically
alters arrays of switches according to precise, unbreakable, methodical rules.
No more.
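Both of the tasks above reduce to exactly this kind of plodding. A minimal sketch, in which the file name book.txt is hypothetical:

    from collections import Counter

    # The student's task: odd numbers above 11 and below 201,
    # excluding multiples of three, summed by blind rule-following.
    print(sum(n for n in range(12, 201) if n % 2 == 1 and n % 3 != 0))  # 6709

    # The writer's task: every word in a book, listed alphabetically
    # with the number of times each is used.
    words = open("book.txt").read().lower().split()
    counts = Counter(words)
    for word in sorted(counts):
        print(word, counts[word])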
But the
computer does this so quickly it seems to think at lightning speed.
High-powered computers solve problems in days that teams of top mathematicians
would struggle over for multiple lifetimes. Even today's desktop computers
allow college students to perform in an evening all the calculations that
dozens of mathematicians would have grappled with for weeks a few years ago.
But the computer doesn't think as well as even one of those mathematicians,
or, by any sensible standards, even as well as a dog or a chipmunk or pigeon.
Perhaps one could argue that the average computer has an intelligence
quotient competitive with that of an inch-worm or some such creature. But
that's all. What the computer is, however, is fast.
Give a chess
player a board with a midgame position and ask her to pick a move. After a
few moments, depending on her ability, she will have one. She has calculated
patterns of possible moves (if I do this, what might the opponent do, and how
do I like the result?), but mostly she has intuitively homed in on the most
likely type of move to make. The good player, in particular, will
examine only a few different options because she will intuitively close in on
a good move just by "feeling" the position's character.
Asked the
same question, however, the computer calculates the pattern mechanically
according to predetermined, unchanging rules put in by programmers who did all
the thinking. But it does this for many more options, perhaps a million more
per second, than any player would ever dream of considering. The computer
doesn't make the intuitive leaps of even a middling player, but nonetheless
the best computers can already beat all but a very few human players.
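For a flavor of that mechanical search, the sketch below plays a toy game rather than chess, so that it stays self-contained: players alternately take one, two, or three stones, and whoever takes the last stone wins. The program examines every line of play to the end, with no intuition anywhere; chess programs apply the same idea, with heavy pruning, to vastly more positions.

    def best_move(stones):
        # Return (score, move): score is +1 if the side to move can force
        # a win, -1 otherwise. Every continuation is checked mechanically.
        best = (-2, None)
        for take in (1, 2, 3):
            if take > stones:
                break
            if take == stones:                    # taking the last stone wins
                return (1, take)
            score = -best_move(stones - take)[0]  # opponent moves next
            if score > best[0]:
                best = (score, take)
        return best

    print(best_move(10))   # (1, 2): take 2, leaving the opponent a losing 8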
Computers do
what we tell them, very quickly. They do not intuit or think. Perhaps the
best indication of this is, ironically, that computers do not make mistakes.
There are no wrong analogies or computer leaps of logic that lead to wrong
results. Computers simply follow orders without knowing what they are doing,
just as a radio follows "orders" to play louder or softer without knowing what "louder" or
"softer" means.
If we tell a computer
to solve a difficult equation for some variable, it does not think about the
equation and assess it, but instead rushes, willy-nilly, to solve it by a
series of steps preprogrammed by a human programmer. If a student tried to
solve problems this way, he or she would fail most of the time. But the
computer rarely fails.
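A sketch of one such preprogrammed series of steps: the bisection routine below "solves" the equation x**3 - x - 2 = 0 not by assessing it but by halving an interval, blindly, fifty times over.

    def bisect(f, lo, hi, steps=50):
        # Repeat one rule: keep whichever half of [lo, hi] holds the root.
        for _ in range(steps):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    print(bisect(lambda x: x**3 - x - 2, 1, 2))   # about 1.5214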
So, computers
can be used to store and speedily manipulate mathematical and linguistic data
according to any rules programmers can embody in computer
"software." For a long time it seemed that computers would only be
good at dealing with numbers. If this had been true, it would have meant
computers could only be used to answer questions that could be quantified. In
a workplace, therefore, we would use a computer for storage and manipulation
of quantifiable data. Having a computer might even propel us to rely
increasingly on quantitative calculations to the exclusion of qualitative
considerations, since the latter would be too difficult to manipulate on the
computer.
And in fact,
however we refine computers, it is true that there will always be a
difference between their quantitative and qualitative capabilities. For the
former, the computer can do every kind of logical manipulation that people
can, only much faster. For the latter, however, though the computer can store
qualitative information in the form of words or pictures, it cannot
manipulate the represented values, judge them, compare them, or extrapolate
from them in even a fraction of the ways humans can. Computers can help people
do these tasks but cannot do them in place of people.
Another
criticism of computers is that they are not "the real thing." When
we communicate through computers we are not communicating directly, or when
we play simulated games on computers we are not playing the real games. But
this is only a problem in one sense. It is true that if we use computer
communications and simulations to take the place of accessible human
communications and real-world experiences, then the "unreality" of
computer activity becomes negative. But if we use computers to facilitate
something akin to human interaction where
otherwise there would be no interaction, and something akin to real
experience where otherwise there would
be no experience, then the computer can enhance communication and
experience.
A last and
more subtle complaint about computers is that people tend to invest them with
unwarranted authority. It is likely a complex product of modern culture and
personality types that people seem disposed either to hate and fear
computers, or, having gotten over that, to attribute to them almost unlimited
capacity for accuracy. Both reactions have a basis in fact, of course, but
both are also exaggerations.
Regarding
hate and fear, the issue is the extent to which computers are harbingers of
justice and pleasure, or of surveillance and alienation. The rest of this
chapter addresses this, so let us pass on it for the moment. Regarding
computer accuracy, we need to say something here.
If we are asking computers to make calculations, no matter how
complex, it is reasonable to have faith in the answers we get, assuming that
the program is well tested and debugged, though in many cases this is a
difficult task and the possibility of "bugs" should not be
minimized. But if we are using a computer simulation to try to understand a
complex real-world situation and predict its possible responses to human
interventions, much more care is
called for. Just because we think we have written a program that embodies all
that is important about the workings of a nuclear plant or an economy, and we
think the program has been successfully debugged, we should not be
overconfident that when the screen tells us the plant will always be safe, or
that unemployment will never exceed ten percent, this will prove true.
Computer
simulations are based on models of real phenomena, and if these models are
not to be misleading, they must capture all the elements of reality relevant
to the outcomes we are interested in. If we do not
understand the phenomena fully enough to capture all these elements, or if
we are ideologically constrained from capturing all these elements, then no
matter how accurately we transfer the model into a computable scheme and no
matter how efficiently and elegantly we write the code for its program, the
computer will yield incomplete and thus inaccurate projections. This is no
different from the simple and obvious truism that whenever anyone tries to
predict how a situation will change in the future, he or she does so based on
assessing some number of variables in light of some beliefs about how their
status affects future trajectories. If these beliefs are wrong or if
important variables are left out, we get wrong answers. Likewise for the
computer. The problem is that with people we anticipate this fallibility.
With computers, if we become entranced with their power, we tend to forget
this fallibility, especially when they tell us what we want to hear, which is
of course because we built what we want to hear into the model they are using. The only
solution is caution induced by recognition of the limitations of model
building.
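An invented illustration of the point, with every number and parameter made up: both toy models below are "bug-free" and run perfectly, but the first omits a variable, a capacity limit, and so projects confidently and wrongly.

    def naive_model(population, years, growth=0.10):
        # Growth and nothing else: the omitted-variable problem.
        for _ in range(years):
            population *= 1 + growth
        return population

    def fuller_model(population, years, growth=0.10, capacity=2000):
        # The same growth, slowed as the capacity limit is approached.
        for _ in range(years):
            population *= 1 + growth * (1 - population / capacity)
        return population

    print(round(naive_model(1000, 30)))    # 17449: runaway growth
    print(round(fuller_model(1000, 30)))   # about 1913: levels off toward 2000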
Important as
these considerations are, ultimately computers deal only with information.
And it is the quality of this information, not computers, that makes the
so-called "computer revolution" a possible thrust toward greater
democracy. For unlike many products of human labor, information does not come
in clumps so that if you take some there is less left for me. Once anything
is known, we can all know it. There is nothing to run out of. Once an idea
exists, there is enough for everyone. We need only distribute it. The
potential democratizing character of computers derives from the possibility
that computers may elevate information as the main currency of social life,
so there will no longer be a scarcity of the most important determinant of
well-being.
But is this
an inevitable outcome of the production and dissemination of computers? Will
democracy necessarily expand with universal availability of knowledge? Or
does it depend on what kind of society we put all these computers to work in?
Is it possible to preclude most people from access to computers and
information even if there is little or no social cost to providing access for
everyone? Worse, is the information-age rhetoric a hoax obscuring the fact
that computers lead inexorably toward more inequality, more hierarchy, and
more poverty of means of expression for some alongside a growing abundance of
means of expression for others? To answer these questions, we must consider
computers in the context of specific social systems.