Ironically,
there is little about New York City’s beloved MetroCard to
suggest forensic sophistication. Its single notched corner, ostensibly
to orient the blind, seems to belong to that class of minor traumas—chipped
teeth and mashed knuckles—one associates with an NYPD holding
cell rather than Rikers. Its inoffensive typography, sunny
side up like a motorman’s late breakfast, is a faux Superman
logo and a plea to the swift and clumsy that one should calmly “insert
this way”; the adjoining slash gives one a moment to reflect
before it deadpans a quiet clarification, “this side facing
you.” Its one visibly distinguishing characteristic, exiled
to the pictographic clutter of the flipside, is a serial number
imprinted onto a nonreflective white strip. This number, however,
is what distinguishes the MetroCard from most other municipal transit
passes: with every swipe through a subway turnstile, the MetroCard
transmits its unique serial number, the current time and date, and
its location to the Metropolitan Transportation Authority’s data center,
where the information remains until systematically purged or reprieved
in the course of an investigation by the Transit Bureau’s special
investigations unit. MetroCard logs have been audited in recent
years to discredit suspect alibis in a number of NYPD investigations,
including the December 1999 assault and robbery of a Manhattan supermarket
manager following which a suspect’s MetroCard recorded a swipe
in lower Manhattan’s South Ferry subway station despite the
accused’s claim that he had spent the day in question on Staten
Island.
Although
a discredited alibi may ultimately serve as little more than blood
in the water for a successful prosecution, law enforcement agencies
throughout the world have embraced the technique of electronic auditing
as a sure sign of things to come in the burgeoning field of forensic
technology. Automated toll tracking, financial transaction analysis,
cellular phone triangulation, Internet monitoring, email decryption,
DNA analysis, video surveillance, and facial recognition software
have all become sought-after forensic technologies in recent years,
seeming at times to have relegated composite sketch artists and
purveyors of dusty latents to the pages of a remaindered Mickey
Spillane whodunit. Stoked by periodic warnings of malicious hackers,
identity theft, and the reliably impenetrable Russian mob—and
not unmindful of the renewed popularity of TV forensic procedurals
like “CSI”—law enforcement agencies have responded
annually with exponentially higher budgets for cutting-edge equipment
and training, invariably showing up at technophile trade shows like
Comdex in search of the latest retinal scanner or low-light video
camera.
What
generally isn’t mentioned on the trade show floor, however,
is the dirty little question of authentication: is possession of
a transit card sufficient to lock down a suspect’s itinerary, and is a false alibi necessarily a guilty plea? How reliable is video footage captured by an ATM, and would you be able to pick a suspect out of a lineup based on a low-resolution black-and-white
image? How secure is a web-based email account from misappropriation
of identity? Forensic technology relies on the fact that as citizens
and consumers we unconsciously leave both physical and electronic
evidence in our wake—by analyzing individual nodes in a suspect’s
datastream (a credit card purchase here, a subway transfer there),
investigators can piece together events and motives surrounding
a criminal act. Unfortunately, many of the aforementioned technologies
rely on authentication by proxy, the tacit assumption that a piece
of evidence was physically connected in some fashion to the accused
at the time of a crime. While this might appear a pretty safe assumption
in most criminal investigations, that negligible element of ambiguity—the
possibility, perhaps, that a suspect’s credit card was surreptitiously
borrowed or its number and expiration date appropriated following
a carelessly handled transaction, or that a discredited alibi hides
not a murder, but a hotel tryst—invariably results in an accumulation
of what the courts have long recognized as circumstantial evidence. It also raises the disturbing possibility that America’s current
forensic renaissance is creating a culture of DNA Lite, a judicial
tautology in which forensic jargon, technology boosterism, and the
unrelated success of DNA fingerprinting somehow manage to sell the
notion that if it looks like a duck and quacks like a duck, it’s
clearly guilty of being a duck. Even should the courts resist the
continued temptation to aggrandize circumstantial evidence obtained
through bleeding-edge technology, taxpayers and civil libertarians
may justifiably ask whether the cost, both financial and ethical,
is worth the feverish acquisition of evidence with ultimately limited
prosecutorial weight.
Nearly a decade and a half after investigators in Leicestershire, England, came up with a rather novel method of solving two related homicides, the undisputed 800-pound gorilla of forensic technology today remains DNA analysis. As familiar to schoolchildren today as an iconographized Albert Einstein (and almost as poorly understood), the DNA double helix is a complex arrangement of nucleotide molecules that encodes the distinctive genetic characteristics of a living organism. Using
hair, bloodstains, semen, bone marrow, or any other tissue or bodily fluid that contains nucleated cells, investigators can visualize a suspect’s characteristic genetic markers on x-ray film in a barcode-like pattern and then compare them with genetic material retrieved from a crime scene. Theoretically all but infallible, DNA analysis came
of age in America during the frenzied 1995 celebrity murder trial
of O.J. Simpson, during which genetic evidence was clouded by accusations
of mishandling of blood samples and crime scene tampering. Despite
its inauspicious 15 minutes in the limelight, DNA analysis has continued
to play an aggressive role in criminal investigations and surprise
exonerations, prompting the state of Illinois in January 2000 to
declare a moratorium on executions until death row inmates could
be allowed the opportunity to undergo genetic sequencing, a decision
made after 13 Illinois death penalty cases were overturned due to
an embarrassment of exculpatory evidence. Myriad technical advancements
have been implemented since DNA first burst into the courtroom like
an overzealous DA, and a laboratory process that once took weeks can now be completed in just hours for the same price as dinner
for two at a midtown Manhattan restaurant. Although serious concerns
have been raised about the unregulated warehousing of private genetic
data, DNA analysis is now widely recognized as a crime investigator’s
Remington flathead—the one tool that no paid professional should
be without.
Genetic
sequencing is, however, considered merely a subset of a much broader
pursuit of human-machine interaction popularly known as biometrics.
Retinal scanning, voiceprint identification, fingerprint deconstruction,
handwriting analysis, keyboard dynamics, and video surveillance
are all biometric technologies, since they attempt to identify unique physiological or behavioral characteristics of a human subject. Taxonomy notwithstanding,
though, none of the aforementioned—as shall be seen—approach
DNA analysis as a quantitatively reliable forensic tool and, while
biometric forms of authentication by definition would appear to
be all but impossible to “hack,” the greatest threat to
biometrics isn’t fraud at all, but accuracy. Deployed “in
the wild,” many biometric detectors fail to identify legitimate users (“false rejections,” in the prevailing security lexicon) or inappropriately recognize strangers (“false positives”). The human form, as it happens,
is not so easy to quantify.
If
you haven’t visited NORAD’s Cheyenne Mountain stronghold
recently, you’re perhaps unaware that optical-, voice-, and
fingerprint-based technologies are no longer reserved for Blade
Runners and 007’s late-model Aston Martin. Biometric sensors
are turning up more and more in various restricted military and
financial facilities with some speculative crossover into consumer
applications, although their relative scarcity limits their usefulness as forensic tools. All three are among the most accurate forms of
biometric authentication on the market, since they require close
interaction (generally anywhere from a few feet away to actual contact)
and a certain amount of time to achieve an accurate scan (a few
seconds, on average). Retina scans rank among the most creepily
reliable matches, employing a high resolution analysis of distinctive
blood vessel patterns in the eye. Iris scans are significantly less
conclusive, particularly when obtained from a distance of several
inches or more as is generally the case when used in conjunction
with video surveillance systems. Both systems presumably benefit
from the visceral unease of going eye-to-eye with the machine and
being first to blink.
Voice
recognition technology, on the other hand, remains plagued by tonal
variation and lack of clarity, a far cry from the contemplative
percipience of Kubrick’s HAL 9000, and barely manages to maintain
a tentative foothold in the consumer marketplace by requiring users
to undergo lengthy voice training exercises during which the system
learns to recognize an individual’s ostensibly annoying speech
patterns. Fingerprints have lately come to acquire a similar stigma.
Whether acquired digitally, using a CCD (a “charge-coupled
device,” such as one might find in a digital camera), or lifted
from a handgun with Loctite cyanoacrylate, crime scene latents are still often an exercise in interpretation, relying on a statistical
accumulation of matching loops, whorls, and arches to trigger a
probable match.
Handwriting
and keyboard dynamics analysis belong to a particularly ambiguous class
of biometric technology. Handwriting analysis has a long and contentious
history of ambiguous interpretation, which it shares with the polygraph,
making it one of the least valuable forms of evidence in most fraud
cases and lending it the aura of a vaguely unsavory profession practiced
by retired civil servants and women with a few too many cats. Although
technology-assisted handwriting analysis takes into account several
heretofore unacknowledged characteristics of a subject’s signature,
such as speed and stylus pressure, a quick glance through one’s
cancelled checks is enough to reveal handwriting analysis’s
Achilles heel: signatures, like snowflakes, are never exactly alike.
Keyboard
dynamics is, in a sense, handwriting analysis’s digital progeny:
by analyzing a subject’s distinctive hunt-and-peck strategies
at the keyboard, the system is said to distinguish quantifiable
differences in user training, reflexes, and habit. Since accuracy
is directly related to the length of a subject’s text sample—and
a significant sample is often required to perform even a superficial
analysis—many security professionals see this particular form
of biometric technology as less than viable in most authentication
scenarios.
Video
surveillance may currently be the most popular of all biometric
technologies, with the exception of DNA sequencing. The two are
extremely dissimilar technologies: whereas DNA analysis requires
a physical sample within which to isolate genetic material, video
surveillance and “eigenface” (facial recognition) systems
are limited to a cursory glance across a crowded room. Within seconds
at best, a video surveillance system must deconstruct musculature,
bone structure, and other distinctive facial features, compare these
algorithmically with a database of suspects, and judiciously decide
whether the individual humping across its field of view in what
may well be a wig and glasses is a confidant of Osama bin Laden.
As a January 2002 report by the ACLU discusses, such systems are
predictably error-prone. Based on an installation of the Visionics Corporation’s FaceIt system for the Tampa Police Department,
the ACLU concluded that the system failed to identify a single suspect
in its database and generated innumerable false positives before
being discontinued. A trial run of this same software by PC Magazine revealed how a crafty tester used a color printer to create a photograph of a face familiar to the system, cut a nose hole to give the mask appropriate depth (if an unflattering likeness), and achieved access as an apparently legitimate user.
Despite
the dubious success rate of many biometric technologies, law enforcement
agencies continue to court Silicon Valley solutions to problems that might benefit more from competent police work and investigative
rigor. Of equal concern is the increasing monitoring and potential
abuse of non-biometric forms of authentication in the pursuit of
criminal prosecutions. While biometric technologies require some
form of contact with the subject, many alternate forms of authentication
do not. Password and PIN access, for instance, are easily subverted
by non-material theft (a subject may not even be aware that his
or her authentication has been compromised), as are email accounts.
Both the National Security Agency’s multinational Echelon system
and the FBI’s Carnivore system monitor email traffic with little
or no judicial oversight, compiling profiles on suspicious individuals
and subversive organizations based on the incidence of keywords
in unencrypted and loosely authenticated electronic correspondence. The upshot is that you can be flagged if someone with
your name or a similar email address happens to muse in print that
Scott Ian, guitarist of Anthrax fame, is da bomb.
Cellular
phones and transit cards also employ non-biometric forms of authentication,
since they can be used by someone other than the accused (or employed
by the accused for purposes which may only seem to support a prosecutor’s
interpretation of events). As any criminal attorney will avow, there’s
a big difference between putting the accused at the scene of the
crime and putting the accused down the street at an ATM. Despite
the limited prosecutorial weight of technologies such as cellular
phone triangulation (the “homing in” on a cellular phone
by the strength of its signal relative to nearby transmitting towers),
law enforcement policymakers continue to pursue expanded legislation
for tracking wireless devices and intercepting packet-mode communications
(e.g., text messages used by alphanumeric pagers), if only for the
aura of authority that accompanies the introduction of technology-laden
evidence in a criminal trial.
Biometric
or not, authentication protocols and technologies inevitably shift
the burden of proof to the accused, leaving it to defense attorneys
to establish a clear demarcation between a suspect and his or her
various authenticated identities, lending certain trials the eerie
foreshadowing of what the ACLU describes in its report on video
surveillance systems as a “move to permanently brand some people
as ‘under suspicion’…” While few would debate the
relative value of circumstantial evidence to criminal prosecution
or fail to appreciate the advantages of surveillance technology
in the conviction of organized crime figures, the growing popularity
of security scans, biometric algorithms, and serial numbers may
seem more than a bit disconcerting. Security is by all counts a
multibillion-dollar industry in the United States, with significant
marketshare in the casino gaming industry, among others. Unsurprisingly,
a disproportionately small amount of this expenditure seems to trickle
down through the justice system to the laboratories that analyze
DNA in both criminal prosecutions and appeals, leaving much of the
financial burden to The Innocence Project and other pro bono legal funds that utilize genetic evidence to defend the wrongly
accused and indigent. Although forensic technologies will undoubtedly
continue to assist in the conviction of actual felons, the rest
of us will have to content ourselves with the tenuous protection
of the First Amendment and the promise of a safer casino.
Timothy
Quinn is a writer and technology professional. He currently lives
and works in New York City.