THE BRAILLE MONITOR



Barbara Pierce, Editor


Published in inkprint, Braille, on talking-book disc,
and cassette by


THE NATIONAL FEDERATION OF THE BLIND
MARC MAURER, PRESIDENT



National Office
1800 Johnson Street
Baltimore, Maryland 21230

* * * *



Letters to the President, address changes,
subscription requests, orders for NFB literature,
articles for the Monitor, and letters to the Editor
should be sent to the National Office.

* * * *



Monitor subscriptions cost the Federation about twenty-five
dollars per year. Members are invited, and non-members are
requested, to cover the subscription cost. Donations should be
made payable to National Federation of the Blind and sent to:


National Federation of the Blind
1800 Johnson Street
Baltimore, Maryland 21230

* * * *

THE NATIONAL FEDERATION OF THE BLIND IS NOT AN ORGANIZATION
SPEAKING FOR THE BLIND--IT IS THE BLIND SPEAKING FOR THEMSELVES



ISSN 0006-8829 THE BRAILLE MONITOR
A PUBLICATION OF THE NATIONAL FEDERATION OF THE BLIND

CONTENTS

JANUARY, 1994

PROCEEDINGS OF THE 2ND U.S./CANADA CONFERENCE
ON TECHNOLOGY FOR THE BLIND

NOTE FROM THE CHAIRMAN
by Kenneth Jernigan

LIST OF CONFERENCE PARTICIPANTS

THE SIXTY-FOUR SQUARES OF THE CHESSBOARD
by Raymond Kurzweil

EMERGING RESEARCH GOALS IN THE BLINDNESS FIELD
by T. V. Cranmer

INTERNATIONAL COOPERATION IN THE FIELD OF TECHNOLOGY: AN AGENDA
FOR ACTION TOWARDS THE 21ST CENTURY
by Ruperto Ponz

OBSERVATIONS ON THE STATE OF TECHNOLOGY FOR THE BLIND
by David Andrews

PRIDE AND PROFIT: OBSERVATIONS OF A FREE MARKETEER
by Tony Schenk

LISTENING FOR EFFECTIVENESS
by James Morrell

SUMMARY OF THURSDAY AFTERNOON DISCUSSION

PROBLEMS AND CHALLENGES OF THE GRAPHICAL USER INTERFACE
by James Thatcher

A QUESTION OF WINDOWS
by James C. Halliday

PROBLEMS AND CHALLENGES OF THE GRAPHICAL USER INTERFACE
by Curtis Chong

SUMMARIES OF PRESENTER REMARKS

SUMMARY OF FRIDAY CONFERENCE DISCUSSION

SUMMARY OF SATURDAY CONFERENCE DISCUSSION



Copyright National Federation of the Blind, Inc., 1994

[LEAD PHOTO: North-South view of the International Braille and
Technology Center for the Blind. CAPTION: On November 16, 1990,
the doors of the National Braille and Technology Center for the
Blind opened for the first time. The facility was located in the
central courtyard building at the National Center for the Blind,
and was so spacious that it was hard to believe that Braille
production and speech technology would ever fill it. During the
first U.S./Canada Conference on Technology for the Blind, held at
the National Center for the Blind in September of 1991, the
National Braille and Technology Center for the Blind became the
International Braille and Technology Center for the Blind
(IBTCB), reflecting the growing worldwide significance of the
operation. But time and technology march on, and by early 1993 it
was clear that the array of embossers, synthesizers, computers,
and related hardware and software was about to outgrow the space
available in the central courtyard building. The decision was
made to devote the street level, Johnson Street wing of the main
building at the National Center for the Blind to an enlarged and
expanded facility, complete with specially designed display
tables, spacious offices, a museum, a large conference room, and
a kitchen. Pictured here, the IBTCB covers 20,000 square feet of
prime display space. It opened just in time to be the symbol and
centerpiece of the 2nd U.S./Canada Conference on Technology for
the Blind, which occurred November 4-6, 1993.]

[PHOTO/CAPTION: Euclid Herie (left), Mary Ellen Jernigan, and
Kenneth Jernigan stand talking in the International Braille and
Technology Center for the Blind.]

[PHOTO: Conference attendees seated in 4th floor conference room
at the National Center for the Blind. CAPTION: Approximately 65
people filled the large fourth-floor conference room at the
National Center for the Blind for the opening session of the 2nd
U.S./Canada Conference on Technology for the Blind.]

PROCEEDINGS OF THE 2ND U.S./CANADA CONFERENCE
ON TECHNOLOGY FOR THE BLIND
November 4 to 6, 1993
Planned and Hosted by the National Federation of the Blind
Conference Chairman, Kenneth Jernigan

Note from the Chairman: In September of 1991 representatives
of four organizations, all members of the Committee on Joint
Organizational Effort, planned the First U.S./Canada Conference
on Technology for the Blind, which was hosted by the National
Federation of the Blind and held at the National Center for the
Blind in Baltimore. (See the January, 1992, issue of the Braille
Monitor for a full account of that important meeting.) The
gathering was significant in part because those present, senior
officials of service-providing and consumer organizations and
technology producers in the blindness field, all had the
authority to make policy decisions and set their own
organizational goals based on what they learned and on what the
group decided. One of several decisions made at the close of that
meeting was to reconvene in two years. That gathering took place
November 4 to 6, 1993, and again the National Federation of the
Blind hosted the event.
It began Thursday morning with greetings from Marc Maurer,
President of the National Federation of the Blind, and Dr. Euclid
Herie, President and Chief Executive Officer of the Canadian
National Institute for the Blind, followed by the conference
keynote address. After coffee the remainder of the morning was
devoted to presentations by four speakers, and following lunch
there was another panel discussion. The second half of the
afternoon was devoted to tours of the newly completed facility
occupied by the International Braille and Technology Center for
the Blind. A reception and dinner completed first-day conference
activities.
Friday morning began with a panel of speakers discussing the
Graphical User Interface. Following coffee and again after lunch
there were lively general discussions. The remainder of Friday
was devoted to informal, small-group or private discussions, and
on Saturday morning consumers and service providers discussed
issues of mutual interest to them.
In my view and that of a number of other participants, this
conference was, if possible, even more productive than the first
one two years ago. Much will now depend, however, on what occurs
as a result of the discussions begun in early November. It is
extremely important that the technology experts among us settle
down to work on the software and consumer-product access problems
identified during our meetings and that the consumers and service
providers then use our joint strength and creativity to influence
mass-market computer program producers and consumer-technology
manufacturers to insure that blind people have continued and
increasing access to their products.
Gathering to become better acquainted and to exchange ideas
was an important step. But the crisis facing the blindness field
today is as serious as any we have ever faced. Blind computer
users increasingly find that their inability to access the GUI-
based programs being used more and more in the workplace is
costing jobs. Unless something is done to counteract the trend,
the situation will only get worse. As if this were not bad
enough, household appliances and publicly accessible information
terminals are increasingly unusable by anyone who cannot read
the LCD or CRT screen displays. Therefore, the future jobs of
thousands of blind people and their ability to use household
appliances and deal electronically with the rest of the world in
years to come may well depend upon our capacity today to work
together for the common good. The challenge is great. Let us all
hope that we are now in a better position than ever before to
meet it successfully. Here is the list of participants in the 2nd
U.S./Canada Conference on Technology for the Blind:

Kenneth Jernigan, President, North America/Caribbean Region,
World Blind Union, Baltimore, Maryland
Euclid Herie, Treasurer, World Blind Union; President and
Chief Executive Officer, Canadian National Institute for the
Blind, Toronto, Ontario
Marie Amerson, President, Association of State Educational
Consultants for the Visually Impaired, Macon, Georgia
David Andrews, Director, International Braille and
Technology Center for the Blind, National Federation of the
Blind, Baltimore, Maryland
Laurie Bellefontaine, National Director of Technology,
Canadian Council of the Blind
Deane Blazie, President, Blazie Engineering, Forest Hill,
Maryland
Geraldine Braak, President, Canadian Council of the Blind,
Powell River, British Columbia
John Brabyn, Program Director, Smith-Kettlewell Eye Research
Foundation, San Francisco, California
Curtis Chong, Senior Systems Programmer, IDS Financial
Services, Minneapolis, Minnesota
Charles Cook, President, Roudley Associates, Owings Mills,
Maryland
Neil Cooper, Software Engineer, Syntha-Voice Computers,
Inc., Hamilton, Ontario
Tim Cranmer, Chairman, Research and Development Committee,
National Federation of the Blind, Louisville, Kentucky
Frank Kurt Cylke, Director, National Library Service for the
Blind and Physically Handicapped, Library of Congress,
Washington, D.C.
Suzanne A. Dalton, President, Association of Instructional
Resource Centers for the Visually Impaired, Tampa, Florida
Judy Dixon, Consumer Relations Officer, National Library
Service for the Blind and Physically Handicapped, Washington,
D.C.
Frederick Downs, Jr., Director, Prosthetic and Sensory Aids
Service, Veterans Health Administration, Washington, D.C.
Shirley Dupmeier, Member of National Council, Consumer,
Canadian National Institute for the Blind, Toronto, Ontario
Paul Edwards, American Council of the Blind, North Miami,
Florida
Carl E. Foley, President, Blinded Veterans Association,
Washington, D.C.
Paul Fontaine, Specialist, Clearinghouse on Computer
Accommodations, General Services Administration, Washington, D.C.
Jim Fruchterman, President, Arkenstone, Sunnyvale,
California
Ritchie Geisel, President, Recording for the Blind,
Princeton, New Jersey
Greg Guidice, Vice President of Marketing for Adaptive
Products, Xerox Imaging Systems, Inc., Peabody, Massachusetts
James C. Halliday, President, Humanware, Inc., Loomis,
California
Ted Henter, President, Henter-Joyce, Inc., St. Petersburg,
Florida
Raymond Kurzweil, President, Kurzweil Applied Intelligence,
Waltham, Massachusetts
Mary Frances Laughton, Chief, Social and Informatics
Applications, Industry and Science Canada, Ottawa, Ontario
Jose Luis Lorente Barajas, Technical Advisor, Spanish
National Organization of the Blind (O.N.C.E.), Madrid, Spain
Greg Lowney, Senior Program Manager, Accessibility and
Disability Group, Microsoft Corporation, Redmond, Washington
Gary Magarrell, Executive Director, Ontario Division,
Canadian National Institute for the Blind, Toronto, Ontario
Vicki Mains, National Manager, Technical Aids, Canadian
National Institute for the Blind, Toronto, Ontario
David Mansoir, Chairman, Information and Technology
Division, Association for Education and Rehabilitation of the
Blind and Visually Impaired, Mountain View, California
Marc Maurer, President, National Federation of the Blind,
Baltimore, Maryland
Barbara McCarthy, President-Elect, Association for Education
and Rehabilitation of the Blind and Visually Impaired, Richmond,
Virginia
Dale McDaniel, Vice President for Marketing, Artic
Technologies, Troy, Michigan
Peter Merrill, President, The Betacom Group, Mississauga,
Ontario
James Morrell, President, TeleSensory Corporation, Mountain
View, California
Caryn Navy, Vice President, Raised Dot Computing, Madison,
Wisconsin
Gilles Pepin, Director, Visuaide 2000, Inc., Longueuil,
Quebec
Ruperto Ponz Lazaro, Chairman, World Blind Union/Committee
on Technology, Madrid, Spain
Lloyd Rasmussen, Senior Staff Engineer, National Library
Service for the Blind and Physically Handicapped, Library of
Congress, Washington, D.C.
Rachel Rosenbaum, President, National Council of Private
Agencies for the Blind, Newton, Massachusetts
Noel Runyan, President, Personal Data Systems, Campbell,
California
Mohymen Saddeek, President, Technology for Independence,
Inc., Boston, Massachusetts
James Sanders, National Director, Government Relations and
International Services, Canadian National Institute for the
Blind, Toronto, Ontario
Leroy Saunders, President, American Council of the Blind,
Oklahoma City, Oklahoma
Tony Schenk, President, Enabling Technologies Company,
Stuart, Florida
Elliot Schreier, Director, National Technology Center,
American Foundation for the Blind, New York, New York
Larry Skutchan, President, Microtalk, Texarkana, Texas
Yakov Soloveychik, President, BAUM U.S.A., Encino,
California
Susan Spungin, Associate Executive Director for Program
Services, American Foundation for the Blind, New York, New York
Joe Sullivan, President, Duxbury Systems, Inc., Littleton,
Massachusetts
Marc Sutton, Access Products Manager, Berkeley Systems,
Inc., Berkeley, California
Stephen Tappin, Technical Aids Coordinator, Ontario
Division, Canadian National Institute for the Blind, Toronto,
Ontario
James Thatcher, Researcher, IBM Corporation, Yorktown
Heights, New York
Tuck Tinsley, President, American Printing House for the
Blind, Louisville, Kentucky
Jocelyne Tremblay, Director, Direction des Services
hors-Quebec et programme d'aides technique, Sillery, Quebec
Louis Tutt, President, Council of Schools for the Blind,
Baltimore, Maryland
David Vogel, President, National Council of State Agencies
for the Blind, Jefferson City, Missouri
Dennis Wyant, Director, Vocational Rehabilitation and
Counseling Services, Veterans Administration, Washington, D.C.
____________________
Following are the texts of the presentations made at the
conference. Summaries of those remarks not submitted in writing
for publication as well as highlights of the discussions held
during conference sessions are also included.


[PHOTO: Portrait. CAPTION: Raymond Kurzweil.]

THE SIXTY-FOUR SQUARES OF THE CHESSBOARD
by Raymond Kurzweil

From the Editor: Dr. Kurzweil was the founder of Kurzweil
Computer Products and the creative genius behind the Kurzweil
Reading Machine, which in the mid-seventies was the first
successful attempt to scan printed text and read it aloud using
computer technology and artificial speech. Dr. Kurzweil has gone
on to pursue other interests and now heads another company,
Kurzweil Applied Intelligence. He continues to be interested in
computer access for blind people, and he accepted the invitation
to keynote the 2nd U.S./Canada Conference on Technology for the
Blind. This is what he said:

Having been in this field for many years, I was impressed
with Dr. Jernigan's ability to gather together all the leaders of
this field in the first of these conferences. That was certainly
a stirring and unprecedented event. This meeting actually marks a
personal milestone for me. This week marks exactly twenty years
that I have been in this field. I started Kurzweil Computer
Products to develop a reading machine for the blind exactly
twenty years ago with a $33,000 grant from Johnson and Johnson.
The discussion at that meeting and the recommendations that came
forth were quite substantial; and, given the expanded prominence
of this group here, this is a very propitious meeting as well.
Dr. Jernigan asked me to talk to you today about the nature
of information technology and the impact it is having on our
world. We live in a world today in which all of our knowledge,
all of our creations, all of our insights, all of our ideas, our
cultural expressions--pictures, movies, art, sound, music, books,
and the secret of life itself--are all being digitized, captured,
and understood in sequences of ones and zeroes.
I speak to many different groups: computer scientists and
engineers, librarians, musicians, magazine publishers, doctors,
graphic artists, architects, researchers of different kinds. All
of them, in diverse ways, are experiencing the same thing: the
digitization of their knowledge bases, their methods, and the
expressions of their work. Those of you who are working with
information technology to help overcome communication barriers
understand the significance of these developments all too well,
and that's one reason I've been looking forward to speaking with
you this morning.
As we'll discuss a little later, we will have the technology
in the next ten to fifteen years largely to overcome the
handicaps that are associated with visual, auditory, and other
disabilities. What I like to call the age of knowledge is also
transforming the nature of wealth itself and affecting deeply all
of our political and economic institutions. So I'd like to talk
to you today about the changing nature of wealth. I'd also like
to share from my own experience a strategy for fostering
innovation in this emerging information age. And then we'll talk
a little bit about the impact of all this technology on the next
decade and the next century.
Now you may have noticed that there seems to be an intense
preoccupation of late with the economy; and, if you've paid
attention to the news in recent months, you know that economic
issues have been dominating the national consciousness. But for
all of the attention, I think most people still find the subject
confusing. I'm reminded of the economics professor who year after
year has the questions to his final exam stolen by his students
and explains why he's really not upset about this. "The questions
are always the same anyway," he points out; "it's the correct
answers that keep changing." A lot of people feel that's why
economics differs from science. But if you pay close attention to
computer science, you know that the correct answers keep changing
here too.
Today the correct answer to the question of how to advance
economic competitiveness is to foster the creation of
intellectual property, which is information--that is, sequences of
ones and zeroes that have economic value. That has not always been
the case in human history, and I would like to share with you my
view of why the world has changed in this way.
Now I'm not an economist, but that won't inhibit me from
sharing with you economic opinions. Very few people are inhibited
from expressing economic opinions. That is, I am sure, very
frustrating to economists. My own background is in signal
processing and pattern recognition, and very few people express
opinions about signal processing and pattern recognition, which
is another reason I have been looking forward to talking to you
today, for you are a group that understands the importance of
accessing patterns of information.
In this field we recreate worlds inside the computer. And
the ramifications of doing this go far beyond mere pictures and
sounds but are part of a transformation of society the
implications of which we are only beginning to understand. They
say that war is too important to leave to the generals; I've
always felt that signal processing and pattern recognition are
too important to leave to us engineers. Ours is the first
generation to live through the early stages of what I've called
the Age of Intelligent Machines.
Translating information from one medium to another,
transforming visual information to auditory information, for
example, has always inherently been on the cutting edge of
computer architectures, so we have perhaps a clearer vision than
most of the importance of this era, but do we truly understand
the significance of the invention of the computer?
When the telephone, another communication technology, was
first invented, the chief engineer of the British post office
dismissed the news with the statement, "This is no big deal; we
have plenty of messenger boys." The mayor of Philadelphia had
considerably more insight into the importance of this new
development. "This is of great significance," he said. "Someday
every city will have one." The first programmable computer ever
built was the Zuse-3, built by a German tinkerer named Konrad
Zuse. He presented his invention to his original sponsor, the
German Aircraft Research Institute of the Third Reich. The chief
engineer in charge explained to Zuse, "The German aircraft is the
best in the world. I cannot see what we could possibly calculate
to improve on," and they withdrew their funding.
The first commercially produced electronic computer, the
Univac, was built by Remington Rand. They commissioned a market
study which concluded that eventually, someday, a worldwide
market would develop for fifty computers. If I look back on my
own career, I think I have bought those fifty computers myself.
Today we may have more vision than the chief engineer of the
British post office or the chief engineer of the German Aircraft
Research Institute, but the common wisdom is, I believe, perhaps
more akin to the view of the mayor of Philadelphia. Society today
sees the computer as a mere facilitator of information, as a tool
that provides some added efficiency. But I see it as something
quite different. To me, the emergence of machine intelligence can
be seen from two perspectives: as a turning point in human
history and as a major milestone in the evolution of life on this
planet, more significant than when animals first left their
watery habitats and took their first steps on land.
Let me first share with you the former perspective--the
significance of machine intelligence in the history of mankind.
This will lead us to the latter perspective, the view from the
flow of evolution.
The industrial revolution of the last two centuries, the
first Industrial Revolution, was characterized by machines that
extended, multiplied, and leveraged our physical capabilities.
With these new machines humans could manipulate objects for which
our muscles alone were inadequate and carry out physical tasks at
previously unachievable speeds. As a result the world during this
period was hungry for natural resources and labor.
Mao said that "power comes from the barrel of a gun." And
that statement was true when he said it. But he said it in the
last possible decade in which one could make that statement,
because through physical coercion you could control natural
resources. If you could control natural resources and compel
people to labor, you could control wealth. And while not
providing the happiest or most productive workers, it worked well
enough.
The second industrial revolution, however, the one that is
now in progress, is based on machines that extend, multiply, and
leverage, not our physical, but our mental abilities. A
remarkable aspect of this new technology is that it uses almost
no natural resources. Silicon chips use infinitesimal amounts of
sand and other readily available materials. They use
insignificant amounts of electricity.
As electronics, computers, and other forms of information
technology (bioengineering, for example) grow smaller and
smaller, the material resources utilized are becoming an
inconsequential portion of their value. Indeed, software uses
virtually no resources at all. If you pay three or four hundred
dollars for a screen reader with synthetic speech, you know that
you're not paying for the natural resources in this product,
because you could purchase the same floppy disks without
information on them for three or four dollars.
People then say, "Okay, that's true for this strange new
world of software, but that's still a small part of our gross
national product. It has little to do with economics." What a lot
of people don't realize is that the same economic model holds for
most hardware as well. The quintessential component of hardware
is the computer chip. A central processing unit or an advanced
signal processing chip or an advanced image processing chip that
may sell for three or four hundred dollars costs no more to
fabricate than a floppy disk. As with a software program, the
bulk of the cost of a chip is neither raw materials nor
manufacturing labor, but rather what accountants call
amortization of development and what philosophers call knowledge.
It is estimated that for software today the percentage of
its value represented by natural resources is about 2%. When we
have full electronic distribution of software, it will go down to
about 0%. I guess we'll use a little bit of electricity in the
process.
The percentage of value represented by natural resources for
chips is about the same today as for software, about 2%.
Computers are about 5% natural resources because we still have a
metal chassis and a power supply, but we won't have those for
very much longer. In fact, you can draw a reverse exponential
curve where the y axis is the percentage of a product's value
represented by natural resources and the x axis is time; that
percentage is asymptoting to zero as we go forward in time, and
every product
and service is on the curve. Some are closer to zero than others,
and some categories of products are moving faster than others as
they move down the curve, but every product is on the curve,
marching on down to nearly zero contribution from material
resources and nearly 100% contribution from intellect.
Indeed, over the past twenty years the value of commodity
resources, as measured in constant dollars, has fallen
substantially (about forty percent), and this trend is
accelerating. So sell short on natural resource stocks. That
will be my only stock tip for today. Other than to mention that
my own company just went public two months ago. Okay, let's take
some examples.
Musical instruments is a field I've had experience in. The
percentage of value represented by natural resources and labor
for instruments using the nineteenth century acoustic technology,
such as pianos with their hundreds of feet of wire and hundreds
of pounds of metal and wood, is very high--it's about 60 to 70%.
For electronic musical instruments, which are basically
computers, the figure is 5 to 10%. In this industry we are
gradually replacing acoustics with electronics.
Consider pianos. And by the term "piano," I am not referring
to synthesizers, but rather to instruments that look like a piece
of furniture that you put in your living room for your eight-
year-old daughter to use while she's taking piano lessons. Six
years ago the percentage of pianos that used electronic
technology was 4%. Today, it's 60%, and again I'm not including
synthesizers or portable keyboards. In two to three years that
figure is expected to hit 80%. So, if you take the industry as a
whole, ten years ago the value of musical instruments was 60%
natural resources; today it's down to 20%, and in five years it
will be 10%.
How about the chairs you're sitting in? I recently toured
newly constructed factories in the Far East, and it was an
impressive display of the ability to convert intellect into
products with value. Bags of plastic pellets, jars of silicon,
and other inexpensive materials get turned into an astonishing
variety of high-quality products from tables and chairs to radios
and computers by computerized factories with almost no human
intervention. One has only to tour these factories with their
delicately programmed robotic assemblers and material handlers to
recognize the increasing dominance of knowledge as the
cornerstone of wealth.
Speaking of the Far East, despite the recent Asian recession
the success of Japan is undeniable, and it's certainly not due to
their natural resources, because they don't have any to speak of.
The success of Japan, which two years ago was cited by Fortune
Magazine as the wealthiest nation on the planet, is due entirely
to their ability to create intellectual property in all of its
myriad forms.
And how about Communism? Anyone here remember Communism? It
was that totalitarian system that disappeared a few years ago.
Well, I guess it's still around in a few places. So why did it
collapse when it did? Was it because after seventy years it had
just run its course? Was it the effectiveness of the Voice of
America? The fear of Ronald Reagan? The fear of George Bush? I
mean, why did it collapse just now? We should keep in mind that
it was the bankruptcy of Communism as an economic strategy that
caused its downfall. Communism was indeed viable during the late
stages of the first industrial revolution. It became irrelevant
only as we entered the second.
It's a fortunate truth of human nature that, whereas labor
can be forced, creativity and innovation cannot be. To create
knowledge, people need the free exchange of information and
ideas. They need free access to the world's accumulated knowledge
bases. A society that restricts access to copiers, mimeograph
machines, and typewriters for fear of the dissemination of
uncontrolled knowledge will certainly fear the much more powerful
communication technologies of personal computers, local area
networks, telecommunication data bases, electronic bulletin
boards, and all of the multifarious methods of instantaneous
electronic communication.
Controlled societies such as the former Soviet Union were
faced with a fundamental dilemma. If they provided their
engineers and professionals in all disciplines with advanced
computer technology, they were opening the floodgates to free
communication by methods far more powerful than the copiers they
had traditionally banned. On the other hand, if they failed to do
so, the professionals became increasingly ineffectual. In the end
they did a little of both, and both did them in. The lack of true
intellectual freedom caused economic disaster. And to the extent
that electronic communication was made available, it made
totalitarian control impossible. It was said that the 1991 August
coup in the former Soviet Union was undone by cellular telephones
and networks of personal computers. And that's true.
My company has a Russian research institute in Moscow, and
it would be only a small exaggeration to say that it is as if
they are working down the hall. We exchange messages, memos,
data, software on a daily basis through the Internet. And they
have shared with us that this type of network of personal
computers and cellular telephones was critical to unraveling
that coup.
Innovation, however, requires more than just computer
workstations and electronic communication technologies. It also
requires an atmosphere of tolerance for new and unorthodox ideas,
the encouragement of risk taking, and the ability to share ideas
and knowledge. A society run entirely by government bureaucracies
is not in a position to provide the incentives and environment
needed for entrepreneurship and the rapid development of new
skills and technologies.
So, if innovation and invention, which is to say the
creation of knowledge that has economic value, is increasingly
the cornerstone of wealth and power, then we need the right
strategy as we enter the second industrial revolution. On that
note I'd like to share with you Ray Kurzweil's seven-point
program for creating intellectual property in the 1990's.
What's the first thing you should do when you begin the task
of creating a new invention, and I mean invention in the broadest
sense as any intellectual creation with value? Well, I'll tell
you what I always do. The first step in the process of invention
obviously should be to write the advertising brochure. I have in
fact done that in each major project that I've undertaken. I'll
not only write the brochure, but I'll engage a graphic designer
and have it printed up. Now it doesn't hurt to have a
nice-looking brochure when you're looking for investors in your
project, but that's not the reason I start by writing the
brochure. The reason is that, if you write the advertising first,
it forces you to articulate clearly who the thing is for and why
you're doing it. Inventing is different from science. It's even
different from engineering. The objective is not to create a
device that demonstrates a new and interesting scientific
principle, although that might be involved. The objective is not
to create a device that implements a new and more efficient
approach to engineering, although that too might be involved. The
objective is to create a device or a process or an idea that
brings some benefit to someone, hopefully a lot of someones.
Writing the brochure as your first step is harder than it
may seem. It forces you to articulate clearly the features, the
benefits, and the beneficiaries. Once you've written and printed
it, show your brochure to potential buyers. If they don't
immediately get excited and besiege you for a delivery date, then
you're barking up the wrong tree.
This brings me to my second point. Now that you've
identified the beneficiaries of your invention and you've gotten
them excited about it, let them create the device for you. I
mean, they want it so much, let them invent it.
Let me give you some examples. In the 1970's I was working
on the Kurzweil Reading Machine, which as you know is a device
that scans printed material, recognizes the printed letters in
any type font, and then reads the print out loud in a synthetic
voice. We needed funding, of course, so we approached the
National Federation of the Blind. They said, okay, we'll help you
raise the money you need, but you have to put us in charge of the
human factors design and the user controls, and involve us in
every facet of the engineering. Well, I wasn't expecting that
request, but I was in no position to argue, so I said, "Sure,
come on down."
Well, the blind engineers of the National Federation of the
Blind moved in and worked intimately with us on every facet of
the development, and the design came out quite different from
what we had originally expected, and, as it turned out, it was
very well accepted by blind consumers. With the intended users
having been intimately involved in every stage of the design
process, it anticipated the users' needs in ways that we as
well-intentioned but sighted engineers could never have
anticipated. While the Kurzweil Reading Machine has now gone
through six generations of technology, the basic human-factors
strategy that was created by the blind scientists and engineers
who worked with us remains the same today as it was almost
seventeen years ago in 1976.
I'll give you one example. We were going to put little
Braille labels on all of the user controls so that a new user
would know which control was which. But one of the NFB engineers
said that it would be very annoying to feel these Braille labels
hundreds of times a day, every day. So I asked him how a new user
could identify the controls without Braille labels. He suggested
putting another prominent button on the panel, which he called
the "nominator" key, and, if a user wanted to identify a control,
he would simply push the nominator key, then hit another key, and
that key would announce its name and describe its function. Then,
after using the nominator key to explore the keyboard for a few
days, a user would know where all the keys were and would not
need to feel these annoying Braille labels hundreds of times
every day. That made a lot of sense when we heard it, but since
we were not the intended users of the invention, it is an insight
that we never would have realized on our own.
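A minimal Python sketch of that interaction (the key names here are
hypothetical, not the machine's actual controls) shows how the
nominator key works as a one-shot identify mode:

    # Pressing the nominator key puts the panel into an identify mode;
    # the next key pressed announces its name and function instead of
    # performing its action.
    KEY_INFO = {
        "read": "Read. Begins reading the scanned page aloud.",
        "stop": "Stop. Halts reading.",
        "scan": "Scan. Scans the page on the glass.",
    }

    class ControlPanel:
        def __init__(self):
            self.identify_next = False

        def press(self, key):
            if key == "nominator":
                self.identify_next = True   # describe, not execute, next key
                return "Nominator. Press any key to hear its name."
            if self.identify_next:
                self.identify_next = False
                return KEY_INFO.get(key, "Unknown key.")
            return "(performing the %s function)" % key

    panel = ControlPanel()
    print(panel.press("nominator"))   # enters identify mode
    print(panel.press("read"))        # announced, not executed
    print(panel.press("read"))        # now actually reads
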
With my music company, we did the same thing. All of the
engineers are musicians, many of them quite accomplished, because
there is really no other way to be sensitive to the nuances of
sound and the subtle interactions of feel and response in a
musical instrument. In my speech recognition company, Kurzweil
Applied Intelligence, our voice-activated products for doctors
have been designed by physicians, and our voice-activated
products for the hands-impaired have had significant involvement
by their handicapped users. There is really no way truly to
understand the needs and desires of your users without deeply
involving them in every stage of the invention process.
Now that we're talking about the group that is involved in
the creation of an invention, my third point is to understand the
dynamics of this group process. Inventing today is not a matter
of a single crazy inventor disappearing into his or her basement
and emerging years later with a breakthrough. Actually it's a
matter of a group of crazy inventors disappearing into their
basement. Inventing today is an interdisciplinary process. The
development of speech recognition, for example, requires
linguists, speech scientists, signal-processing experts,
psychoacousticians, circuit designers, programmers, and other
specialists who can work together and, perhaps most important,
understand each other's terminology.
The importance of this last point was first recognized by
Norbert Wiener, who wrote in 1948, the year I was born, in his
classic book Cybernetics: "Since Leibniz there has perhaps been
no man who has had a full command of all the intellectual
activity of his day.... There are fields of scientific work...
which have been explored from the different sides of pure
mathematics, statistics, electrical engineering, and
neurophysiology; in which every single notion receives a separate
and different name from each group, and in which important work
has been triplicated or quadruplicated, while still other
important work is delayed by the unavailability in one field of
results that may have already become classical in the next
field." From my own experience, this meshing of diverse
disciplines is perhaps the most crucial element in developing
interdisciplinary technology, which is becoming most of
technology.
So after assembling our group of experts and our group of
users, which hopefully are the same people, we throw out all of
the words that we all came in with and create our own new
terminology. By the way, this has advantages in terms of
proprietary technology protection, because, if anyone overhears
our conversations, they have no idea what we are talking about.
Then--and this is now my fourth point--to encourage thinking
out of the box, I'll assign a linguistics problem, not to the
linguists, but to the signal processing engineers, and a signal
processing problem to the linguists. This doesn't always work,
but it is possible in this way to achieve creative solutions to
problems that could hardly be attained in any other way.
They say that a wise man can learn more from a fool than the
other way around. So my fifth point is that you can learn more
from failure than from success. Success has a way of covering up
mistakes. When you're successful, you get the mistaken impression
that you must have done everything right. As it turns out, you
just did the right things right. You can, of course, turn your
failures into successes, not only by considering them growth
experiences, but by assessing in detail the lessons to be
learned, which you'll find is much easier to do when you're not
successful.
But more important, you should endeavor to turn your
successes into failures. In other words, look for the failure in
success. For example, in our apparently successful Gulf war of a
couple of years ago, many of our weapons didn't work. The much
vaunted Patriot missile, being a heat-seeking missile, was very
successful in blowing up the hot launchers of incoming Scuds.
Unfortunately, it was not the launchers that needed to be blown
up, but the warhead, which in most cases had already separated
from the launcher prior to the launcher's being destroyed. Now it
didn't matter that much a couple of years ago, because the Scuds
were so inaccurate. But if we fail to look for this failure
amidst the success, it will matter the next time.
Point six is to design for marketability. This goes beyond
identifying who the users will be and why they will want this new
technology. You need to understand what unique characteristics
your invention will have and why this is a well-leveraged fit for
your target application.
Akio Morita, the well-known Chairman of Sony, who has made
something of a career out of criticizing American business
practices, provides an instructive example. The transistor,
invented in the early 1950's at Bell Laboratories, certainly
represented a primary technological breakthrough and today fuels
a revolution that has transformed most industries. A transistor
has two properties: it amplifies electrical signals, and it's
small. So obviously its primary market is going to be hearing
aids. And that was in fact the prevailing view of the Bell
Laboratories scientists at the time.
Sony thought otherwise and became the first Japanese company
to license the patent, in 1953, with the idea of developing a
transistor-based radio. "Why bother with that?" they were asked.
After all, you'll still have a very large speaker, not to mention
all that furniture. But maybe you can replace the big speaker
with a little speaker. But with a little speaker all the people
gathered around the radio won't be able to hear it. But maybe it
is good enough to have one person use it at a time. But a radio
is too expensive to devote to just one person.
As the history is written, Sony applied a Gordian solution
to this knotty and circular thinking and brought the transistor
radio to market. Their motto became "one person, one radio."
Junior may not want to listen to the same music as grandpa.
The transistor radio was a hit, and the transistor was off
and running. And the Japanese consumer-electronics industry was
off and running as well. The marketing creativity involved in
rethinking the purpose of a radio in light of the new
technological capabilities provided by the transistor was at
least as important as the technology itself. It involved
considering several variables at the same time, and not just the
single issue of the transistor's size.
And finally here is a foolproof method to create your new
technology. After you've come up with the breakthrough concept
that will revolutionize some industry or other; you've written
the advertising brochure, which, as I said, should be your first
step; you've printed it up; you have some customers clamoring for
it; you've got your investors; you've assembled your experts, who
are also, of course, the potential users of your new technology;
you've thrown out all the common technical terminology and
devised your own; now here's how you make the whole thing work.
Sit down or lie down and then imagine, if it existed, what
would it look like? How would it work? Imagine yourself at a
conference four years from now, and you're explaining how you and
your team of experts accomplished your goal, how it works, and
how you solved problems that at the beginning seemed so
intractable. Let your thoughts wander. Indulge yourself in this
fantasy as you're falling asleep. When you wake up the next
morning, you'll know what to do. Or at least you'll think you do,
until you run into some roadblock, something your fantasy failed
to consider. But then just use the same procedure again. From my
own experience I can tell you this technique usually works. But,
if it doesn't, you probably won't realize it right away. After
all, no one said that entrepreneurship was free of risk.
So on that note I'd like to let our imaginations wander a
bit and consider the next decade and the next century. Let's
imagine that the future exists. What does it look like? How does
it work?
The first concept we need to consider is Moore's law.
Moore's law is the driving force behind a revolution so vast that
the entire computer revolution to date represents only a minor
ripple of its ultimate implications. Moore's law states that
computing speeds and densities double every eighteen months. In
other words, every eighteen months we can buy a computer that is
twice as fast and has twice as much memory for the same cost.
Remarkably, this law has held true for about a century, from the
mechanical card-based computing technology of
the 1890 census, to the relay-based computers of the 1940's, to
the vacuum-tube-based computers of the 1950's, to the
transistor-based machines of the 1960's, to all of the
generations of integrated circuits that we've seen over the past
twenty-five years.
If you put every calculator and computer for the past 100
years on a logarithmic chart, it makes an essentially straight
line (several straight lines, actually). Computer memory, for
example, is about 16,000 times more powerful today for the same
unit cost as it was about twenty years ago. Computer memory is
150 million times more powerful for the same unit cost than it
was in 1948, the year I was born. If the automobile industry had
made as much progress in the past forty-five years, a car today
would cost about a hundredth of a cent and would go faster than
the speed of light.
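As an illustration of that arithmetic, a minimal Python sketch
computes the doubling period implied by those memory numbers:

    import math

    def years_per_doubling(improvement_factor, years):
        """Years per doubling implied by an overall improvement factor."""
        return years / math.log2(improvement_factor)

    # Roughly 16,000-fold improvement over about twenty years:
    print(years_per_doubling(16000, 20))       # ~1.4 years, about 17 months

    # Roughly 150-million-fold improvement over the 45 years since 1948:
    print(years_per_doubling(150000000, 45))   # ~1.7 years, about 20 months

Both figures land close to the eighteen-month doubling that Moore's
law predicts.
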
Moore's law will continue unabated for many decades to come.
We have not even begun to explore the third dimension in chip
design. Chips today are flat, whereas our brain is organized in
three dimensions. We live in a three-dimensional world; why not
use the third dimension? Improvements in semiconductor materials,
including the development of superconducting circuits that do not
generate heat, will enable the development of chips, or I should
say cubes, with thousands of layers of circuitry, which, when
combined with far smaller component geometries, will improve
computing power by a factor of many millions. There are more than
enough new computing technologies being developed to assure a
continuation of Moore's law for a very long time.
The implications of this geometric trend can be understood
by recalling the legend of the inventor of chess and his patron,
the emperor of China. The emperor had so fallen in love with his
new game that he offered the inventor a reward of anything he
wanted in the kingdom. "Just one grain of rice on the first
square, your Majesty."
"Just one grain of rice?"
"Yes, your Majesty, just one grain of rice on the first
square, and two grains of rice on the second square, four on the
third square, and so on."
Well, the emperor immediately granted the inventor's
seemingly humble request. One version of the story has the
emperor going bankrupt because the doubling of grains of rice for
each square ultimately equaled eighteen million trillion grains
of rice. Another version has the inventor losing his head.
It's not yet clear which outcome we are headed for, but
there is one thing that we should take note of. It was fairly
uneventful as the emperor and the inventor went through the first
half of the chessboard. After thirty-two squares the emperor had
given the inventor about four billion grains of rice. That's a
reasonable quantity of rice--it's about one field's worth--and
the emperor did start to take notice. But the emperor could still
remain an emperor, and the inventor could still retain his head.
It was as they headed into the second half of the chessboard that
at least one of them got into trouble. So where do we stand now?
Well, there have been just about exactly thirty-two doublings of
performance since the first operating computers were built in the
1940's. So where we stand right now is that we've just finished
the first half of the chess board. And indeed people are starting
to take notice. As we head into the rest of the nineties and the
next century, we are heading into the second half of the
chessboard, and that is where things start to get interesting.
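The rice arithmetic behind the legend is easy to verify (a short
Python sketch, assuming one grain on the first square and a doubling
on each square thereafter):

    # Square k holds 2**(k - 1) grains, so the running total after k
    # squares is 2**k - 1.
    first_half = 2**32 - 1     # after thirty-two squares
    whole_board = 2**64 - 1    # after all sixty-four squares

    print("{:,}".format(first_half))    # 4,294,967,295 -- about four billion
    print("{:,}".format(whole_board))   # 18,446,744,073,709,551,615
                                        # -- about eighteen million trillion
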
Let's take a moment to examine a few of the things we're
likely to see as we go through the second half of the chessboard.
One of my companies, Kurzweil Applied Intelligence, is devoted to
speech recognition technology. So I'd like to start out by
sharing with you some scenarios that deal with this technology.
The state of the art today is that large vocabulary speech
recognition systems can recognize very large vocabularies of
50,000 words, which is pretty much anything you might want to
say. These systems are speaker-independent, which means they can
recognize anyone without having been trained on that person's
voice. The primary limitation is that you need to speak in what
is called discrete speech, that...is...with...brief...pauses...
between...words...like...this.
These systems are more popular than many people realize. For
example, if you have the misfortune of ending up in one of our
nation's emergency rooms, there is a ten percent chance that your
patient record will be created by the doctor dictating the report
directly to a large-vocabulary speech-recognition system that my
company created, called VoiceEM for Voice Emergency Medicine.
We already know how to recognize continuous speech, which is
the type of speech I am creating right now, but it requires
substantially more powerful personal computers. Well, Moore's law
will take care of that, and we expect to see accurate large-
vocabulary continuous-speech recognizers emerge in the next two
to three years. As we go into the next century, which is only
about seven years from now, we'll see very accurate continuous-
speech recognition integrated with a broad variety of other
artificial-intelligence technologies. Translating telephones, for
example, which combine large-vocabulary continuous-speech-
recognition technology with language-translation software and
speech synthesis, will be demonstrated later on in this decade.
Translating telephones will become a routine telephone service
during the first decade of the next century.
At Kurzweil Applied Intelligence we are also working on
developing listening machines for the deaf, which will convert
human speech into a visual display of text, essentially the
opposite of reading machines for the blind. So a deaf person
listening to this lecture could follow along with real-time
subtitles which could be built into a pair of eyeglasses. We
expect to see listening machines for the deaf introduced later on
in this decade. By 2010 we may all wish to use them, since these
real-time subtitles can include immediate translation into other
languages and other commentaries.
We'll also see speech recognition integrated with problem-
solving software to provide knowledge navigators, essentially
computerized personal assistants built into your personal
computer that will talk to you with two-way voice communication
and that will help you find information and solve problems. For
example, during the first decade of the next century, you might
ask your personal knowledge navigator to recommend the optimal
form of financing for a new marketing program. It would access
your company's on-line information systems to get details about
the program, access national financial markets using cellular
communication to on-line financial information services to obtain
the latest rate information on different financial instruments,
call your Vice President of Marketing personally to get her level
of confidence in the marketing projections, and then assemble the
information into a presentation. It would do this in seconds if
it weren't for the fact that it took a week and a half to get
through to the one human being involved.
By 2010 your standard personal computer will come in a
variety of sizes, from wristwatch size to large wall-sized
displays. Unrestricted speech recognition will be a primary input
modality. Your personal computer will perform a broad variety of
functions. It will be your wristwatch. It will be your telephone,
which will include high-resolution moving pictures. It will be
your radio and television, which will be high-definition and
interactive. It will also provide you with virtual books and
magazines, which will also be interactive, will include moving
pictures, and will have display qualities comparable to high-
quality paper books today.
On that note, let's talk a little bit about technology for
the disabled. Reading machines for the blind have certainly
benefited from Moore's law. I examined this issue recently with
regard to the Kurzweil Reading Machine. The current model, the
Reading Edge, has eighty times the speed and 128 times the memory
of the original model of seventeen years ago, with a comparable
improvement in overall performance. The Reading Edge is now one
twenty-sixth the price of the
original Model 1 as measured in constant dollars. If we thereby
regard the machine as now providing approximately eighty times
the performance for one twenty-sixth the price, that's an overall
improvement in price-performance of about 2,000 to 1. Now 2,000
is two to the eleventh power, which means we have doubled
price-performance eleven times. That's exactly what you would
predict from Moore's law in a seventeen-year period. And, of
course, Moore's law will continue to improve all aspects of
reading machine price and performance in the years ahead.
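The price-performance arithmetic can be spelled out in a few lines
of Python (an illustrative sketch using the figures just quoted):

    import math

    improvement = 80 * 26                 # 2,080 to 1, roughly 2,000 to 1
    doublings = math.log2(improvement)    # about eleven doublings
    print(improvement, round(doublings, 1))
    print(doublings * 1.5)                # ~16.5 years at one doubling per
                                          # eighteen months, close to seventeen
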
Just recently, two-dimensional scanning chips have emerged,
which can scan a full page of text with 300-spot-per-inch
resolution without any moving parts. These two-dimensional
scanning arrays, which have over five million pixels, are
prototypes and are therefore expensive. But within a few years
these chips will permit the development of pocket-sized scanners,
the size of a small camera, that can snap a full page instantly.
Thus, before the decade is out, a full print-to-speech reading
machine will fit in your pocket. You'll hold it over the page to
be scanned and snap a picture of the page. All of the electronics
and computation will be inside this small, camera-sized device.
You'll then listen to the text being read from a small speaker or
earphone. You will also be able to snap a picture and read a
poster on a wall or a street sign or a soup can or someone's ID
badge or an appliance LCD display and many other examples of
real-world text. This reading machine will cost less than a
thousand dollars and will ultimately come down to hundreds of
dollars.
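As a rough illustration (the page dimensions are assumed here, not
taken from any product announcement), the pixel count follows
directly from the 300-spot-per-inch resolution:

    dpi = 300
    # A typical 6.5-by-9-inch text area already needs over five million
    # spots; a full 8.5-by-11-inch page needs over eight million.
    print(int(6.5 * dpi) * int(9 * dpi))     # 5,265,000
    print(int(8.5 * dpi) * int(11 * dpi))    # 8,415,000
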
Algorithmic improvements will also provide capabilities to
describe non-textual material such as graphs and diagrams and
page layouts. These devices will also provide on-line access to
knowledge bases and libraries through the information
superhighway, which I will comment on further in a moment. By the
end of the first decade of the next century, the intelligence of
these devices will be sufficient to provide reasonable
descriptions of pictures and real-world scenes. These devices
will also be capable of translating from one language to another.
The scanning sensors of the future reading machine will
ultimately become very small and could be built into a pair of
eyeglasses. The advantage of doing this is that it would allow
the user to control the direction of scanning through motion of
the head in the same way that a sighted person does. Once these
devices can provide reasonably intelligent descriptions of
real-world scenes, they will evolve into navigation aids.
I will point out that access to the world of print has been
a more important issue than navigation. Braille, of course, is a
vitally important technology in that it provides access to the
world of literacy for both reading and writing. It does, however,
have the limitation that only a small percentage of books and
topical literature are available in this alternative medium.
Recorded material has the same limitation. Thus reading machines
have provided the opportunity to overcome a principal handicap
associated with the disability of blindness: access to ordinary
print. But having worked with many blind persons over the past
twenty years, I have come to realize that navigating within a
building, or around the world, is not a handicap for a blind
person who has been trained with advanced navigational skills.
Until a navigation device can provide a level of intelligence
sufficient to be truly helpful, the most useful navigational
technology will continue to be the modern lightweight cane. There
have already been electronic navigation devices developed, but
they have not yet proved useful. Unless such a device
incorporates a level of intelligence at least comparable to a
seeing-eye dog, it is not of much value.
General purpose artificial vision is now being developed for
robots and is in an early stage, although progress is rapidly
being made. Today, robotic factory inspectors can outperform
human inspectors in many visually demanding tasks. Vision has
lagged behind other developments in artificial intelligence because of
the enormous flows of data required to process visual information
intelligently. With the advent of massively parallel computing
and the continuing progress made through Moore's law, this
difficulty is gradually being overcome.
Such a combined reading machine and navigation aid will be an
assistant that will describe what is going on in the visible
world. The blind user could ask the device (verbally or using
appropriate manual commands) to elaborate on a description, or he
could ask it questions. These artificial visual sensors need not
look only forward; they may just as well look in all directions. And
they ultimately will have better visual acuity than human eyes.
Everyone--visually impaired or not--may want to use them.
Persons with other disabilities will benefit from the
continuing advance of computer technology as well. I mentioned
earlier the speech-to-text sensory aid for the deaf, which I
believe will be introduced within the next several years and will
become a popular device by the end of this decade. A principal
physical handicap is paraplegia, the loss of control over the
legs. The most common prosthetic aid for this disability is the
wheelchair, which has changed only in subtle ways over the past
two decades. It continues to suffer from its principal drawback,
the inability to negotiate doorways and stairs. Although federal law now requires most public buildings to accommodate wheelchair access, the reality is that access for persons in wheelchairs is still severely restricted. By the end of this decade we will see
the first generation of effective exoskeletal robotic devices,
called powered orthotic devices, which will restore the ability
of paraplegic (and in some cases quadriplegic) persons to walk
and climb stairs.
Overcoming the handicaps associated with disabilities is an
ideal application of artificial intelligence technology. In the
development of intelligent computers, the threshold that we are
now on is not the creation of cybernetic geniuses. That will come
later. Instead, we are today providing computers with narrowly
focused intelligent skills, such as the ability to make decisions
in such areas as finance and medicine and the ability to
recognize patterns such as printed letters, human speech, blood
cells, and land terrain maps. Most computers today are still
idiot savants, capable of processing enormous amounts of
information at very high speed and with great accuracy, but with
relatively little intelligence.
When one considers the enormous impact that these idiot
savants have had on society, the addition of even sharply focused
intelligence will be a formidable combination. It will be
particularly beneficial for the disabled population. A disabled
person is typically missing a specific skill or capability but is
otherwise a normally intelligent and capable human being. There
is a fortuitous matching of the narrowly focused intelligence of
today's intelligent machines with the narrowly focused deficit of
most disabled persons. Our primary strategy in developing
intelligent computer-based technology for sensory and physical
aids is for the focused intelligence of the machine to work in
close concert with the much more flexible intelligence of the
disabled person himself.
There are an estimated twenty million disabled Americans.
Many are not able to learn or work up to their capacity because
of technology that is not yet available or technology that is
available but not yet affordable or pervasive and because of
negative public attitudes toward disabled persons. As the reality
changes, the perceptions will also change, particularly as
formerly handicapped persons learn and work successfully
alongside their non-disabled peers. By the end of the first
decade of the next century, I believe that we will come to herald
the effective end of handicaps.
Another trend worth commenting on is the information
superhighway. We now realize that the information superhighway
will be here a lot sooner than we originally anticipated. We
originally thought that we would have to wire optical fiber into
every home and office. That massive new physical infrastructure
would have taken twenty years to put in place. We now realize
that we need only place optical fiber to within about one mile of
its final destination. A couple of other technologies that do not
require a physical infrastructure can then be used to carry the
information for that critical last mile. For example, existing
coaxial cable can also provide ten billion bit per second
point-to-point communication for short distances. There is also a new wireless communication technology using frequencies in the microwave
range or higher that can also provide very high bandwidth
communication, again only for short distances, but the last mile
is all we need.
It turns out that this last mile of communication represents
about ninety percent of the infrastructure that we originally had
contemplated. So eliminating this last mile of wiring means we
can put the information superhighway in place in about four years
instead of twenty. This next wave of communication technology
will be here much sooner than we thought. That will be another
major step in the ultimate realization of McLuhan's vision of the Global Village.
There are many other emerging trends we could talk about,
but, in the time I have remaining, I would like to touch on one
other scenario. This is a scenario that has not been extensively
discussed in the popular literature but is a development I am
convinced will occur within the lifetimes of most of the people
in this room. It is a development that I have spent time
researching and in fact am writing a book about.
As you are probably aware, we can simulate the functions of
human neurons in software. These computerized neural nets, as
they are called, have become increasingly popular, particularly
in pattern recognition systems. We use them, for example, in our
speech recognition systems. Today's simulated neurons are
somewhat simplified from the real thing. An actual neuron is a
complex computer, a hybrid analog-digital computer as it turns
out. But it is feasible to simulate the full complexity of human
neurons, and some of the more advanced neural nets now being
developed provide reasonably realistic simulations of true neuron
function.
A neural net, and this includes the human brain, uses a
radically different computational paradigm from the computers
we're used to. A typical computer does one thing at a time, but
does it very quickly. A neural net, particularly the human one,
is very slow, but every part of the net is computing
simultaneously. We have about 100 billion neurons, and each of
these neurons has an average of 1,000 connections to other
neurons. Each of these connections can perform computations
simultaneously, so that's about 100 trillion computations being
performed at the same time. There are many subtleties to neural
nets, but our computer-based neural net simulations have been
limited primarily by two factors: the number of neural
connections that can be simulated in real time and the capacity
of computer memories. Although human neurons are very slow, in
fact about a million times slower than electronic circuits, their
massive parallelism more than makes up for it. Although each
interneuronal connection is capable of performing only about 200
computations each second, with 100 trillion computations being
performed at the same time, that comes to about twenty million
billion calculations per second, give or take a couple of orders
of magnitude.
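
To make that arithmetic easy to check, here is a minimal sketch (in Python, purely as an editorial illustration of the figures just quoted; none of it comes from the talk itself):

# Back-of-the-envelope estimate of the brain's parallel computation,
# using only the figures quoted above.
neurons = 100e9                # about 100 billion neurons
connections_per_neuron = 1000  # average connections per neuron
ops_per_connection = 200       # computations per second per connection

total_connections = neurons * connections_per_neuron      # about 100 trillion
ops_per_second = total_connections * ops_per_connection   # about 2 x 10^16

print("%.0e connections" % total_connections)    # 1e+14
print("%.0e calculations/sec" % ops_per_second)  # 2e+16, i.e., twenty million billion
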
How does that compare to the state-of-the-art in human-
created technology? Specialized neural computers have been
developed that can simulate neurons directly in hardware. These
operate about a thousand times faster than neural networks
simulated in software on conventional PC's. One recent model
processes about two billion connections per second. That may seem
like a lot, but it is still about ten million times slower than the
human brain. Again we look to Moore's law, which projects that
our personal neural computers will match both the memory and the
computational ability of the human brain, twenty million billion
calculations per second, by around the year 2020.
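
As a rough check on that projection (again an editorial sketch, with the doubling period an assumption rather than anything stated in the talk), one can count the Moore's-law doublings separating a two-billion-connection-per-second neural computer from the figure above:

import math

machine = 2e9    # connections per second, current neural computer
brain = 2e16     # calculations per second, the estimate above

doublings = math.log2(brain / machine)   # about 23 doublings needed
print(round(doublings, 1))               # 23.3

# At one doubling every twelve to eighteen months, that is roughly
# 23 to 35 years from the early 1990's, a range that brackets the
# year-2020 projection.
for months in (12, 18):
    print(months, "months per doubling:", round(doublings * months / 12), "years")
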
Now matching the raw computing speed and memory capacity of
the human brain, even if implemented in massively parallel neural
nets, will not automatically result in human-level intelligence.
The architecture and organization of these resources are at least
as important as the capacity itself. There is, however, a source
of knowledge that we can tap to accelerate greatly our
understanding of how to design intelligence in a machine, and
that is the human brain itself. By probing the brain's circuits,
we can essentially copy, that is to say, reverse engineer, a
proven design, one that took its original designer several
billion years to develop.
Just as the Human Genome Project is scanning, recording, and analyzing the entire human genetic code to accelerate our understanding of the human biogenetic system, a similar effort to scan and record the neural organization of the human brain can help provide the templates of intelligence. As it
becomes clear that we are approaching the computational ability
to simulate the human brain--we're not there today, but we will
be there early in the next century--I believe that such an effort
will be initiated. Indeed, this effort has already begun.
For example, an artificial retina chip, created by a small company called Synaptics, is fundamentally a copy of the neural organization, implemented in silicon of course, not only of the human retina but of its visual processing layer as well.
High-speed, high-resolution magnetic resonance imaging (MRI)
scanners are already able to resolve individual somas (neuron
cell bodies) without disturbing the living tissue being scanned.
More powerful MRI's are being developed that will be capable of
scanning individual nerve fibers that are only ten microns in
diameter. Eventually we will be able automatically to scan the
presynaptic vesicles that are the site of human learning.
This suggests two scenarios. The first is to scan portions
of a brain to ascertain the architecture of interneuronal
connections in different regions. The exact position of each
nerve fiber is not as important as the overall pattern. With this
information we can design simulated neural nets that will operate
similarly. This process will be rather like peeling an onion as
each layer of human intelligence is revealed. That is essentially
what Synaptics has done. They copied the essential analog
algorithm of center-surround filtering found in the early layers
of mammalian neural image processing.
A more difficult but also ultimately feasible scenario will
be noninvasively to scan someone's brain to map the locations,
interconnections, and contents of the somas, axons, dendrites,
presynaptic vesicles, and other neural components. Its entire
organization could then be re-created on a neural computer of
sufficient capacity, including the contents of its memory.
We can peer inside someone's brain today with MRI scanners, whose resolution increases with each new generation. There are a number of technical challenges in accomplishing this, including achieving suitable resolution and bandwidth, avoiding vibration, and ensuring safety. For a variety of reasons it will be easier to scan the brain of someone recently deceased than of someone still living. It is easier to get someone deceased to sit still, for one thing; but noninvasively
scanning a living brain will ultimately become feasible as MRI
and other scanning technologies continue to improve in resolution
and speed.
In fact, the driving force behind the rapidly improving
capability of noninvasive scanning technologies such as MRI is
again Moore's law, because it requires massive computational
ability to build high resolution three-dimensional images from
the raw magnetic resonance patterns that an MRI scanner produces.
The increasing computational ability provided by Moore's law will
enable us to continue to improve the resolution and speed of
these noninvasive scanning technologies.
You might feel that I am veering off into the realm of
science fiction, so let me say a word about the nature of this
projection. If someone a hundred years ago were to have attempted
a prediction of this past century, he would not have been able to
predict most of the major technologies that have shaped it, such
as computers, Moore's law, radio, television, atomic energy,
lasers, bio-engineering--indeed most of electronics--just to
mention a few. And indeed all but a handful of futurists at the
time were unable to foresee any of these developments.
The century to come will also undoubtedly contain many such
breakthroughs that we would have difficulty envisioning or even
comprehending today. But the projection I am making now does not
contemplate any such breakthrough. It is a modest extrapolation
of current trends and is based on technologies and capabilities
that we can touch and feel today. We can't do it yet, but we can
describe right now how this capability can be achieved.
The ability to download your mind to your personal computer
will raise some interesting issues. I'll mention just a few.
There's the philosophical issue. When people are scanned and then re-created in a neural computer, people will wonder just who those people in the machine are.
The answer will depend on who you ask. If you ask the people
in the machine, they will strenuously claim to be the original
persons, having lived certain lives, having gone into a scanner
here, and then having awakened in the machine there. They'll say,
"Hey, this technology really works. You should give it a try." On
the other hand, the original people, who were scanned, will claim
that the people in the machine are impostors, people who just
appear to share their memories, histories, and personalities but
who are definitely different people.
There's the psychological issue. A machine intelligence that
has been derived from human intelligence will need a body. A
disembodied mind will quickly become depressed. Ironically, it
will take us longer to recreate our bodies than it will take us
to recreate our minds. We are making exponential progress in
providing the computational resources to simulate intelligence
with the linear passing of time. But we are only making linear
progress with the linear passing of time in robotic technology.
So it will take us longer to recreate the suppleness of our
bodies than the intricacies and subtleties of our minds.
There's the ethical issue. Will it be immoral, or even
illegal, to cause pain and suffering to your computer program?
Will it be illegal to turn your computer program off? Perhaps it
will be illegal to turn it off only if you have failed to make a
recent backup copy. Maybe they'll want to turn us off.
No one worries much about these issues today because our most advanced programs are comparable to the minds of insects.
But when our programs are of the same complexity and subtlety as
a human mind, which will be the case in a few decades, and when
our computer programs have, in fact, been derived from human
minds or even portions of human minds, this will become a
pressing issue. And, of course, there will be the usual line-up
of economic and political issues. We are likely to see the
Luddite issue, the concern over the negative impact of machines
on human employment, become of intense interest once again.
Before Copernicus, our speciecentricity--I made up that
word, by the way, in case you never heard it before--was embodied
in a view of the universe literally circling around us as a
testament to our unique and central status. Today our belief in
our own uniqueness is not a matter of celestial relationships,
but rather of our intelligence. Evolution is seen as a billion-
year drama leading inexorably to its grandest creation: human
intelligence. The specter of machine intelligence's competing
with that of its creator will once again threaten our view of who
we are.
We have now peered about 80 percent of the way through the chess board. We might wonder what happens at the end of the chess board. In the year 2040 we will reach the sixty-fourth square. In my view Moore's law will still be going strong. Computer circuits will by then be grown like crystals, with computing taking place at the molecular level.
By the year 2040, in accordance with Moore's law, your
state-of-the-art personal computer will be able to simulate a
society of 10,000 human brains, each of which would be operating
at a speed 10,000 times faster than a human brain. Or,
alternatively, it could implement a single mind with 10,000 times
the memory capacity of a human brain and 100 million times the
speed.
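
The two ways of stating that capacity are the same multiplication, as this short editorial sketch shows:

brains = 10000          # simulated human brains per personal computer
speedup_each = 10000    # each running 10,000 times faster than a human brain

# Pooled into a single mind, the same capacity gives the alternative figure:
print(brains * speedup_each)   # 100000000 -- "100 million times the speed"
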
What will the implications of this development be? Well,
unfortunately, I've run out of time, so you'll have to invite me
back. But I'll leave two thoughts with you, written by people who
did not have the benefit of a voice-activated word processor. Sun
Tzu, Chou dynasty philosopher and military strategist, wrote in
the fourth century BC: "Knowledge is power and permits the wise
to conquer without bloodshed and to accomplish deeds surpassing
all others." And Shakespeare wrote: "We know what we are, but
know not what we may be."
Thank you very much.



[PHOTO: Portrait. CAPTION: T. V. Cranmer]

EMERGING RESEARCH GOALS IN THE BLINDNESS FIELD
by T. V. Cranmer

From the Editor: Dr. Cranmer chairs the Research and
Development Committee of the National Federation of the Blind. He
is also an inventor and an expert in technology issues in the
blindness field. He was the first speaker on the late morning
panel which addressed the conference on November 4. Here is what
he had to say:

It is a distinct pleasure to be here. I have been attacked
by my computer, have been humbled by it as late as yesterday. I
purchased WordPerfect 6.0 a few days ago and installed it. (It
took sixteen megabytes on my hard disk.) I thought to myself,
I'll wait until I get back from the conference to look into this
new program; but yesterday morning, having a couple of hours to
while away, I couldn't resist, and I tried a new feature, the
grammar checker. That's where the humility comes in. I ran
Grammatik, and in the first paragraph it said that my sentence
constructions were awkward, that I used the incorrect tense (or
was it tenses?), that I was using the passive voice, and that the
material was not appropriate to the kind of talk I was making. I
wondered if I should come at all. So I sought solace in a usually
reliable place; I explained my humiliation to my wife. And in her
usual way she rallied to the occasion and said, "Well, you write
the way you talk." Nevertheless, I am going to plow forward and
read my comments, in which I have embedded some humor and whimsy.
I hope that they are not too well disguised.
What better thing to do on a Sunday afternoon than discover
a tide of change in the way humans communicate? This thought gave
little comfort when, not many Sundays ago, I went with my sister
to K-Mart to buy some simple necessities. Stopping near a
counter, I touched a shrink-wrapped package and asked, "What's
this?"
"Turn it around so I can see," Irma directed. I rotated the
package ninety degrees so she could get a good view. "That's a
bath set," she said, "with a toilet seat cover, tank cover, bath
mat, and matching towel and wash cloth."
"Is that what it says?" I asked, with some surprise at the
completeness of the identification of the package contents.
"No," she replied, "that is what the picture shows." [And I
thought: Aren't graphics just for computers?]
Nearby on the same counter there were other packages
containing just towels, towels and wash cloths, bath mats, and so
on--all identified by pictures alone.
These things were not on my shopping list, so we moved on to
the rack of bagged candies, where I made the only purchase of the
day--a mixed bag of butterscotch and mint flavored taffy kisses.
Returning to our car, I opened the candy and invited Irma to try
a piece and at the same time pick a mint taffy for me. She handed
me the kiss.
"Does it say 'mint'?" I asked.
"No; it just has a leaf on it," she said. "The butterscotch
has a cow on it."
That's when it hit me: the moment of truth! While I had been
chained by love to my desk for the past four decades, the rest of
the world had been accelerating a swing away from words to
pictures to convey information, away from text toward icons.
I learned a lot more during the ride back to the office.
Traffic lights, at least in my town, don't bear the words "stop,"
"caution," and "go," depending instead on the colors red, yellow,
and green. What about drivers who are color blind? The stop light
is on top; the go light is on the bottom.
How about traffic signs? The stop sign is a red octagon.
Mile markers are green vertically oriented rectangles with
rounded corners, and highway exit signs are green horizontal
rectangles. If that's not enough, diamond-shape signs signify
danger, and round ones stand for railway crossings.
Like any responsible investigator, I contacted our state
highway department to verify the accuracy of what I had been told
about traffic lights and signs and to ask why different colors
and shapes were used. The state official informed me that it was
so that "people won't have to read them if they aren't
interested."
Motorists: can you recall when the image of a horn first
adorned the horn button on your car? When did the picture of a
smoking cigarette first identify the cigarette lighter?
It was at about this point in my new awareness that I
flashed back to PC-Magazine for July, where the editor
proclaimed:
Coming Soon:
WordPerfect 6.0 is here. The question is, should you invest in
what might be one of the last significant DOS programs, or is it
time to switch to a Windows word processor? Hold on, PC-Magazine!
Is Windows our only choice? IBM just might offer an alternative
environment--not that it is guaranteed to provide any advantage
to the blind.
In either case I wonder what use word processors will have
in a world dominated by graphic images. I ponder, but won't pursue here, the question of whether our emphasis on pictorial information has a direct connection with the reported increase in illiteracy in our society.
For those of you who require more evidence to support the
notion that we may now be approaching the end point of a swing
away from words toward symbolic representations, I invite your
attention to developments in the musical instruments industry.
Compare the classical organ and its modern equivalent, the
electronic synthesizer. The pipe organ has stops clearly labeled
with names like flute, recorder flute, oboe, piccolo, violin, and
viola. The synthesizer may have similar voices labeled with
pictures of the instruments they are supposed to imitate but
without text labels. Furthermore, these electronic instruments
may have a dynamic visual display that includes pictures of the
instrument or instruments currently being synthesized.
It is just a matter of time until photos and illustrations dominate the print publishing industries. It is no longer
possible to find a book that is not illustrated. Many textbooks
for primary grades devote as much space to pictures and
illustrations as to text.
So the challenge to the blind and to the researchers in the
blindness field emerges: How are we to decode the visual
information that surrounds us and back-translate the messages
into words? How are we to extract the message in the graphic and
say it in words?
The interpretation of visual information may be more
involved than it first appears. To hark back to my earlier
reference to the picture of a leaf on the peppermint taffy, it
clearly is not enough to say "leaf" unless the blind observer is
also given the information that the leaf is on the wrapper of a
candy kiss. Only then will he make the connection and conclude
that the leaf is from a sprig of mint. For all the same reasons,
informing a blind computer user of current information on his
graphical user interface requires much more than speaking the
identity of the icons, dialog boxes, radio buttons, et cetera.
More often than not, it will also be necessary to know the
juxtaposition of certain depictions, what is currently selected
(active) and what is required of the blind operator to accept the
current display or to change conditions.
Microsoft Corporation has demonstrated initiative in solving
some of the problems resulting from their emphasis on the Windows
computer environment. Mr. Greg Lowney has shared information with
the NFB on his work at Microsoft and will surely discuss the
progress he has made at some point in this conference. IBM has
also addressed the issue of access by blind people to their
Program Manager graphical user interface. Blazie Engineering is
expected to make an important contribution to mastering the
Windows program with their product, Windows Master, now in
development. This ingenious approach allows the blind computer
user to control and otherwise operate a computer running Windows,
using only a Braille 'n Speak and the Windows Master software.
You may coax Mr. Blazie into divulging more information about
this unannounced product.
The success of the work of Microsoft, IBM, Blazie
Engineering, and others must depend on strategies for
interpreting visual information through speech and Braille. The
NFB Research and Development Committee and other members of the
Federation will play an important role in developing these
strategies.
One member, T. V. Raman, is a doctoral student at Cornell
University, a recent NFB scholarship winner, and a member of our
research discussion group on the Internet. His work can be
glimpsed through two brief quotes from a paper posted to his file
area on the computer science archive at Cornell:

I have developed AFL, a language for expressing
audio formatting rules. AFL is used to produce audio
renderings of electronic documents with heavy
mathematical content. The system surpasses current-day
reading machines in both the type of documents handled
and quality of audio renderings.
My research on audio formatting, though motivated
by the need to generate audio documents, is more
generally applicable to the areas of user interface and
information access in audio. [or Braille]

After taking his Ph.D. from Cornell in December, Raman hopes
to extend his work to include audio formatting of complex
graphical computer displays. Raman's papers can be retrieved via
the Internet using file transfer protocol to ftp.cs.cornell.edu
and changing to the subdirectory /pub/raman.
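
For readers who would rather script the retrieval than type the commands by hand, the following is a minimal sketch using Python's standard ftplib; the file name used here is hypothetical, since the directory listing is not given in the text:

from ftplib import FTP

ftp = FTP("ftp.cs.cornell.edu")   # the archive named above
ftp.login()                       # anonymous login
ftp.cwd("/pub/raman")             # the subdirectory given above

print(ftp.nlst())                 # list the papers available

# Retrieve one paper in binary mode; "afl.ps" is a hypothetical name.
with open("afl.ps", "wb") as out:
    ftp.retrbinary("RETR afl.ps", out.write)

ftp.quit()
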
Once the information in a graphical display has been
extracted and converted to language elements, some will want to
forego use of a speech synthesizer in favor of a Braille display.
The amount and kind of processing required to achieve a
satisfactory Braille format will depend on the size of the
Braille display available. We are presently limited to a single
line of refreshable Braille as the output of a computer, reading
machine, or consumer product. This is a serious limitation. It is
difficult, if not impossible, using a single line of Braille to
convey page layout and other format characteristics that
contribute to the reading process.
Inventors and researchers alike are now responding to the
need for a larger and affordable Braille display. While none have
as yet found the right combination of materials and technologies
to produce a full-page, refreshable display, there are numerous
signs of serious work and some progress worth noting.
In recent years several materials and mechanical devices
have competed for the role of the technology of choice to make
refreshable Braille. The piezoelectric ceramic bimorph has
emerged as the clear winner. This is the technology used by
Telesensory in the U.S., Tieman of Holland, Tiflotel in Italy,
and others to produce one-line displays. Affordable multi-line
piezoelectric displays have not yet appeared in the market or the
laboratory.
One company, Piezo Systems of Winchester, Massachusetts, has
been funded for the first phase of a project to build
piezoelectric displays with four and eight lines of forty
characters each. The number of lines in future displays may be
any multiple of four. The number of characters per line is
limited only by the maximum line length that would be acceptable
to consumers. Piezo Systems has thus far produced a proof-of-
concept model consisting of four columns and four rows. The company is
now awaiting funding for phase II of the project. A date of early
1996 is projected for producing the first practical prototype
with four lines of forty cells. While we can expect to see one-
line piezoelectric refreshable displays for many years to come,
it is clear that this technology will never be applicable to a
page-size Braille panel.
Braille computer screens and reading machines that rival a
Braille book will come when the tactile equivalent of the pixel
is designed. The monitor at the airport, the screen on a personal
computer, and a television set are examples of a remarkable
application of the phenomenon of phosphorescence. These displays
are manufactured by coating the inside of the front of a glass
tube with one or more phosphors which emit light when excited by
impinging electrons. An electron gun at the rear of the video
tube sweeps an electron beam back and forth over the phosphor
coating. By turning the beam on and off as it traverses the
entire area of the coated surface, some points are made to emit
light while others remain dark. It is thus possible to produce
patterns of light and dark points to form pictures. Each point of
light or dark is called a pixel--for picture element.
Sophisticated elaborations of this simple technology have led to
the development of all of the displays mentioned immediately
above, as well as high definition color television.
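
The tactile analogue is easy to picture: a Braille cell is itself a tiny grid of points that are either raised or flat, just as a screen is a grid of points that are either lit or dark. The short sketch below (an editorial illustration only, in Python) renders one such cell from a grid of ones and zeros:

# A Braille cell as a two-by-three grid of "tactile pixels."
# The letter r uses dots 1, 2, 3, and 5; "o" marks a raised dot.
letter_r = [
    [1, 0],   # dots 1 and 4
    [1, 1],   # dots 2 and 5
    [1, 0],   # dots 3 and 6
]

for row in letter_r:
    print("".join("o" if dot else "." for dot in row))
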
During the first U.S./Canada Conference on Technology, the
work of Dr. Toyo Tanaka with phase transition gels was cited as a
possible approach to designing a tactile equivalent of a pixel.
These gels exhibit the property of dimensional change in the
presence of visible or invisible light, temperature variation, or
an electric field. Theoretically it is possible to create a
smooth, flat panel containing cells filled with a gel material.
Braille patterns of raised dots could be produced on this panel
under computer control. Maximum exploitation of this concept
could result in a transducer capable of displaying high quality
Braille, raised lines, dots, textures, and other tactual
graphics. Professor Toyo Tanaka, Assistant Professor Steven Leeb
at MIT, and Dr. John Gardner at Oregon State University have
teamed up to explore this application.
These statements are from the MIT grant application to the
National Science Foundation:

Project Goals

The principal goal of this project is to explore
the practicality of fabricating actuators suitable for
use in an inexpensive Braille-type display that could
be used with a personal computer or other information-
processing tool. This project proposes to explore novel
actuation technologies based on polymer gels, which
could, in theory, be used to construct actuators that
provide direct linear motion quietly, swiftly, and with
high force. Small actuators based on polymer gels could
be used in a Braille display and in other miniature
machines.

One or more prototype Braille displays [will be made] based on these actuators, which will operate in concert with a standard personal computer to provide a tactile display of Braille text and graphics. The NFB R&D Committee welcomed the opportunity to
comment on the draft of the MIT proposal. We have endorsed the
application and will maintain communication with Gardner and Leeb
as this work proceeds.
For at least a decade there has been a steady murmur of
discontent among the blind as a number of consumer electronic
products came to market with displays and control systems that we
cannot readily use. Listening to this background of discontent,
one can occasionally pick out clear ideas of what should be done
to remedy the situation. Some say we need to pass a law to
mandate all industries to produce only products that can be used
by blind people. I might say that there is some agreement on this
point, but it ends at the conceptual level. No one has clearly
described the details of just how a universal display and control
system might be designed and implemented across the diverse field
of consumer electronic products. Some say Braille displays must
be installed; some insist upon synthetic speech on everything.
Whether Braille, large print, or speech is the medium, the
language required may be English, Spanish, or Swahili. Almost
everyone agrees that there should be no more touch panel
controls, in which each button is shown as a visible spot that
cannot be tactilely located. From out of the gray background, I
begin to hear the boys in the corporate board rooms murmuring
about the need to have all electronic equipment talk a common
language so that anything can communicate with anything else. It
is yet to be made clear why we might want our photocopier to talk
to our microwave oven.
Understanding the problem may be the first step in finding a
solution. The problem can be reduced to two elements: on one side, an ever-increasing diversity of products with visual displays; on the other, a finite group of handicapped individuals requiring different modes of sensory input.
Stated in other terms, consumer products are now designed to
display information that cannot be decoded by blind and visually
impaired individuals.
We may be ready to recognize that there cannot be a global
solution to the problem of consumer access to electronic
products. Several solutions must be identified. It is at this
point that NFB offers a two-dimensional approach to bring
consumer products within reach of all disability groups.
Manufacturers of consumer products will be asked to make two simple and inexpensive provisions to accommodate blind and visually impaired consumers. The first is to employ only controls
that can be tactually perceived. Examples include all
conventional mechanical knobs and buttons as well as touch-pads
and membrane switches with discernable tactile borders or
markers.
The second accommodation requires that all products
continually send complete information appearing on their visual
display through an infrared light transmitter. From a hardware
point of view, this is a very simple design modification. It will
require the addition of one light-emitting diode (LED) and the
simple circuitry for modulating the LED with text information
that corresponds to the information on the visual display. In
effect, these new products will have two equivalent displays: the LCD, gas plasma, or other visual device for the general public and the infrared transmitter to convey the same information to handicapped consumers. Development of a detailed
description of the protocol for transmitting information through
the infrared transmitter is one goal of an NFB research project.
It should be instantly apparent that blind people cannot see
and interpret information on a beam of infrared light any better
than they can read an LCD display. Something more is clearly
needed to make access happen. The blindness industry must respond
to the standard infrared display by offering a variety of
products to receive the encoded information on the infrared light
coming from the CD player, VCR, programmable telephone
answering machine, et cetera, and then decode the information and
present it to the blind individual in synthetic speech, Braille,
large print, or other communication medium of his choosing.
Demonstration of a consumer receiver system is another goal of
the NFB project.
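
Since the protocol is, as Dr. Cranmer says, still to be defined by the NFB research project, the following is offered only as a hypothetical sketch (in Python) of how simple each end of such a link could be: the appliance wraps whatever its visual display shows into a small frame for the infrared LED, and the receiver unwraps it for speech, Braille, or large print. Every detail here (the start byte, the length byte, the checksum) is invented for the example.

# Hypothetical framing for the proposed infrared display link.
START = 0x02   # invented start-of-frame marker

def encode_display(text):
    """Appliance side: wrap the current display text in a tiny frame."""
    payload = text.encode("ascii", errors="replace")
    checksum = sum(payload) % 256            # simple one-byte checksum
    return bytes([START, len(payload)]) + payload + bytes([checksum])

def decode_display(frame):
    """Receiver side: recover the text and verify the checksum."""
    if frame[0] != START:
        raise ValueError("bad frame")
    length = frame[1]
    payload = frame[2:2 + length]
    if sum(payload) % 256 != frame[2 + length]:
        raise ValueError("checksum mismatch")
    return payload.decode("ascii")

# Example: a VCR front panel continually broadcasting its display.
frame = encode_display("12:00 SET CLOCK")
print(decode_display(frame))   # -> 12:00 SET CLOCK
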
It is safe to assume that manufacturers and vendors will
respond to this new opportunity to serve blind and visually
impaired customers. We can expect to see the Braille 'n Speak,
Braille Lite, Braille Mate, and a host of other products equipped
with infrared receivers and associated software to translate
display information to Braille or speech. We can anticipate
speech synthesizers capable of functioning as the display reader.
We can also expect to see add-ons to retrofit these and similar
products so that they can be used to receive the display
information. For example, a simple plug-in module could convert a
laptop computer to a large print display for an office telephone
operator. Imaginative vendors will think of additional, novel,
and useful products based on this technology.
Emerging research goals in the field of blindness that
command our resources, energy, and leadership include the areas
described above. Success in these areas--translating visual presentations of information into spoken language and Braille; designing, building, and deploying a large, affordable, refreshable Braille display; and providing a consumer access port to electronic products--will together lower some of the barriers to
full participation by blind men and women in an increasingly
complex technological society. You can continue to count on the
NFB to be a supporter, a partner, and a leader in research as we
move into the future.
[PHOTO: Portrait. CAPTION: Ruperto Ponz Lazaro]

INTERNATIONAL COOPERATION IN THE FIELD OF TECHNOLOGY:
AN AGENDA FOR ACTION TOWARDS THE TWENTY-FIRST CENTURY
by Ruperto Ponz

From the Editor: Ruperto Ponz chairs the World Blind Union
Committee on Technology and is also one of the leaders of ONCE,
the Spanish organization of the blind. Mr. Ponz was invited to
attend the conference and address the group. He requested that
the English translation of his remarks be read for him since he
is not fluent in English. Ronald Meyer, who was recording the
conference, did so. Here is what Mr. Ponz said:

Let me first of all express my satisfaction at the opportunity that has been given to me to share with you my views
and goals as Chairman of the WBU Committee on Technology, and let
me also convey to you warmest greetings from the leadership of
the Spanish National Organization of the Blind (ONCE), in which I
have the responsibility of heading the Department of Social
Services. In the 1980's ONCE as an organization and Spain as a
country undertook the most dramatic and rapidly improving changes
in the field of services to the blind that have ever occurred in
such a short period of time anywhere in the world. Technology has
not been absent in this forward-looking transformation.
In the late fifties and early sixties, when I was attending
a residential school in Spain, the most sophisticated technology
we were using was the slate and stylus. A Braille writer looked
to us like an impossible dream. In the late sixties and early
seventies, when I was employed as a teacher in ONCE's educational
system, the Perkins Brailler and the cassette recorder were
already commonplace tools in our schools. In the early eighties,
when I became headmaster of one of those schools, the Optacon,
CCTV magnifying systems, and other optical appliances and low-
vision techniques were introduced in our system. Now in my
current field of responsibility, which also encompasses all
aspects of technical research and development, import of
equipment, and national production and distribution of low- and
high-technology devices, I am faced with the exciting challenge
of creating the necessary conditions to place within the reach of
blind people in my country the considerable advances that
electronics and computer science have brought about to benefit
blind and visually impaired people. And I can say now with pride
that we in Spain have moved from a position of total
technological dependency to one where we are beginning to be
technological contributors.
I have dwelled on the description of my technology-related
professional journey in order to give you an idea of where I
stand now. Although on a strictly personal level I remain
faithful to the stylus and slate of the fifties, on a
professional level I am committed to do everything possible to
assist in opening all doors of technological progress to my
fellow blind in Spain and all over the world.
I accepted the proposal to chair such an important committee
of the World Blind Union in order to share with others my
experience as a teacher and school administrator as well as a
member of the management team of a major organization in the
blindness field, and to put at the disposal of the international
community the human and other resources of ONCE. However, the
success of our endeavors will largely depend on the pooling of
imagination, critical judgments, and constructive suggestions of
people with knowledge and expertise from throughout the world. In
this venture the contribution from North America is essential.
Aware of the fact that international cooperation is more easily
said than done, I trust that we will succeed in taking steps forward in the years ahead toward the achievement of some of our
goals.
Let me finally outline the main activities we would like our
Committee to be involved with between now and 1996.

1. Information collection and dissemination.
Further efforts need to be made to improve the mechanisms
for collecting information about research and development and the
availability of products and especially to improve the means for
making existing projects and products known to all their
potential beneficiaries. The advantages of electronic networks
must be fully exploited.
Mechanisms for effective and objective evaluation must be
further improved. A worthy example in this regard is the
International Braille and Technology Center of the National
Federation of the Blind in the USA.
The basic concept of these North American technology
conferences deserves to be applied at a world level. We do need
to create a truly universal forum for researchers, manufacturers,
service providers, and consumers. We must critically analyze
current realities, identify areas of cooperation, and establish
future planning mechanisms. Tentatively such a conference could
take place in the second half of 1995 or the first half of 1996.

2. Influencing general research and development, manufacturing,
and standard-setting bodies.
Strategies must be implemented to bring to the attention of
all those concerned the unique needs and potential of blind and
visually impaired people. We are potential users of almost any
product or service, but things are increasingly designed and
produced as though everyone could see.

3. Further defining research and development priorities and
setting up implementation mechanisms.
During the previous WBU term of office the Research
Committee undertook the meaningful task of determining
priorities. This needs further expansion and revision, and it is
imperative to make a serious attempt to implement those
priorities in a concerted manner.
The World Blind Union lacks resources of its own to promote
significant projects by itself, but we can do a lot through the
mobilization of knowledge and expertise and the pooling of funds.
We could explore the feasibility of establishing a WBU Award for
outstanding contributions to the implementation of our
agreed priorities.

4. Involvement of consumers in the design and development
process.
We the organized consumers nationally and internationally
know best what we need and how it should be done. We must put
pressure on all concerned bodies to see to it that we are
consulted at the earliest possible stages. In this case the WBU
should be the representative voice of consumers internationally.

5. Technology transfer to the developing world.
If we do not put the appropriate solidarity mechanisms to work,
advances in technology run the risk of widening the gap between
developed and developing countries. The majority of blind people
live in developing countries, and in essence their basic needs
are similar to ours. If their quantitative importance could be
brought to bear in the design and production of certain items,
production would become far more cost-effective for us also.
It should be one of our priority aims to assist in making
technology available everywhere. Rapid advances often make useful
products obsolete in a very short period of time. Such appliances
could be more helpful in the hands of students and professionals
in the third world than in our storage rooms. This is only one
possible example of an opportunity to carry out a positive
technology transfer.
In this exciting program I trust that I will be able to
count on the unreserved cooperation and the immense treasure of
knowledge and expertise which exists in the field of technology
in the U.S. and Canada.
Thank you for your kind attention.


[PHOTO: Portrait. CAPTION: David Andrews]

OBSERVATIONS ON THE STATE OF TECHNOLOGY FOR THE BLIND
by David Andrews

From the Editor: David Andrews is the Director of the
International Braille and Technology Center for the Blind at the
National Center for the Blind. Today he is one of the most
knowledgeable people in the world about Braille production and
speech technology. Here are his remarks:

In the next few minutes I hope to give you some observations
and insights into technology, past and present, that I hope can
be used to prepare us for the future. As the Director of the
International Braille and Technology Center for the Blind, I have
the opportunity to look at and work with all of the computer-
related technology which is available for blind persons. This
unique opportunity gives me a broad view of what is happening
today with technology. I would like to take the next few minutes
to reflect on what I have observed, both good and bad, including
some of my pet peeves.
I got my first taste of technology in 1983 with the old
Kurzweil Reading Machine. Even though I had one in my office at
the New Jersey Library for the Blind and Handicapped, I found
myself using it rarely. This was in part because it was usually
broken. It was also in part because it was difficult to
understand and really didn't do that great a job. I sat down with
the machine and read a whole book about telecommunications. It of
course repeatedly mentioned AT&T, which the machine insisted upon
calling a7&7. The machine was a technical achievement, and my hat
goes off to you, Ray Kurzweil, and certainly it was necessary to
get us where we are today. Unfortunately, it was more of a
marketing achievement than a reading solution. Today's generation
of machines is very reliable and much more able.
I next moved on to a VersaBraille Classic. We just called it
the VersaBraille at the time. By 1985 I had added an Apple IIe
computer and Braille-Edit and an old Echo II. These tools were
necessary to realize much of the power of the VersaBraille. In
1987 I got my first MS-DOS computer, an old Zenith Z-159 XT, a
powerful machine at the time. In 1988 I moved up to a NEC 286
machine, and I added a 286 laptop and laser printer in 1990. In
1991 I bought a 33 MHz 486 Zeos computer, and I later bought a
smaller 486/sx to use to feed mail and messages to computer
bulletin boards in the Baltimore area. Along the way I have
bought, traded, and sold various pieces of access technology--I
sold my Optacon to buy my first computer, and I sold my
VersaBraille to buy my second. I have also bought, sold, and used
various speech synthesizers, owning as many as five at one time.
All of this is a long way of saying that what is best in
this field is competition. This is true both in the general
computer market and the access technology field. In the early
1980's you could count the number of high-tech devices on your
two hands and have fingers left over. Now in the International
Braille and Technology Center for the Blind we have twenty
Braille embossers, nine Braille translation programs, over
twenty-five speech synthesizers, twenty screen review programs,
five stand-alone reading machines, nine computer-based reading
systems, eight kinds of refreshable Braille displays, two Braille
laptop computers, seven portable electronic note takers, three
kinds of printers for creating Braille and print on the same
page, two devices which allow a deaf-blind person to use a
telephone, and a wide variety of miscellaneous software and
hardware, all designed for blind and deaf-blind people. It is
truly amazing when you consider that most of this development has
happened in the last five years or less. In less than three years
the International Braille and Technology Center for the Blind has
filled a 3,000-plus square foot room with devices and has had to
move to a space over twice as large. Dr. Jernigan, our Finance
Chairman, may well hope that the rate of acquisition slows down a
little so that we can stay in the new premises for a longer time.
A good example of competition and the way in which it has
improved things is in the area of stand-alone reading machines.
The first machines (financed, incidentally, through the National
Federation of the Blind) cost over 50,000 dollars and came on the
market some fifteen years ago. I am sure that Dr. Jernigan and
Ray Kurzweil could tell us some war stories about those days.
Kurzweil's current model is priced at about ten percent of the
cost of the original and is smaller and better. I have said to
people in the past that Kurzweil Computer Products made the
reading machine market, and Arkenstone made it competitive.
The other thing that has helped reading products immensely,
but isn't as available in other access technology areas, is
piggybacking on commercial developments. Optical character
recognition products, scanners, and OCR software are now widely
available and used. This has encouraged a number of companies to
develop products in these areas. Arkenstone, Kurzweil, and others
have benefited from this interest and effort. Unfortunately for
blind people, Braille printers, Braille translation software,
screen review programs, and speech synthesizers aren't general
market items. I would expect that this could change in the case
of speech synthesizers, but not for at least five years. I
believe that ultimately most computers will operate in part by
recognizing the operator's voice and responding to his or her
commands. If you are talking to your computer, you won't
necessarily be at the keyboard and won't be looking at it. Thus,
speech synthesis and voice prompting are natural outgrowths of
speech recognition. At that point cheap and improved speech synthesis will become possible. Until then our numbers are too small to
promote much research and development in this area. For all
intents and purposes, there have not been any major improvements
in speech for approximately ten years. We have seen incremental
improvements and a number of new products and, in a couple of
cases, lower prices; but there has not been a major improvement
in speech synthesis in some time.
The price of the DEC-Talk has dropped dramatically, and it
is increasingly used. However, the danger is that, because it is
pretty good and more affordable, we will accept it as the norm
and depend on it. Its widespread availability and acceptance
don't promote new development. Likewise, reliance on the SSI 263
chip, used by Artic Technologies, Aicom's Accent, the Braille 'n
Speak, and others, doesn't promote new products. It offers
relatively low cost, good performance, and acceptable speech
quality; and people know how to write for the chip; but it really
isn't that great. We are just used to it. In my opinion the best
thing that could happen to the speech synthesis field is that the
SSI 263 chip would go away. That would force us to develop
alternatives.
Another positive development is the increased involvement of
blind persons in the access technology field. There are quite a
number of very talented blind programmers out there, and a number
of important companies are owned and run by blind persons. Caryn
Navy of Raised Dot Computing and Noel Runyan of Personal Data
Systems, both of whom are here, are notable examples, and there
are others. I wasn't trying to leave you out, Ted, Larry, and the
others. There just isn't time for everyone.
There still aren't enough of us at the top, though. Look
around the room and observe how many of you are sighted and how
many are blind. Further, some of the big companies (TeleSensory,
HumanWare, and Kurzweil spring to mind) have few if any blind
sales reps. You might take note of Arkenstone, which has in large
part made its fortune thanks to the efforts of numbers of
locally-based entrepreneurs, many of whom are blind.
In a few years technology has become very important to many
of us. While it won't and can't replace basic skills like cane
travel and Braille, some of us couldn't do our jobs without it.
Now to my pet peeves: In working with all this technology, I
think I have a unique perspective on what could be better
overall. With the passage of the Americans with Disabilities Act,
access to information has been placed in a whole new light. It is
now becoming much more commonplace to get Braille agendas at
meetings or menus in restaurants. It has become easier to get a
Braille menu, in some cases, than to get a Braille manual out of
some of you. Let me give you a couple of examples. We purchased
the $15,000 David Braille computer from Baum U.S.A. in the summer
of 1992 and did not get Braille manuals until a year later. We
did not get Braille manuals for the DMFM/80, the $25,000
refreshable Braille display we bought from Baum, until late
October of 1993. We bought the DMFM/80 at the same time we bought
the David. This is inexcusable for completely Braille-oriented
products which will be used by blind people only. Baum U.S.A.
said that it was making changes and didn't want to publish the
manual too soon. If they had that many changes, perhaps the
product was released too soon.
I am not trying to pick on Baum. There are other offenders.
I finally printed my own Braille manuals for the Braillex IB80
and Notex 40 from Papenmeier of Germany. I was unable to get
manuals from two different companies--Adhoc Reading Systems and
ATR Computer. I am still waiting for Braille for the $16,500
Braillex 2D refreshable Braille display.
While I got Braille with the Alva Braille Carrier (a $9,000
Braille note taker/display sold by HumanWare), it was
unformatted, unburst, unbound, and all in computer Braille.
American Thermoform did the same thing with the manual for the
Braillo Comet, a $3,795 Braille embosser. On the low end, the
Braille manual that came with the Porta-Thiel (an $1,895 Braille
embosser, which is sold by Blazie Engineering, among others) was
atrocious. It also was unformatted and in Computer Braille.
Further, those of you who send out Braille need to be more
conscientious at times. For years TeleSensory has distributed
Braille spec sheets on its VersaPoint embosser. These sheets are
usually in Computer Braille. All people see is that it isn't in
Grade 2 Braille. You are only hurting yourselves by not
translating and proofreading your documents. Speaking of
TeleSensory and Braille, I just got a Braille document from them.
It concerns the Everest printer, which they sold in the past, and
the problems many have had with feeding the paper properly. The
paper-handling notice was in Braille, but the backs of all the
pages, which were printed in interpoint Braille, had words
missing from the right side of each line. It made for difficult
reading, as you can imagine. While the formatting and translation
were fine, it is obvious that none of the pages was proofed by a
Braille reader. And TeleSensory is not the only offender in this
area. They just come to mind as the last one to cross my desk.
Most of the screen review vendors do not offer Braille, not
even a reference card. Omnichron with Flipper and KANSYS, Inc.,
with PROVOX have traditionally offered Braille manuals upon
request. IBM had a nice, extra-cost hardcover manual with Screen
Reader for DOS but took over a year to come up with a Braille
reference card for Screen Reader/2 for OS/2. Many of you do not
label your disks or cassette tapes in Braille either.
So that I won't be perceived as entirely negative, I will
say that there is some good Braille out there. TeleSensory,
Enabling Technologies, Kurzweil, and Arkenstone, among others,
have all traditionally offered good Braille manuals. Enabling
Technologies is noteworthy in that some of their manuals have
contained servicing instructions. They are also willing to send
some kinds of parts to individual blind users to install
themselves. It is nice to be treated like the adults we are. You
are in the business of providing hardware and software products
to blind people. Some of us read Braille; some of us like tape;
some of us prefer disk-based documentation; and some of us would
rather have large print. Each of these alternatives is what some of us need, and this won't change. You need to consider these
alternate formats as a part of the cost of doing business with
us. It isn't always enough just to put the manual on disk and
leave it to us. You owe us more than that, particularly with
expensive Braille-oriented products.
Before we leave manuals, I would like to touch upon the
writing itself. It looks as if some of you don't own a spelling
checker or don't have access to a good human editor. If this part
of your products is so sloppy, it makes me wonder about the
programming or inner workings. I have also come to hate most
European manuals. They tend to rely on many figures, tables, and
drawings--and the translation is often terrible. If you are going
to import a product to the U.S., you need to take the time to
produce a good manual that Americans can digest. We aren't
stupid--but the style, usage, and conventions are different here.
My next cross to bear is difficult installation programs.
This is particularly a problem with screen review programs and to
a certain extent may be unsolvable. If the user doesn't have a
working synthesizer, it can be difficult to get speech up for the
first time. However, there are things you can do.
Make initial installation instructions and warnings
accessible in a variety of formats so everyone can read them. I
get programs all the time that have the instructions on disk and
in print. If I don't yet have speech up, I am already stuck until
I can get a human reader. Don't bury important warnings. SlimWare
Window Bridge did this with warnings about the memory manager
QEMM-386. I didn't figure it out until I had lost one of my two
possible installations from their copy-protected installation
disk. This copy-protection is another bone of contention with me.
Virtually everyone in the general software market, as well as the
access field, has dropped copy protection because of the problems
it causes users. About the only people using it anymore are game
makers, who generally sell their products for less than $50 a
pop. If Synthavoice was offering SlimWare Window Bridge at a
minimal cost, the copy-protection might make sense, but at $695
it is one of the more expensive Microsoft-Windows-access
products.
The Window Bridge installation does do one good thing: it can automatically identify and configure itself for a wide variety of synthesizers, simplifying the process. Others are
starting to identify more synthesizers automatically, but Window
Bridge initially made big progress in this area.
The Thiel Bax-10, a high-speed interpoint Braille printer
that cost over $80,000 at the time we bought it, comes with a
software-based setup program that is virtually impossible to use
with speech. It seems that they could do better.
Some of you could engage in more responsible marketing. I
realize that the competition at times is fierce, but you will
ultimately not do yourselves any good by misportraying the
abilities of your products. As I said earlier, technology, as
important and useful as it is, isn't a substitute for good basic
skills. We have seen both the BrailleMate and the Mountbatten
Brailler portrayed as solutions to our Braille literacy problems.
While these devices and others may be aids to literacy, they will
not magically make a blind child know Braille. They should also
not be substituted for using the slate and stylus.
A marketing ploy that seems unethical to me is the practice
of pre-announcing new products. This is telling people that
something bigger and better is just around the corner. It is a
problem for two reasons. First, it has the effect of freezing the
market, and the competition suffers. Second, some people
perpetually wait for the bigger and better thing. You and they together are denying those users the benefit of potentially useful technology while they wait for that perfect solution that XYZ, Inc., just announced.
It is understandable that some pre-announcement is
necessary. It would seem to me that one to three months, maybe
four months, is reasonable. But we have seen wait times from six
months to a year or more from companies like Artic Technologies,
Blazie Engineering, and Index, among others. These periods seem
unduly long. As a consumer I don't want to buy product X on
Monday and find that the company started selling Y on Tuesday and
that Y is much better. On the other hand, if I am waiting around
for Y for a year or more, then I didn't get the use of X for all
that time. There comes a time when we all must make the
technology plunge. If it is good and appropriate technology for
us, it will still be useful even if it isn't the latest and
greatest. Companies might offer trade-ins or upgrades, or suspend
sales of a given device, prior to a new one's coming on the
market. Ultimately all of you will be better served by attending
to the needs of your customers, not by bad-mouthing the
competition or trying to take their business away by pre-
announcing new products and freezing the market.
Finally, some of you exaggerate the specifications and
benefits of your products. Fudged specs are most noticeable in
the Braille embosser segment. You might rightfully point out that
you are not fudging. You are just measuring noise or printing
speed differently from me. While this could be true, I think some
of you could be more practical and realistic in your
measurements. One example is the Everest printer. Index and
TeleSensory have said that it prints at 100 characters per
second. While I have not measured this scientifically, this
appears to me and others to be the measure for printing one sheet
of paper. It doesn't take into account the time it takes to
change pages on this sheet-feeder type embosser. Most of my
documents, while not hundreds of pages, are longer than one page.
A realistic measure would account for a multipage document.
One of two things needs to happen. The first is that all of
you agree on how these things should be measured and described.
The second solution is that someone (those of us at the
International Braille and Technology Center for the Blind, for
instance) will measure them in the way we see fit and tell the
world. We may do this anyway!
I hope that you accept my remarks in the spirit in which
they are offered. While there are problems in this field, there
is also much good to be admired and noted. Your energy,
dedication, and commitment are outstanding. Most of us, if we
were in it for the money, would have taken the advice offered to Dustin Hoffman in The Graduate and gone into plastics or something. Most of you vendors are doing this because it is what you want to do. I hope that we can all work together
so that you can make an honest living and offer blind people
better and cheaper technology at the same time. While not easy,
it is possible.



=================================================================
Mohymen Saddeek, President of Technology for Independence,
Inc., was the final speaker on the morning panel. "Technology for
Independence" was his title. A summary of his remarks appears
elsewhere in this issue.
=================================================================
[PHOTO: Portrait. CAPTION: Tony Schenk]

PRIDE AND PROFIT:
OBSERVATIONS OF A FREE MARKETEER
by Tony Schenk

From the Editor: Tony Schenk is the President of Enabling
Technologies Company. Mr. Schenk was the first speaker on the
Thursday afternoon panel. Here are his remarks:

As I thought about how to contribute to a dialogue which
brings together such a broad range of consumers from our
marketplace, I couldn't help recalling hotel magnate Conrad
Hilton's memorable moment on NBC's "Tonight Show." Asked by
Johnny Carson if he had a message for his customer base which
could be summarized in just a few words, Hilton shot back, "Yes!
Remember when you take a shower in my hotels, the curtain goes
inside the tub, not outside."
Now that's what I call getting your point across. Of course,
Mr. Hilton could afford to be brief because people already knew
what he stood for, since most of them had probably been in at
least one of his hotels at one time or another. Fortunately we
have been given a bit more time for our remarks than guests on
the "Tonight Show" typically receive, but otherwise my situation
is not so different from that of Conrad Hilton. You already know
me and the people who work with me through the products we
manufacture, deliver and support. Your impression of us has
already been largely formed by your experience with what we
offer, and there is nothing I can say here today which would add
very much to, or take very much away from, that impression. This
is exactly as it should be.
Throughout my eleven-year involvement with Enabling
Technologies, first as a developer of software and hardware and
later as president of the company, I have been acutely aware that
what we say matters a good deal less to the people we serve than
what we do or don't put into the boxes we pack and ship every
day. Nevertheless, there are a few observations I want to share
with this select group about some of the ways in which access
technology is bought and sold. Some strange things occasionally
happen in this wonderfully topsy-turvy, dynamic field of human
endeavor. One or two of these could perhaps affect what you pay
for access technology and what kind of choices you may have.
My first observation is that only the kind of free market we
enjoy together could have produced the stunning array of choices
which currently exist for buyers of access technology. It is
amazing to me how many companies and how many products have
managed to survive and even prosper in a marketplace which is
considered too small for notice by the giants in the technology
industry. I hope nothing ever happens to restrict this free
market situation, and I believe that any such development would
immediately begin to diminish the variety of products and
services available. But I digress. The wide range of choices
available to the consumer goes far beyond the matter of which
device to buy. And some of these decisions impose difficult
choices on us as a manufacturer.
There are a lot of ways of describing my job and the job of my vice president, B. T. Kimbrough, but most of it comes down to
highly refined listening in order to make what we are going to do
next responsive to what the marketplace tells us it wants. I say
that our listening is a refined skill, because we have to do much
more than simply hear what people say on the telephone, in
national conferences, and at regional meetings. In order to truly
receive the message from the marketplace, we have to hear what
people don't say but imply in their buying decisions; we must ask
questions and pay careful attention to the significance of the
questions people ask us. And beyond that we have to be alert when
using our own products because this is our best chance to put
ourselves in the place of the user when we design the next
generation of product.
Much of the message about what the marketplace wants in a
Braille printer, which is our chosen field of focus, is so loudly
and universally expressed that I could recite it in my sleep, and
according to my wife I occasionally do. Consumers have come to
expect subsequent generations of product to contain significant
advancements over earlier models. Everyone wants to see the next
product cheaper, faster, lighter, more portable, quieter, more
flexible, and easier to operate. While at least two of these
characteristics, simplicity and flexibility, are somewhat
mutually exclusive, I dare say that any new product which does
not incorporate at least some of the issues of lower cost,
greater speed, less noise or greater versatility, will have very
little chance of success.
But step below this layer of certainty, and you start
hearing an ever-widening stream of differing and conflicting
priorities as you penetrate all the many submarkets which we call
a customer base. Many of our consumers want a Braille printer to
make Grade Two Braille without the intervention of a computer;
some want Braille and print on the same page, while others will
only consider the product if it can be leased or rented.
Regardless of its capabilities, some government customers will
buy it only if it can be obtained from a local dealer. (I will
have more to say about this trend later.) Still others will only
buy the product if they are given a sizable discount. (Again,
more of this later.)
Some state purchasing authorities disregard all product
characteristics except speed and price, apparently believing that
this will at least give the appearance of obtaining the best
product at the lowest cost. These purchasers are not end users,
but they control the money. Although the Braille readers who will
be using the machines are usually quite specific about the
product or at least the features they want the state to obtain
for them, the purchasing authorities are usually quite simplistic
in the specifications they set. As a result, users sometimes do
not get the device they prefer, and manufacturers are encouraged
to place undue emphasis on speed because this is the only major
characteristic specified in too many bids.
Purchasing agents who represent government and private
industry are among our most frequent and most challenging
customers. Anxious to make the transaction as simple and brief as
possible and insulate themselves from any possible later contact
with the Braille reading end user, these buyers sometimes insist
on involving a third party, a tactic which can have wasteful
consequences. Typically this third party is a local computer
dealer who has never handled an access technology device and has
no interest now except the making of a quick buck. Theoretically
this dealer is supposed to install the device, follow up with the
user, and deal with any small problems which might arise in
combining several separate products into a functional system. But
the dealers chosen for this role are not to be confused with the
reputable and capable dealers of access technology who can and do
add value to the transaction by providing installation and
support when needed. The inexperienced local computer dealers of
whom I speak add nothing to the transaction except the cost of an
extra middle man. The manufacturer must give the dealer his piece
and then absorb the extra cost of whatever support might be
needed, occasionally coping with dealers who try to look good by
making a sage-sounding diagnosis of a problem which could be
dealt with in seconds by someone knowledgeable.
These are modest but typical examples of what we might call
restrictions of convenience, which an informed marketplace should
be able to shrug off with little difficulty. Now let's speak of a
different, and a much more dangerous type of restriction. I'm not
sure what to call it; institutional power play describes it
pretty well. This twisted transaction usually concerns one or
another of the small technology display and training centers
which are happily springing up all over the country. These
technology centers generally have a mission to acquaint potential
users with expensive devices and give them a chance to select a
preferred one before making a sizable investment. Many of these
centers, including our hosts at the International Braille and
Technology Center for the Blind, carefully plan and fund their
acquisition of a complete, or at least a representative group of
competing devices designed for speech, Braille, and large print
users.
But a few such centers have taken a different approach. In
these institutions devices are not bought for the technology
center, they are donated. And the requests for contributions
usually go something like this: "We have no money, but we will
recommend your product if you give us one. If you don't, your
competitors will, and we will recommend theirs instead." We like
to have our products tested and reviewed and compared with other
machines, and we politely decline this opportunity to buy a cheap
and meaningless recommendation. But I have to wonder how many
users are getting a distorted view of what's available from a few
so-called regional or local technology centers who didn't raise
the money to do it right or didn't spend it on technology in any
case.
Again I emphasize that the International Braille and
Technology Center for the Blind pays for its demonstration units
in hard cash, as do many other reputable centers across the
country. Oh, they may ask if a discount is offered, but there is
no hint that a positive or negative recommendation hangs on the
answer, or that a purchase will not be made unless a specific
price is met. Incidentally, like many of our colleagues, we are
glad to offer substantial discounts to technology centers.
Sometimes they offer the consumer the best chance to make an
informed buying decision. But when it comes to a required
donation in exchange for a bogus recommendation, we can only say,
as many Americans have said before us: Billions for defense; but
not a cent for tribute. And my last comment on this subject is
that I have no intention of naming the institutions or people who
have approached us in the way I have just described. It is my
belief that simply exposing this regressive tactic in this
setting will generate enough negative reaction to see that it is
not repeated.
I do not want to leave with you the impression that we find
our involvement in this industry a negative experience. Quite the
contrary, the infinitely rich interaction between the people I
hire to design and build good tools and the people who take them
and make them mean something is extremely satisfying. And that
reminds me to comment on something I saw in a recent issue of the
Braille Monitor. We do not consider ourselves responsible for or
claim credit for creating a welcome rise in Braille literacy, any
more than a builder of good hammers deserves credit for a real
estate boom. We consider ourselves first and last to be builders
of precision tools, which do not so much empower people as give them one more option to combine with the many others blind people can use to empower themselves. We are gratified by the rise in the flow of Braille in this country, in Canada, and in many other
parts of the world. We believe in the future of Braille so
unreservedly that we are prepared to stake our working lives on
it. We are proud to say that we have found Braille to increase
the productivity of each of our customers, and, with hard work and dedication by our engineering, production, marketing, and customer support staff, Braille has produced a modest profit for
our investors. If our products are reliable, practical and
productive, I believe we have nothing to apologize for in terms
of that profit. If our products fail our customers, no apology
would be sufficient anyway.
And that brings me to my closing message, which is about as
close as I intend to come to the style of the Conrad Hilton
comment I mentioned at the beginning. When asked if they have a
message for all their consumers which can be summarized in a
minute or two, Tony Schenk and B. T. Kimbrough eagerly respond,
"Read the manual. Please, before you call or decide that it isn't
working right, at least look at the major headings. We have
shortened the manual for our newest products, so it won't take
very long. It's in Braille, it's in print, it's on disk. After
you've read it, call us if you need to. We thrive on the demands
of our customers. So go ahead! Push us and challenge us. All you
will do is bring out our best work, and perhaps we will wind up
doing the same thing for you."


[PHOTO: Portrait. CAPTION: James Morrell]

LISTENING FOR EFFECTIVENESS
by James Morrell

From the Editor: James Morrell has recently become the
President of Telesensory Corporation. This is what he said:

I want to thank Dr. Jernigan and the National Federation of
the Blind for inviting me to this conference. When the invitation
was first extended, my reaction was that there would be little I
could contribute to such a meeting. I am not an engineer, and I
have spent the greater part of my forty plus years in business in
non-technical assignments for non-technical companies. My
experience in this field is limited to several years as a member
of TeleSensory's Board of Directors and the last ten months as
its President and CEO. I am quite literally the new kid on a new
block.
My career has been spent running business operations ranging from small start-up companies to a publicly owned corporation registered on the New York Stock Exchange, a company with annual sales of one and a half billion dollars, 55,000 employees, and a dozen operating divisions, named several times as one of the 100 best places to be employed. From 1986 to
1991 I operated my own consulting company, lectured, wrote, and
joined the boards of directors of half a dozen organizations, and spent a lot of time working for my alma mater in institutional development activities (read this as raising money, attempting to vitalize alumni activities, and cultivating foundation and corporation support for the college and its programs).
I tell you this about myself only to make the point that in my experience I became expert in marketing and organizational planning and developed a reputation as a manager who could lead people and get things done, one who was particularly devoted to the theory that business success stems directly from an understanding of customer needs and wants. In addition, I found that organizations of all sizes were more successful, and more rewarding in terms of the personal satisfaction of their employees, when they developed and stressed customer satisfaction as the central philosophy and orientation of management.
This has been my message to the employees of TeleSensory.
So, on reflection, I came to the conclusion that I might be able
to make a small contribution to this group--not by contributing ideas and concepts about specific and particular technologies that may benefit individuals with vision loss seeking technological assistance and products, but rather by describing the approach that I believe will yield the greatest number of successful outcomes and contributions from the companies that supply technology to this group of consumers.
Therefore, while at this conference, I would like to: (1)
Tell you how I believe supplier companies like TSC should
approach the decision of what developmental projects they will
do; (2) tell you some of the things TSC is planning to do and
get your response about the desirability of these actions; and
(3) most important, listen to what is discussed at this meeting
and learn from that discussion how TSC should proceed in the
future.
In short, I am seeking answers to the following questions:
1. What is TSC doing that it should not be doing?
2. What is TSC doing that it should continue to do?
3. What is TSC not doing that it should be doing?
For a company to define a rational and effective product
line, or group of products, it must do two things. The first step
is to analyze where it stands now, and to do so with a great deal of candor and honesty. We must look at our current products in terms
of customer acceptance, competitive challenges, profitable return
to the corporation, technological excellence and reliability,
probable future viability, and pride--pride of the company and
its people in the work of producing and marketing the product. Of
this list, all of which are important, customer acceptance and
corporate pride are far and away the most important factors.
Determining customer satisfaction and acceptance of current products is neither an easy nor a simple thing to do. There is no single measurement of customer acceptance that gives one a total answer. Rather, customer acceptance is the synthesis of a number of interrelated measurements, all taken in an objective manner--a single conclusion that blends the measurements together. We are trying to develop such a system at
TeleSensory. We are measuring customer complaints and
compliments, not just in total number, but by specific topic. We
are doing surveys of existing customers, and general surveys of
others who participate in our industry, to try to determine why
people purchase, or do not purchase, our equipment. These take
the form of interviews, questionnaires, focus groups, and
listening to general feedback from the industry professionals
that we are in contact with on a more or less daily basis. We are
also interested in the comments of people who stop at our booth
at conventions, in the data we receive in letters from friends,
and in the feedback we get from our company sales
representatives, whether they be company employees or independent
business representatives. And we are trying to learn to listen to
the sales data we receive on a daily, weekly, monthly, and
quarterly basis. Numbers can talk too, if you allow them to do
so, rather than trying to use them to reach an already finalized
idea.
Mostly, determining where you are at the present time is
based on listening. We all have a number of possible ways to
listen and an infinite capacity to store the data we listen to.
The question is, do we do it?
The second step is describing where you want to go in the
future. What is the vision of the organization? What should you
be doing when you arrive at the place you want to be? How did you
get there?
Creating a corporate vision also depends mostly on
listening. You must listen to what the needs and wants are of the
market you intend to serve. You must listen to determine where a
new product is needed, when an old product should be maintained,
what things need to be fixed, what things need to be left alone--
in short, what will be the easiest things to do to satisfy your
customers and reach or attain your vision. Really, you must
listen in order to even have a vision.
So the first thing that I think technology-based companies will need to do to be successful in the future is to develop the
capability to be very good listeners and synthesizers of the data
they receive from listening.
Then--and only then--the corporation needs to develop
product ideas based on three important screens. One is the screen
of core competencies, the second is the idea of strategic assets,
and the third is the screen of reliability and value.
The idea of core competencies is that any organization has a few things that it can do very well, and rather than trying to be all things to all people, the organization should stick to its knitting and do what it can do very well. Core competencies tend to be technical competencies, and they allow engineers and designers to create products that utilize those competencies to their fullest. As an example, we at TSC think we have three core competencies, three technical competencies that we believe we understand: tactile information displays, digital and analog video processing, and system and human interface software.
If we are right, and we are willing to bet that we are, all
our products should be based on one, or all, of these
competencies. So our first screen after determining what our
customers want is to ask if we can devise an answer to that want
by utilizing these three central skills.
The next screen is the concept of strategic assets. Every
organization excels in the performance of certain tasks. These
particular skills give the organization an extra edge in solving
problems that involve those skills. When the organization is able
to utilize those skills in serving customers there is a greatly
improved possibility of success.
At TSC we think our strategic assets include after-sales support, our distribution system, our manufacturing facilities and capability, and our federal program capability.
Whenever a new product idea needs to incorporate any of
these assets in order for the product to be made, provided to an
end-user, and utilized effectively by that end-user, we believe
we have an increased chance of performing well and therefore of
successfully adding that product to our offerings.
The third screen is the screen of reliability and value. Our
products cannot work intermittently. They must work consistently.
Therefore, it is not our first priority to design the most recent
technology into our products. It is not our first priority to
design the most cost-efficient products. It is our first priority
to design the most reliable products. Cost-efficiency becomes the
second most important item, and technology becomes the third item
of consideration. Of course, we want our products to incorporate
new, exciting technological capabilities, but only if they are
reliable and only if they can be provided to our consumers at a
cost that makes owning and using them feasible.
Reliability is value-related. If the product qualifies for
consideration based on reliability, then value becomes the
determinant of selection between alternate product offerings. How
do we measure value? We think value is the combination of features and price. The management task is to know our customers' needs and wants well enough that we can provide the features needed and desired at a price that makes the product desirable.
To summarize, the products we develop should come from
listening to a wide variety of sources within the industry, a
broad sampling of our customers, and the feedback we obtain from
our own employees and others who come in contact with our
company. From gathering input from all sources we will determine
what we will do by screening those ideas against our core
competencies, our strategic assets, and our ability to create
solutions that are reliable and combine features and price in a
way that creates value to our end-users.
Dr. Jernigan asked me to speak directly to those products we
are planning for the future. As of this moment, my answer will be
less than complete, and I will explain what I mean by that
statement.
We do know that we are introducing two new refreshable
Braille products that update our existing Navigator line. One
product is an 86-cell unit, with a 46-cell unit that will follow
closely behind. These products are planned to incorporate up-to-
date technology utilizing on-board speech as well as an expanded
Braille capability. A second entry in this product line is a less sophisticated, slim-line portable unit made to nest with a notebook computer. Both products will be in the marketplace no later than March of 1994, and we hope the portable unit will ship before the end of 1993. In addition, we have already introduced the speech software for these packages, which is also marketed as a stand-alone product. It's called ScreenPower and will be expanded to be the basic software driver
of most future TSC access products that you will see.
Products beyond these three introductions are not known at
this time. That's because we have launched a strategic study of
our blindness product line and what that line of products should
be. We have set up a special team of people to participate in
this effort, made up of members of our industry, existing
customers, product users who do not use our existing products,
members of our distribution team, and corporate staff members
including engineers and marketing managers who have considerable
experience in this area. We have given them a very broad
assignment: Listen, Review, Evaluate and Recommend the current
status of our products, tell us your vision of where we should be
in three to five years, and give us your recommended road map of
how we will get from the Here and Now to the There and Then. We
should receive their report before February 1, 1994, and we plan
to take immediate action to begin the process of incorporating
their recommendations into the future of TSC and its product
users.
There are some outcomes of the study that are predictable.
We will incorporate computers and a lot of software into our
products. These two parts of electronic technology are firmly a
part of successful current applications, and it is reasonable to
expect that they will continue to be central to solving future
problems. You should also expect to see products based on the use
of Braille cells and refreshable Braille applications. We know
that is basic to your view of product needs, and we agree with
you. You will also find speech as a part of the offering, and
frequently optical scanning as a related product. We expect we
will continue to need the capability of a separate, but
integrated note-taking capability, and certainly, we will want to
interface well with printers and other computer peripherals.
We expect our equipment will be used by individuals in the
workplace, including education as a workplace. A second
application will be individual use at home, and a third will be
as part of equipment configurations made by other manufacturers,
incorporating our skills and equipment in concert with theirs.
By concentrating on these three markets we will be able to
adapt individual products and the technology that drives them to
all three situations. Each situation has its own needs, but the
core competencies that lead to the specific solutions are very
similar.
Finally, our products will be affordable. We have adopted
this concept as a basic cornerstone for TSC in the future. Others
in the industry can attain this goal of affordability. What TSC
can do to further differentiate itself is to provide excellent
service and training. We have already taken steps to make these
additive services real. We are strengthening our service network
here in the East and have provided training resources to the
division that markets blindness products.
The future will require that we revise our mission. We are
no longer in the business of developing products for those with
visual loss. We are in the business of developing products for
people who have visual loss who want to utilize the products to
achieve their goals.
I hope my presentation has been helpful. I want to thank you
again for the opportunity to attend this meeting. TSC has been a
leader in this industry for many years. We intend to continue to
seek improvement in how we serve the industry, and we will not
feel that we have succeeded until we are certain that those we
serve are served well. Thank you.




=================================================================
The other two panelists in the afternoon session were Elliot
Schreier, Director of the National Technology Center at the
American Foundation for the Blind, and Deane Blazie, President of
Blazie Engineering. "Future Technology for the Blind: What is
Coming? What Can We Do About it?" was Mr. Schreier's title, and
"Research and Development at Blazie Engineering" was Mr.
Blazie's. Summaries of their remarks appear elsewhere in this
issue.
=================================================================



SUMMARY OF THURSDAY AFTERNOON DISCUSSION

During the discussion after these panel presentations the
following issues were raised and comments made:
Dr. Kurzweil urged that this conference establish access to
the Information Super Highway as one of its first priorities
because of the vast amount of information that will be available
to all users, including those who are blind, through it. But
graphics will be a significant component of the available data,
so we must solve the graphics-recognition problem.
Jim Fruchterman said that developers in this field have a hard
time finding research and development money from sources other
than their own pockets, and it would be helpful if sources could
be found to assist in this area. But perhaps an even more
pressing problem is that facing people who would like to buy this
expensive equipment but don't have sources beyond themselves or
their employers to help. The money-lending entities in this
country do not understand about access equipment, so they are not
usually willing to make the loans. This is a real problem which
should be addressed.
Dr. Jernigan pointed out that the National Federation of the
Blind has a low-interest (three percent) loan program for
technology purchases, and he announced that the organization
would increase that pool of funds from $60,000 to $200,000 in an
effort to help. Admittedly it won't solve the problem, but it is
a start. Paul Edwards pointed out that the Technology Act is now
funding programs in forty-two states, and there has been very
little effective programming for disabled people. This conference
might consider making a forceful statement to those who conduct
the program out of Washington, encouraging them to broadcast
widely any creative efforts to develop ways of funding technology
for disabled people.
Ritchie Geisel said that he has been appointed to an
advisory panel of CEOs working with the National Information
Infrastructure (NII), the Information Super Highway. People
interested in working with him on this project should contact
him. Paul Edwards immediately volunteered.
Tony Schenk then clarified his earlier statement to say that
organizations that base their evaluations of technology on
whether or not the technology has been donated to them make
things difficult within the industry. He was not speaking of
agencies that maintain display settings where the equipment is
examined by potential users.
Dr. Herie commented that agencies that find themselves
holding newly purchased but now outdated equipment are very
unwilling to pay hard money up front the next time around. He
urged vendors to admit when their products are outdated or have
not been successful.
Louis Tutt, President of the Council of Schools for the
Blind, offered the facilities represented by his organization as
locations for any experimental hardware or software that vendors
would like to have tested. He promised that it would be returned
at the end of the trial period.
Noel Runyan said that his company has been working with the
Recycled Technology Project, a Bay-area organization that
collects old technology for use by other people. Companies like
Blazie that have buy-back programs could help, and entities like
the International Braille and Technology Center for the Blind
could act as clearinghouses announcing what equipment and
programs were available. For example, three-quarters of the
Optacons ever produced are probably sitting out there on shelves,
not being used. They could be helping people.
Dr. Jernigan then explained that the NFB's policy with
respect to the evaluation of technology is clear. The NFB regards
itself as a watchdog on the field. But under no circumstances
would the organization's political assessment of a vendor or
individual color a technology evaluation. The minute that
happened, the entire system would be compromised. However, if a
product was good and the producer harmful to blind people in the
view of the Federation, the organization would have no hesitation
in adding to its excellent evaluation a statement urging
consumers and agencies not to buy the product. He went on to say
that he thought that a growing number of agencies and
organizations in the blindness field are increasingly concerned
about the attitudes of vendors toward blind people as well as the
quality of their products.
He concluded by saying that the ultimate solution for
finding the funds to purchase the technology that blind people
need is to see that they have good jobs with which to finance
their own purchases. Most sighted people own cars despite the
fact that they cost close to $20,000. Blind people should be in a
position to make equivalent purchases for themselves.


[PHOTO: Portrait. CAPTION: James Thatcher]

PROBLEMS AND CHALLENGES OF THE GRAPHICAL USER INTERFACE
by James Thatcher

From the Editor: Early in the Friday morning conference session on November 5, Dr. Jernigan made several general introductory
comments about language versus graphics. He pointed out that
humankind's first effort at written communication was pictorial.
Though this was useful for conveying some kinds of information,
the pictures had to evolve into an alphabet and mathematical
symbols before truly complex ideas could be represented. There
are, however, types of information for which graphical
representation is faster and more accurate--maps and
architectural drawings, for example. But it is the nature of
humanity always to believe that the latest invention is vastly
superior to everything that has gone before. Radio was the rage
until television with its action pictures displaced it.
Dr. Jernigan concluded by raising the question whether the
Graphical User Interface is merely the latest fad, replacing
words once more with pictorial representations, or whether
computer graphics truly do convey information more quickly and
efficiently than words on a screen can. If the former is the
case, then we have a right to insist that the most accessible
operating system be used. If, on the other hand, computer users
really are more efficient working with the newest, graphics-based
system, blind people must not attempt to stand in the way of
progress.
The first speaker on Friday morning to address the question
of the Graphical User Interface was Dr. Jim Thatcher. He is the
Manager of Interaction Technology in the Mathematical Sciences
Department at IBM Research and a man whose concern for the
problems faced by blind computer users and intuitive grasp of the
way in which information is absorbed by those using screen-review
programs almost certainly are responsible for IBM's ScreenReader
Program for the OS/2 operating system. Here is what he said:

Blind computer users have been worried about access to
Graphical User Interfaces (GUIs) for several years. And, who can
blame them? They have been shut out of access to graphics
programs like Flight Simulator for DOS and from certain parts of
even their favorite text-based programs. The preview facility of
WordPerfect and the graph facility of Lotus 1-2-3 both enter
graphics mode and become inaccessible. But blind users may be
willing to get along without access to that very small fragment
of the computing environment.
The graphical user interfaces in Microsoft's Windows under DOS, in IBM's Presentation Manager under OS/2, in X Windows under UNIX, and on the Apple Macintosh are quite a different story. In
these environments all programs (including Lotus 1-2-3 and
WordPerfect, mentioned above) run all the time in graphics mode.
The environment is radically different, especially for blind
users and the people who develop access technology for them.
In this speech I will first explain the difference between
text-mode and graphics computing. Then, I will discuss the
advantages of the graphical user interface--advantages shared by
blind and sighted users alike. Together with these advantages, I
want to present my view of how a screen reader should respond to
this new environment. Turning to issues even more specific to
screen readers, I will then discuss two problem areas that I see
as critical to the continued evolution of screen readers for
graphical user interfaces. The first relates to automatic
announcement of error or status messages; the second is what I
refer to as the "active point issue."

Text-Mode Compared to Graphics-Mode Computing

Let's take a look at the two different computing
environments. Text-mode computing is simple and accurate. It uses
a model of the display that is a twenty-five by eighty array of
pairs of numbers. (The size of this array can vary. For example,
it can be 25 by 132 or 53 by 80.) The first number in the pair is
the ASCII value of the character that appears in that position on
the screen, and the second number is the attribute--it gives
color information for foreground and background colors and tells
whether or not the character is blinking.
For example, if the background is blue and there is a white
"I" in the upper corner of your text-mode display, then the first
pair of numbers in the array will be "73, 31." Seventy-three is
the ASCII value of capital I, and 31 is the number that
represents white on blue.
In text-mode computing everything is stored in display
memory, and it is stored there in a useful form. This is what I
mean by text-mode computing being accurate. It is accurate
because the display hardware uses only display memory to form the
image on the screen. Display memory actually contains this array
of pairs of numbers. This is what gets displayed, nothing more,
nothing less. The ASCII numbers that represent characters are
exactly what a screen reader sends to the synthesizer for it to
translate ASCII text into speech. When 73 is sent to the
text-to-speech device, it says "I."
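To make this concrete, here is a short illustrative sketch in C. It is not drawn from Dr. Thatcher's remarks or from any actual screen reader; the display buffer is simulated with an ordinary array rather than read from real display hardware, but the numbers match his example.

    /* A text-mode display modeled as a 25-by-80 array of
       (ASCII value, attribute) pairs. The buffer here is an
       ordinary array; on real hardware it is display memory. */
    #include <stdio.h>

    #define ROWS 25
    #define COLS 80

    int main(void)
    {
        static unsigned char screen[ROWS][COLS][2];

        /* A white capital "I" on a blue background in the upper
           left corner: ASCII 73 is "I"; attribute 31 is white
           on blue. */
        screen[0][0][0] = 73;
        screen[0][0][1] = 31;

        /* A text-mode screen reader sends the ASCII value
           straight to the speech synthesizer, which says "I." */
        printf("speak character %d (attribute %d)\n",
               screen[0][0][0], screen[0][0][1]);
        return 0;
    }
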
For graphics-based computing there is still display memory,
but now the numbers in that display memory represent only pixels,
which are just dots of color. (Pixels are also referred to as
picture elements or PELs.) For example, a white on blue "I" is
made up of about 128 pixels--some are the color blue; others,
comprising the I, are white. The display memory holds only
pixels. It contains no ASCII values.
Several methods of reading graphics-mode display screens are
being discussed. The first method would be to use character
recognition or, better, document recognition to figure out what
is on that display. I believe that today character recognition is
feasible for a static screen, but not for the changing screens we
find in a computing environment.
The second solution is to create what Berkeley Systems first
called an Off-Screen-Model (OSM) when they introduced outSpoken
for the Macintosh in November, 1989, the first screen reader for
a graphical user interface. To the best of my knowledge, all
screen readers for graphical user interfaces use some form of
OSM. The idea of the off-screen-model is to intercept everything
that is going to the display before it becomes pictures and
record all relevant information in a separate data structure
(data base) called the off-screen-model. The information recorded
there will include the text, its position, color, font, and
window handle. (A window handle is a tag that identifies the
window.) That is the minimum the OSM will contain. Different
screen readers will contain more and different information
depending, in part, on the level at which the drawing calls are
intercepted.
Once you have the off-screen-model, a screen reader can be
built that accesses the OSM instead of the display buffer, to
determine the text and/or icons that are on the display. The
screen reader uses the off-screen-model to report text on the
display, rather than using the display memory as it did for
text-mode computing.
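As a rough illustration of this idea, the sketch below shows the kind of record an off-screen model might keep for each piece of text intercepted on its way to the display. The structure and the names are hypothetical; they are not taken from outSpoken, ScreenReader/2, or any other product.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A hypothetical off-screen-model (OSM) entry. Each
       intercepted drawing call adds a record like this; the
       screen reader later consults the model, not the pixels. */
    typedef struct osm_entry {
        char   text[128];          /* the text that was drawn       */
        int    x, y;               /* its position on the display   */
        unsigned long color;       /* foreground color              */
        char   font[32];           /* font in which it was drawn    */
        unsigned long window;      /* handle of the drawing window  */
        struct osm_entry *next;    /* entries kept as a simple list */
    } osm_entry;

    static osm_entry *model = NULL;    /* head of the model */

    /* Called from the intercepted drawing routine. */
    static void osm_record(const char *text, int x, int y,
                           unsigned long color, const char *font,
                           unsigned long window)
    {
        osm_entry *entry = malloc(sizeof *entry); /* checks omitted */
        strncpy(entry->text, text, sizeof entry->text - 1);
        entry->text[sizeof entry->text - 1] = '\0';
        entry->x = x;
        entry->y = y;
        entry->color = color;
        strncpy(entry->font, font, sizeof entry->font - 1);
        entry->font[sizeof entry->font - 1] = '\0';
        entry->window = window;
        entry->next = model;
        model = entry;
    }

    int main(void)
    {
        /* Pretend an application has just drawn a window title. */
        osm_record("Untitled Document", 12, 4, 0xFFFFFFUL,
                   "System", 0x1A2BUL);

        /* The screen reader reports text from the model. */
        printf("window 0x%lX drew \"%s\" at (%d, %d)\n",
               model->window, model->text, model->x, model->y);
        return 0;
    }
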

Advantages of GUI's

The idea of Common User Access (CUA, as IBM calls it) is good
news for both sighted and blind users. Basically one uses the
same ways of navigating in many different applications. Text-mode
programs were heading that way as they added menu bars, pull-down
menus, dialogs, and the like. Still, navigation in text-mode WordPerfect 5.1, Lotus 1-2-3, and Quicken was different in each.
The GUI versions (OS/2 and Windows) of these applications do in
fact have a common interface. The ways to get to menus, to move
around menus, to pull down menus, to interact with dialogs are
all the same.
That common access is reflected in how the screen reader
works. In addition to having a model of the display, the GUI
screen reader is hooked into messages and actions of the GUI and
so knows when menus are active or dialogs have appeared. Most
screen readers for GUI's will speak all of these events
automatically, without configuration of any kind.
What had become markedly complex and difficult for
text-based computing (action bars, color bars, pop-ups) is now
almost automatic. In 1988, before OS/2 1.1 was released, I was
showing a demonstration program that spoke menus, dialogs, entry
fields, and window titles as the user moved around the
Presentation Manager GUI. That is the easy part; that can be done
by a competent GUI programmer. But not only are these
standardized controls relatively easy for the screen reader, they
are the heart of common user access.
This common access includes help and documentation as well.
In all applications I have seen for OS/2 and Windows, F1 will
give context-sensitive help. That unification is truly welcome.
Online documentation is the rule rather than the exception, and
most applications use the GUI's information presentation
facilities, so getting around that documentation will be familiar
across different applications.
In summary, the use of standardized controls simplifies
access for blind users and sighted users alike. In addition, that
access is what had become so difficult with text-mode screen
readers. I believe these benefits far outweigh the designers'
real and perceived difficulties in creating screen readers for
the GUI environment and the blind users' concern about mastering
this new environment. It is my contention that, because of common
user access, the environment is easier to master than was the
text-based DOS environment.

Status Announcements--What Was Easy Is Now Hard

It seems to be the nature of things: just when the really
difficult parts of screen access get easy (menus, popup dialogs,
and the like), the easy things, like status messages, have become
difficult. For me to talk about this it is easiest to refer to
the facilities of IBM ScreenReader. Its 1984 precursor, called PC
SAID, had a concept of Autospeak: in a profile the user could
specify any part of the screen (or any expression) to watch; when
there was a change in that part of the screen or the value of
that expression, PC SAID would take a specified action, usually
making an announcement.
By the time PC SAID came out as the IBM ScreenReader
product, other screen readers had basically the same function.
The use of Autospeaks in all screen readers is more or less the
same. The blind user needs to be notified of spontaneous status
messages or error messages that appear somewhere on the display--
maybe the top, maybe the bottom, but certainly not where the user
is currently focused. Some examples are the following messages:
"String not found," "Unknown Command," "The system will go down
in five minutes," "Drive A not ready," and "WAIT." For most
text-mode applications, the position where that message appeared
was fixed for the application, and with relatively simple
configuration activities the user could both hear the message
when it appeared and review it. Remember: the text-based model is
simple and accurate; so the message was to be found in row 23,
column 16, or row 1 column 76, or maybe two rows above the cursor
(as examples).
The situation for the GUI is totally different. Those status
or error messages are still there. But their location is quite
another question. To take just one example, the concept of row
23, column 16 is no longer relevant. The number of rows or
columns in a graphical screen depends on how much text you have
put there. Maybe a status message appears at the bottom of the
window, but when there is no status message, the last line may be
the text you are currently typing. A status message may
consistently appear at some pixel position on the display. But
windows can be moved, and they come up in different positions,
depending on the order of invocation. Relative positions are not
good either because windows can be resized.
I am aware that all of this may seem somewhat mysterious and
even alarming to blind users. But I want to discuss it here in
order to explain the IBM ScreenReader/2 solution to this
difficult problem because I believe it is the only one currently
available for GUI screen readers.
Those Autospeaks that were the key innovation of the first
IBM ScreenReader for DOS have been generalized to be able to
watch the results of procedures. Those procedures in our Profile
Access Language (PAL) can be defined to search the window tree of
the application for the status message and return the text of
that message (for example) if it is present. In this way the
status message can be announced. As I said in the beginning of
this section, it is not easy, but it can be done. I look forward
to other innovative interactive methods for accomplishing this
task as screen readers for GUI's evolve.
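For readers who would like a concrete picture of the underlying idea, here is a schematic sketch in C of a simple autospeak for a text-mode screen: watch one region and speak it whenever it changes. It is illustrative only; it is not written in IBM's Profile Access Language, and the names are invented.

    #include <stdio.h>
    #include <string.h>

    #define COLS 80

    static char watched_line[COLS + 1]; /* what the application shows */
    static char last_spoken[COLS + 1];  /* what the user last heard   */

    /* Stand-in for sending a string to the speech synthesizer. */
    static void speak(const char *text)
    {
        printf("SPEAK: %s\n", text);
    }

    /* Called whenever the watched part of the screen is updated. */
    static void autospeak(void)
    {
        if (strcmp(watched_line, last_spoken) != 0) {
            speak(watched_line);
            strcpy(last_spoken, watched_line);
        }
    }

    int main(void)
    {
        strcpy(watched_line, "Drive A not ready");
        autospeak();    /* spoken once                     */
        autospeak();    /* unchanged, so nothing is spoken */

        strcpy(watched_line, "String not found");
        autospeak();    /* the new message is announced    */
        return 0;
    }

The hard part for a GUI, as Dr. Thatcher explains, is not this comparison but knowing where to look for the watched text in the first place.
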

The Active Point Issue

This is a subject which is not discussed by application or
operating system developers or planners. This is, I believe, an
issue peculiar to screen readers and screen enlargement software.
Any screen reader must be able to follow and describe the active
point because the blind user must know what keyboard actions will
do, what the enter key will do, and where characters will be
placed when entered from the keyboard.
It seems to me to be easiest to describe the active point
issue with reference to text-based DOS computing. The cursor in
that arena is usually the active point. The information about the
cursor (position, shape, color) for text-based DOS computing is
held in registers in the display hardware. A screen reader in
that environment can know the cursor position by reading those
registers.
For contrast, in the mid-80's, IBM had a 3270 emulator that
ignored the cursor hardware and instead highlighted a single
character. The active point in that emulator was a single
highlighted character. The software was inaccessible by screen
readers of that time because the effort required to find that
highlighted character was just too great. Note that many screen
readers of that time, especially IBM ScreenReader/DOS, could find
that cursor character, but not in a useful or practical way.
In the late '80's DOS text-based programs more and more had
a CUA type interface, with action bars, pull-down menus, and
dialogs. These introduced the active point issue into text-based
computing. Screen readers adopted methods for watching for
highlight bar or color bar changes so as to track the active
point. Some applications (like Lotus 1-2-3) always had the
hardware cursor follow the highlight or color bar in an invisible
mode. In these cases tracking the active point was a lot easier.
Initially these applications were difficult for screen readers;
but, because the required information was available to the screen
reading program, those screen reading programs adapted.
For the graphical user interface the active point issue is
of special concern. There may be absolutely no way for a screen
reader to detect the active point. The reason is that the active
point--the cursor, insertion bar, highlight, or selected item--is
indicated with some graphical object--a line, a box, or a color
change. The ways to draw such an object on the display are
practically unlimited. The graphical screen reader depends on
knowing the drawing method. This means that it is quite possible,
as we know, for a screen reader to come out as a product, and the
next application (one not yet tested) could be completely
inaccessible because a cursor or highlight is drawn in a way not
yet imagined by the screen reader developer.
I can conceive of the following approaches to address the
active point issue:
1. Test as many applications as possible prior to product
release; try to generalize cursor and selector drawing methods
found in tested applications.
2. Make absolutely certain that the screen reader handles
all cursors and selectors that are standard for the GUI.
"Standard" means "created by a CreateCursor type of call or found
in standard text editing widgets."
3. Get on application developers' beta test programs so that
those applications' problems with active-point tracking can be
caught before they become products.
4. Encourage application developers to use standard cursors
and selectors in standard ways.
5. Add a hookable call to the GUI, which would inform screen
readers of the active point. This would be called by the standard
cursor routines in the GUI, for example, WinShowCursor in PM. But
also it would be called by applications using unusual,
non-standard cursors, if those applications wanted to be
accessible. When a screen reader hooked this call, it would be
passed the active point information it received.
In effect, the previous suggestion could be implemented
independent of the operating system, following the access-aware
ideas proposed by Berkeley Systems and similar ideas proposed by
Microsoft. Here a separate library would be provided, and the
non-standard applications would use it to inform screen readers
of their active points. (A rough sketch of this idea appears just after the list.)
6. Do research. For example, can artificial intelligence
techniques be applied in such a way that a screen reader would be
able to learn that certain graphical objects are to be
interpreted as an insertion bar, cursor, or the active point?
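By way of illustration, the hookable call described in suggestion 5 above might look something like the following sketch. None of these names is a real call in Presentation Manager, Windows, or any screen reader; they are hypothetical.

    #include <stdio.h>

    /* Hypothetical hook: the GUI's standard cursor routines, and
       any application drawing its own cursor, call
       NotifyActivePoint; a screen reader registers a callback to
       be told where the active point is. */
    typedef void (*ActivePointHook)(int x, int y,
                                    const char *description);

    static ActivePointHook registered_hook = NULL;

    /* The screen reader installs its callback once, at start-up. */
    void HookActivePoint(ActivePointHook hook)
    {
        registered_hook = hook;
    }

    /* Called whenever the active point moves or is redrawn. */
    void NotifyActivePoint(int x, int y, const char *description)
    {
        if (registered_hook != NULL)
            registered_hook(x, y, description);
    }

    /* The screen reader's callback tracks and announces it. */
    static void reader_callback(int x, int y, const char *description)
    {
        printf("active point at (%d, %d): %s\n", x, y, description);
    }

    int main(void)
    {
        HookActivePoint(reader_callback);
        NotifyActivePoint(120, 45, "insertion bar in entry field");
        return 0;
    }
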
I believe that the off-screen-model concept solves the
problem of the graphics screen for the graphical user interface,
and I think that technology is well understood by screen reader
developers. Common user access is a big plus for the computing
environment, and with access to the GUI this advantage is shared
by blind users. We need to work more on being able to configure
screen readers easily to announce automatically all we would
like; but that will, I think, come with the advancing screen
reader technology.
I mentioned the active point issue, because it is important
and because I do not know what the solution will be. It must be
remembered that most applications work just fine regarding this
active point issue. The ill-behaved applications are the
exception, not the rule.


[PHOTO: Portrait. CAPTION: James Halliday]

A QUESTION OF WINDOWS
by James Halliday

From the Editor: Jim Halliday is the President of HumanWare,
Inc. The following are the remarks he made:

It is, once again, an honor to be asked to speak to this
esteemed group of industry leaders. I want to express my
gratitude to Dr. Jernigan and the National Federation of the
Blind for this opportunity.
I recently attended two major exhibitions: REHA, in
Düsseldorf, Germany, and Closing the Gap in Minneapolis. One of
the hottest topics at both conferences was access to windows.
Everywhere I turned, someone was announcing a new solution. Some
products seemed more complete than others. Some had more pizzazz.
Some seemed easy but limited, and others seemed powerful but
complicated. Some seemed to work fine in the Windows Program
Manager and selected applications, but there seemed to be some
question in everyone's mind about other applications. This
planted a seed of worry in my mind. Some solutions preferred
speech output, and others Braille. Several solutions included
varying degrees of both. As I walked from booth to booth, I felt
as if I were going into a car dealership for a test drive but kept
finding myself in the service bay, being shown how to change the
sparkplugs on the vehicle instead.
When working in a Windows environment, I use MS-Word for
word processing and Excel as my spreadsheet. Everyone
demonstrated their access working with Word, but when I asked
about Excel, I heard comments like, "We haven't implemented that
yet," or "That application isn't written to Microsoft's standard
rules, so we can't work with it, yet." This last comment shocked
me because I was referring to Excel, a program actually written
by Microsoft. How is it possible that Microsoft does not keep its
own rules?
I wandered from booth to booth and programmer to programmer,
becoming more confused by the minute. I learned that screen
readers require an off-screen model of what appears on the
screen, and this text-based model is actually what the access
program reads. Some companies are writing their own off-screen
models; some are using the one under development at Berkeley
Systems. Some seem more dependent on this model than others.
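In rough outline, the idea looks something like this. The
structure and names below are invented for illustration; they do
not describe Berkeley Systems' model or any particular vendor's
product.

    /* Illustrative sketch of an off-screen model (OSM): every piece
       of text the application draws on the graphical screen is also
       recorded, with its position and attributes, in a text database
       the screen reader can search and read aloud. */
    #include <string.h>

    #define OSM_MAX_ITEMS 4096
    #define OSM_MAX_TEXT  128

    struct osm_item {
        int  x, y;                /* pixel position of the drawn text */
        int  point_size;          /* font size (useful for headings) */
        int  highlighted;         /* attribute a reader may announce */
        char text[OSM_MAX_TEXT];  /* the characters actually drawn */
    };

    struct osm {
        int items_used;
        struct osm_item items[OSM_MAX_ITEMS];
    };

    /* The screen reader intercepts the GUI's text-drawing calls and
       records each one here before letting the drawing proceed. */
    void osm_record(struct osm *m, int x, int y, int size,
                    int highlighted, const char *text)
    {
        if (m->items_used < OSM_MAX_ITEMS) {
            struct osm_item *it = &m->items[m->items_used++];
            it->x = x; it->y = y;
            it->point_size = size;
            it->highlighted = highlighted;
            strncpy(it->text, text, OSM_MAX_TEXT - 1);
            it->text[OSM_MAX_TEXT - 1] = '\0';
        }
    }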
I found it hard to know what questions to ask. All of the
developers sounded brilliant and knowledgeable, and I was
becoming more confused by the minute. They all professed to
having the best concept, if not the best solution. I asked myself
how this was possible, yet I was too ignorant to ask intelligent
questions that would expose the strengths or weaknesses of a
particular product. It became increasingly clear that, in order
to know what questions to ask about Windows, one must have a
fundamental understanding of the Windows environment itself and
the unique challenges of access.
Part of my problem was a perspective rooted in the MS-DOS
environment. DOS is a fundamentally different paradigm from
Windows. Over the years the DOS environment has become concrete,
like a hard-boiled egg. It doesn't always stand up the way we
want, but at least we can get our hands around it. Windows, on
the other hand, seems more like a sticky, gooey raw egg that runs
through your fingers each time you try to pick it up. I guess
that's why they call Windows a GUI (gooey). (Up till now I always
thought GUI stood for Graphical User Interface). Of course, gooey
is okay when mixed with the proper ingredients--raw eggs can be
extremely versatile, whether you are baking a cake, whipping up a
Hollandaise sauce, holding together a meatloaf, or making a
gourmet omelet. The fact that it starts out as a slippery mess
shouldn't deter us from putting it to use. Success comes from
having the courage to keep trying things despite the inevitable
mistakes we make. One does not become a gourmet overnight, and
Betty Crocker is not going to come to our rescue with a basic
cookbook for Windows (although such a cookbook might not be a bad
idea).
Many of us are already successful computer users.
Sophisticated access technology has enabled people who are blind
to function on an equal basis within MS-DOS applications. In
other words, we know how to cook, and we know the kinds of foods
that taste good. But cooking in DOS is more like cooking over an
open fire. Just give me a cast iron pot and the appropriate
ingredients, and I'll cook up a great stew. Windows, however,
gives us a full kitchen in which to work. We can have several
things, including our stew, simultaneously cooking on the stove,
while at the same time bread is baking in the oven, veggies are
thawing in the microwave, and pop tarts are crisping in the
toaster. Accomplishing several tasks at the same time (multi-
tasking) is one of the real strengths of Windows.
Another thing that Windows does for us (when the rules are
properly kept) is that, even though various appliances
(applications) have different capabilities and functions, the
basic controls (screens, menu bars, and icons) have a consistency
that, once learned, makes it easier to understand new
applications. In other words, instead of the stove's having
knobs, the oven's having dials, the microwave's having push
buttons, and the toaster's having levers, all of these
applications have the same basic interface. Once you learn the
interface, your knowledge is more easily transferable to other
appliances (applications). Even if you walk into the laundry
room, the interface on the washing machine will seem familiar and
thus easier to learn to use with minimal reference to the
owner's manual. Understanding this interface is the initial
challenge we all face with Windows. For a sighted person the
learning curve can be quite rapid because of the visual nature of
the display. For a person who is blind, however, another form of
access is necessary. Part of our problem is that we need good
access to learn how to interface with this wondrous new
environment because without good access it doesn't seem so
wondrous. In fact, it often seems monstrous.
So how do we determine what is considered good access
without understanding Windows in the first place? Are we putting
blind users in a Catch-22 situation where they can't determine
good access without understanding Windows, yet they can't learn
about Windows without good access? If this is the case, we are
all totally dependent on specialists. Ahh yes, specialists will
know the answers, and we will just have to trust what they tell
us. But as I walked around the exhibits in both Düsseldorf and
Minneapolis, I kept hearing myself say, "Why is that important?"
I heard answers like: "Our program is an application; so, even if
the operating system changes, our solution will still work."
Another said, "What good is transferability between operating
systems, if you still can't run most of the major applications
written for Windows? Our program can access almost all
applications!" In reply, another said, "That other program will
be obsolete when the operating system changes, and they'll have
to rewrite the whole thing!" And another said, "Anyone who tells
you he can access any application is lying through his teeth!" So
much for the specialists.
As I wandered from booth to booth, I realized that everyone
was talking about Windows 3.1. This is essentially a DOS-based
system. In other words, it is initially loaded from a DOS prompt.
In another year or year and a half, Microsoft plans to release
Windows 4.0, an operating system that some say will be
independent of DOS and others say is essentially the same
platform as 3.1. Will all or any of the current access solutions
work with Windows 4.0? They all cross their fingers and say, "We
think so," or they say, "It's going to take a lot of
reprogramming." In addition to Windows 3.1 and 4.0, there is
Windows NT (another operating system from Microsoft specifically
aimed at network applications), but Microsoft needs to release
the appropriate hooks to developers before access will be
possible, and questions regarding security are lurking in the
background of access to NT. OS/2 is another graphical user
interface under which you can run Windows 3.1 applications. This
appears to be one of the most stable solutions currently
available. How will that work with Windows 4.0? Then there are
Unix and X-Windows, for which there has been no direct access at
all. (I believe IBM has been working on Screen Reader for Unix,
but I don't know the status.) As I pondered the number of
unknowns, I became even more skeptical that we as an industry
really know what questions we need to be asking.
Accessing the Windows GUI environment appears to be one
problem, and accessing Windows applications such as word
processors, spreadsheets, data bases, communications programs,
and so on is yet another. Most access solutions appear to work
reasonably well within the basic environment, but they all seem
to have different capabilities as soon as an application is
loaded. I was told at the conferences, "We have to write special
drivers for each application, but we haven't finished that yet."
I asked, "Is the solution simply a matter of writing a driver?"
The answer to this question varied depending on the specialist
with whom I was speaking.
When walking around those exhibits, I began comparing MS-DOS
and Windows in terms of driving versus flying. DOS screen readers
allow users to take the wheel and drive around a two-dimensional
screen that displays a fixed grid of eighty columns by twenty-
five lines. A Windows screen reader, on the other hand, allows us
to fly through what seems more like a three-dimensional screen,
in that there can be multiple layers of files, applications, and
utilities displayed as a combination of windows, consisting of
graphical icons, menu bars, toolbars, rulers, text, and graphical
text. (Yes, there is a difference. Screen readers can interpret
text, but not graphical characters that look exactly like text.)
To confuse the issue still further, there is no fixed grid in
Windows that can be used as a baseline. All screens are
presented in a variable number of lines and columns. With the
exception of the consistencies inherent in the GUI environment
itself (menu bars, drop-down menus, prompts, etc.), every new
screen can be fundamentally different from the last one; e.g.,
one line may have ten characters, and the next a hundred and
fifty.
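To make the contrast concrete: in DOS a screen position reduces
to simple arithmetic on a fixed grid, while in a GUI the same
request has to be answered from whatever record of drawn text the
screen reader keeps. The sketch below is illustrative only; the
names and types are invented.

    /* Illustrative contrast only; all names are invented. */
    #define DOS_COLS 80

    /* DOS text mode: each cell is a character byte plus an
       attribute byte, so (row, column) maps directly to an offset
       in the text buffer. */
    unsigned dos_cell_offset(int row, int col)
    {
        return (unsigned)(row * DOS_COLS + col) * 2;
    }

    /* GUI: text arrives as drawing calls at arbitrary pixel
       positions and in arbitrary fonts, so a "line" is whatever
       recorded fragments happen to share a baseline -- three of
       them or three hundred; nothing guarantees eighty columns. */
    struct drawn_text {
        int x, y;          /* pixel position of the fragment's baseline */
        const char *text;
    };

    int fragments_on_baseline(const struct drawn_text *items, int n, int y)
    {
        int count = 0;
        for (int i = 0; i < n; i++)
            if (items[i].y == y)
                count++;
        return count;
    }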
The more I wandered through the exhibit halls discussing
Windows, the more I drifted back in time, vividly imagining those
magnificent men and their flying machines, courageous explorers
searching for solutions to flight, ingenious pioneers on the
brink of invention. But my metaphor had a chronological glitch
because DOS screen readers are already as sophisticated as
today's automobiles, whereas Windows screen readers seem to be
still exploring the fundamentals of flight. This has its
advantages and disadvantages. On one hand, we have a better
understanding of the power that is possible and needed based on
our experience in DOS. (Of course, it took us thirteen years to
get to this point with DOS.) Nevertheless, we require and expect
equal, if not better, sophistication in a Windows screen reader.
Jobs are at stake. The pressure is on. Everyone has a flying
machine, but I keep asking myself, "Does anyone have an airplane
that can really fly?" Perhaps they all do, but how do we know?
And how do we find out? If we talk to these magnificent men, we
find that they can all keep their machines in the air. Some of
them can even keep them from crashing throughout a complete
demonstration. But these machines don't come with a pilot, and
learning to fly a plane is much more complicated than learning to
drive on grid-like roads in the land of DOS.
How about some flight training? That's a good idea. But who
is going to pay for it? Nobody seems willing to pay for training
in the U.S. Plus, who is going to pay for travel expenses? We all
know about travel cutbacks. Can we expect users to learn to fly
without any training? How about a flight simulator? Tutorials
might be a cheaper way to teach someone to fly. Another good
idea, but do we teach Windows or access to Windows? Which comes
first? I can hear our Catch-22 problem re-emerging here.
We tend to forget pain once it has passed, but I would like
you to go back and try to remember what it was like learning DOS
the first time. It was an ugly experience for the average
computer user, but the fixed grid in DOS made it possible to
understand and ultimately to grasp. Oh, there were some whiz kids
who learned to drive without any help, but flying presents a
whole new set of problems.
I am concerned about our collective naivete with regard to
our expectations for Windows access. I am concerned that we do
not even know what questions to ask. I sent one Windows access
program to three different people to evaluate. Each came back
with totally different problems and questions. It was almost as
if I had sent out three completely different programs. I believe
that we must put a priority on creating a forum for dialogue at
all levels of our community, from the specialists to the novice
users, if we hope eventually to make intelligent decisions about
real solutions.
I would like to propose that you, as the major leaders in
our industry, accept the challenge to create this forum for
dialogue. Betty Crocker is not available with a Windows cookbook,
but, if we publish a monthly newsletter specifically dedicated to
questions and answers regarding Windows, we will soon have a
cookbook. This publication can be available in print, large
print, and Braille; or perhaps an audio format would be more
efficient and cover more information. We have to be careful about
sending out a fire hose and expecting people to drink, but we
must also recognize the urgent need for quality information.
Although developers, manufacturers, and distributors can be
valuable sources of information and support, I don't think that
HumanWare or any other company can take the lead on producing
this publication. Regardless of our objectivity, the unavoidable
commercial connection would taint our credibility. This cookbook
must be created month by month by one of your organizations or a
consortium of the organizations represented here today.
Part of the cookbook must focus on the GUI environment and
why flying can be so exciting. Understanding the Windows paradigm
is essential to this excitement. Otherwise we will remain
frustrated trying to drive a 747 around a poorly marked tarmac,
complaining about how much nicer it is to drive our DOSmobile.
Flying offers some exciting new thrills, but how do we get
access? Another section must focus on access solutions and what
questions we need to be asking developers. We need in-depth
questions regarding access to specific applications and why some
applications seem more challenging than others. We need to learn
which questions to ask in order to understand the fundamental
differences between various screen readers better. How do we
understand the value of their inherent strengths and limitations?
Not everyone needs a 747. For some people a Cessna will do just
fine. We need a section on training. Who's going to do it? How
will trainers be compensated? What techniques work best in
teaching the fundamentals of Windows? And the list goes on.
Perhaps the metaphors I've used in this speech have been
ill-conceived, but this further drives home the need for better
information sharing. If there is something I or HumanWare can do
to help get this cookbook off the ground, please give me a call.
Ignorance is our worst enemy, and it is time we recognized our
mutual responsibility, as leaders in this industry, to educate
each other and the people we serve. I am not talking about
reviewing specific products, but rather giving our readers a
basic understanding of the questions that need to be asked and
answered regarding Windows and Windows access. The more
intelligent our questions become, the more we will each be able
to evaluate the difference between a flying machine and a
reliable solution that will enable us to fly as high as the GUI
bird allows.



=================================================================
"Microsoft's Approach to the Graphical User Interface and
Accessibility" was the title of remarks by Greg Lowney, Senior
Program Manager for the Accessibility and Disability Group of
Microsoft Corporation. A summary of his remarks is reprinted
elsewhere in this issue.
=================================================================


[PHOTO: Curtis Chong seated at computer in International Braille
and Technology Center for the Blind. CAPTION: Curtis Chong.]

PROBLEMS AND CHALLENGES OF THE GRAPHICAL USER INTERFACE
by Curtis Chong

From the Editor: Curtis Chong serves as President of the
National Federation of the Blind in Computer Science. He is also
a Senior Systems Programmer for IDS Financial Services. He is
extremely knowledgeable about computer access for blind people.
Here is what he said:

It seems that there was never a time when blind people
weren't concerned about how they would access this or that
computer system. Back in the 1960's, the problem was being able
to read punch cards and computer listings. In the seventies and
early eighties, punch cards and computer listings gave way to
video display terminals, which in turn gave way to the
microcomputer. Each step in the evolution of computer technology
posed a different set of problems for the blind; and, with
varying degrees of success, each challenge was confronted and
overcome.
For the blind the access challenge for the nineties is most
definitely the graphical user interface, or GUI, as it is more
often called. It is not that GUI applications are anything new.
Many GUI programs have been written to run under the Disk
Operating System (DOS), and for the most part these programs were
of little concern to the blind. This is because in the DOS world
GUI programs were in the minority, and the operating system
itself was compatible with existing screen-reading technology.
Moreover, most of the commercial software that blind people
needed and wanted to use was written specifically to run under
DOS and displayed information in text on a screen with fixed
dimensions--typically, 25 rows by 80 columns.
Access to text-based applications running on computers has
made it easier for blind people to work in a wide variety of
jobs. These have included customer service, pizza order taking,
computer programming, and secretarial positions, to name only a
very few. Blind secretaries will tell you that with a good word
processor it is far easier today to proofread documents than to
rely upon one's ability to type perfectly all of the time.
Computer programmers will tell you that it is nice to be able to
write and debug programs independently on the big
mainframe. Some people will even go so far as to say that the
computer has opened up a whole new world of information and given
new freedom to the blind. However, I don't know if I would go
quite that far.
Anybody who knows anything about the computer industry will
tell you that the word "computer" is synonymous with "change." It
would appear that the industry is now undergoing a fundamental
shift. Text-based operating systems such as DOS and Unix are
being overlaid by graphically-based shells such as Windows and X
Windows. In addition, new graphically-based operating systems
such as OS/2 and Windows NT are gaining acceptance. One reason
for this shift, I believe, is the fundamental desire on the part
of sighted computer users to be free of the physical restraints
placed upon them by a character-oriented screen. People are not
content to have simple text presented on a computer display. They
want pictures, icons, pop-up menus, dialogue boxes, and other
pictographic representations. Application software developers,
cognizant of this desire, are developing new versions of their
products to run under graphical platforms. Combine this with the
growing use, in today's major corporations, of graphical
front-end systems to make applications more user-friendly, and
you have what is shaping up to be a severe problem for the
blind--particularly, the blind person whose job depends upon
having independent access to the computer. Even blind people who
use computers at home cannot escape the problem. More and more
new releases of off-the-shelf software are being written to
display information using the graphical user interface, meaning
that for home use it is becoming more difficult--if not
impossible--to get the latest and greatest program for the
familiar DOS environment.
As consumers we find ourselves in a situation in which more
and more of the systems we need to do our jobs are less
accessible to us than they were even two years ago. To an ever
greater extent companies are embracing Windows, X Windows, OS/2,
and other graphically-oriented platforms. For the blind this
means that knowledge of and access to a text-based screen reading
system simply won't cut it. Although in some cases, with a little
sophistication and some technical know-how, we can continue using
a PC that runs only DOS applications, many of us who are looking
for jobs today are running squarely into the GUI problem. Moreover,
as text-based applications give way to the more user-friendly
graphical user interface, those of us whose jobs already require
the use of a computer are in real danger of becoming unemployed.
Some rehabilitation agencies, lacking sufficient creativity,
have dumped their blind clients into customer service or other
service jobs requiring immediate access to computerized
information. I am here to tell you that, unless something is done
very soon, your clients will be walking the streets looking for
work. Why? Because the graphical user interface is simply too
attractive to system designers, who are continually trying to
make the computer easier and more visually appealing to use. For
them the graphical user interface is the best way to accomplish
this goal.
We are fortunate in that a growing number of screen-access
technology companies are developing programs to address the GUI
problem. An early contributor, Berkeley Systems, Inc., developed
the one and only speech screen-reading system for the Apple
Macintosh: outSPOKEN. I don't remember exactly when outSPOKEN was
released, but I think it is important to recognize that outSPOKEN
represented the very first attempt to provide blind people with
access to a GUI platform.
At the 1992 convention of the National Federation of the
Blind, IBM demonstrated its screen-reading system for the
graphical OS/2 Presentation Manager. At the 1993 NFB convention,
Artic Technologies and Berkeley Systems exhibited their Windows
screen-reading products; and two additional companies,
Henter-Joyce and Syntha-Voice, discussed their approaches to
making Windows accessible to the blind at the NFB in Computer
Science meeting. I am aware that even more companies are working
on, but not officially announcing, their screen access products
for the Windows platform. No one has announced a commercial
package for access to the X Windows platform yet, but I believe
it is only a matter of time before we see something in the
marketplace. Strangely enough, the largest company in the
blindness field, TeleSensory, has not given any indication that
it is working on a GUI-access product.
Although it is improving at an accelerating pace, GUI screen
access technology for the blind is still immature. I would say
that we are at about the same stage in our ability to access GUI
applications as we were in the early 1980's, when blind people
first began gaining independent access to microcomputers such as
the IBM PC and compatible machines. However, there are some
significant differences. Ten years ago, when blind people began
taking an interest in using microcomputers independently,
companies such as IBM and Microsoft couldn't be bothered with
such an economically insignificant group. The screen access
programs we needed had to be developed by third-party companies
with little or no help from these key players. Today, partly
because of anti-discrimination legislation and partly due to
increased understanding, major computer companies such as IBM and
Microsoft are giving serious consideration to the question of how
to make the GUI accessible to the blind. IBM, which now has a
major presence at National Federation of the Blind conventions
and other conferences where technology for the blind is
discussed, attacked the problem by developing its own screen-
access program: Screen Reader/2. Microsoft, which is only now
beginning to devote some resources to the problem, is providing
technical information to third-party developers on the theory
that this will give them a jump start in developing screen-access
programs for Microsoft GUI platforms.
In discussing the problems and challenges of the GUI, we
should also take note of the Unix operating system and the X
Windows graphical user interface. For here, too, the blind have a
problem. Although we can use a PC to communicate with Unix over a
network for access to text-based applications, X Windows, being a
graphical interface, is not as straightforward. Yet Unix and, by
extension, X Windows are gaining acceptance in the workplace. How
will blind people use this interface independently?
In this regard the work of the Disability Action Committee
for X (DACX) bears watching. This committee is directed by the
Trace Research and Development Center in Madison, Wisconsin. It
consists of Unix workstation vendors such as Sun Microsystems,
Digital Equipment Corporation, and IBM; researchers such as the
Trace Center and the Graphics, Visualization, and Usability
Center at the Georgia Institute of Technology; screen access
vendors such as Berkeley Systems; the X Consortium; and other
interested parties. The goal of the committee is to design and
implement standard access solutions to X Windows for people with
various motor and sensory impairments. I can tell you that a good
deal of the committee's work focuses on how to make X Windows
accessible to the blind. However, aside from the prototype
developed by the Mercator Project at the Georgia Institute of
Technology, no commercial screen-access programs exist for X
Windows today.
Although the GUI may be more intuitive and appealing to
sighted computer users, it is not intuitive and appealing to the
blind--at least not yet. Any blind person who has to learn to use
a graphically-oriented system will find that a significant effort
is required to master the intricacies of this interface. Even the
most technically sophisticated among us will need to learn more
about how GUI applications work before we can begin to formulate
any recommendations of substance in this area.
There is one important question that as blind consumers we
must continue to raise with developers of access technology for
the GUI. Can graphically-oriented information--including icons,
dialogue boxes, push buttons, scroll bars, and the like--be
conveyed to a blind computer user efficiently enough to permit
him or her to compete with sighted colleagues? Developers of GUI
access programs are, of necessity, concentrating on trying to
convey the textual information that is displayed on the graphical
screen. To the extent to which access programs for the GUI
attempt to convey information about icons and other pictographic
objects, efforts seem to be oriented toward presenting the
information as menus or lists from which the user makes a
selection. Inasmuch as most of the blind computer users with whom
I am acquainted have very little experience in this area, it is
not clear whether this approach will be the best one in the long
run.
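As one concrete illustration of the menus-or-lists approach just
described, the sketch below reads a dialogue box's controls to
the user as a numbered list. The types and names are invented for
this example, and printf merely stands in for speech or Braille
output; no vendor's actual design is being described.

    /* Sketch only: present a dialogue box as a list of its
       controls, one item at a time, so a choice can be made from
       the keyboard.  All names are invented. */
    #include <stdio.h>

    enum control_kind { PUSH_BUTTON, CHECK_BOX, EDIT_FIELD, SCROLL_BAR };

    struct control {
        enum control_kind kind;
        const char *label;     /* e.g. "OK" or "Print to file" */
        int selected;
    };

    static const char *kind_name(enum control_kind k)
    {
        switch (k) {
        case PUSH_BUTTON: return "button";
        case CHECK_BOX:   return "check box";
        case EDIT_FIELD:  return "edit field";
        default:          return "scroll bar";
        }
    }

    /* Announce the controls as a numbered list. */
    void announce_dialog(const struct control *ctl, int count)
    {
        for (int i = 0; i < count; i++)
            printf("%d of %d: %s, %s%s\n", i + 1, count,
                   ctl[i].label, kind_name(ctl[i].kind),
                   ctl[i].selected ? ", selected" : "");
    }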
I would like to say a few words to the trainers and
technology specialists who work in rehabilitation agencies for
the blind. You cannot content yourselves with present knowledge
and expertise, which are largely centered on DOS-based screen
access systems. Although these systems may have played a key role
in helping some blind people to get jobs, they will soon become a
thing of the past. You must keep up with what the bigger
companies are doing and gain experience with platforms such as
Windows, OS/2, Unix, X Windows, and GUI applications used by
today's businesses. Equally important, you must keep pace with
the developments taking place in the screen access technology
market--for the developments will come, and at an accelerating
pace.
Today the graphical user interface presents many problems
and challenges for the blind. All of us--consumers,
rehabilitation professionals, and screen access technology
developers--need to stop wishing for the GUI to go away and
confront the problem head on. The major companies developing
graphical operating systems, most notably IBM and Microsoft, need
to continue their efforts to help blind computer users access and
feel comfortable with their graphically-oriented platforms. Also
these companies need to make it easier for third-party developers
of screen-reading technology to reach into a GUI application to
extract the information that a blind person needs in order to use
it. More important, the computer industry as a whole needs to
make accessibility a primary consideration instead of an
afterthought.
As I said earlier, computers are synonymous with change. We
shouldn't look back. We should forge ahead and ensure that
everybody gets a crack at this technology, which promises so
much.

SUMMARIES OF PRESENTER REMARKS

From the Editor: As noted earlier, several of the speakers
did not have prepared remarks. The following are summaries of the
presentations they made to the conference:

[PHOTO: Portrait. CAPTION: Mohymen Saddeek]

Mohymen Saddeek

Ever since I entered this industry, I have been frustrated
by the gap between technology for the blind and that intended for
the general public. Broad market producers have been slow to
recognize the possibilities for talking technology. Now that they
realize there is actually demand for such items, the cost is
finally coming down.
My company, Technology for Independence, seeks to identify
the area between general-market technology and technology for the
blind and then work in that niche, where costs can be
underwritten by both markets rather than the blindness market
only. For example, we approached Johnson and Johnson, producers
of the Lifescan blood glucose analyzer, to see about developing
technology that could make it speak the information it gathers.
Company representatives admitted they had never recognized the
212,000 blind diabetics in the United States and the 47,000 new
ones each year as a significant potential market for the
Lifescan.
I don't mean to criticize producers in our field who develop
devices designed to do a single, very specific thing. Their
products are necessarily expensive. In fact, my company has
modified a commercially available cassette recorder to create our
Talkman 5, which is very small and records and plays four tracks.
By the time we designed several necessary parts and made other
modifications for use with National Library Service cassettes,
the cost had almost doubled. Many people were still very happy
with the product, but I was not. In the end Dr. Jernigan, Dr.
Herie, and Mr. Geisel of Recording for the Blind helped to
increase the market and therefore lower the price. It shows how
important cooperation in this field can be.
I do think that much can be accomplished simply by going to
the mainline producers and pointing out that blind consumers
can't use some of their products or that they have to pay much
more for appropriately modified versions. We worked with one of
these companies and helped them to produce a product that talked.
They produced thousands and sold a few hundred to blind people,
and the rest they successfully sold to sighted customers who
found the speech option attractive.
Of course, we are interested in remaining competitive, but
our mission is to produce high-quality products for blind people
as inexpensively as possible. In talking with producers of
medical technology, we have learned that it may actually be
possible for them to manufacture blood glucose analyzers that
talk from the beginning or that can be made to do so with only a
slight further modification. If it can be
done, the talking unit might well cost only $15 or $20 more than
the version with a print readout. This is the ideal kind of
solution, and it is what we are working toward in all our product
development.

[PHOTO: Elliot Schreier standing at podium microphone. CAPTION:
Elliot Schreier]

Elliot Schreier

While the problem of the graphical user interface is
extremely important, I wish to speak about problems of access for
blind people to what I will call public terminals and what
service providers, consumers, and vendors can do together to
solve the difficulties. The VCR is only the most obvious example
of consumer products with complicated video displays which
include graphics and which are very difficult or impossible for
blind people to use efficiently. They are also not very easy for
many sighted people to use. In the past the solution to making
such items accessible has been to get inside the box and hard-
wire in a talking or Braille display. But these were expensive,
one-of-a-kind solutions.
For several years at the American Foundation for the Blind
we worked on developing a camera that could read the LED or LCD
display. But the technology in all such equipment changes so
quickly that the AFB engineers kept finding themselves two steps
behind. Eventually we abandoned that effort to solve the problem,
but it may still have merit.
The so-called smart house, whose owner will be able to
program instructions for appliances from a distance, will provide
a solution to this problem: the data for each programmable device
will have to pass through a single point, where a trap could be
built in to translate the information into an accessible format.
Perhaps infrared technology holds the solution. It is
inexpensive and doable, but line-of-sight constraints are a
problem. A user must be near the appliance with the transmitter
or receiver pointing in the correct direction in order for it to
work.
There are a number of kinds of public-access terminals that
are inaccessible to visually impaired people. Transportation
system monitors are now mounted high on walls and poorly lit.
Moreover, they may use almost incomprehensible abbreviations to
identify airports or stations. AT&T has now installed Series
2000 phones in airports and some other places which include an
entire CRT screen of information to which there is no access for
anyone who cannot read it directly. Today there are public-access
fax and photocopy machines and terminals that provide visual
displays of information to those who punch in the proper code,
often using a key pad on a touch screen which itself is unusable
by blind people. All this replaces the information desk with a
person to answer questions.
Automatic teller machines (ATMs) are now beginning to be
accessible. Some manufacturers have put Braille key-caps on the
keys, but without a method of reading the print display there is
still no way to know whether one has made an error during the
transaction. Moreover, some banks provide screens of customer
information or the option to conduct transactions in different
languages. These features are inaccessible to the blind user. The
technology is, however, already available to mark the ATM card
with a strip that would automatically cue the machine to provide
large-print text, for example.
One possible tool for solving some access problems would be
to use high-frequency radio-wave transmitters and receivers. We
might go to the Federal Communications Commission to petition for
designation of one frequency to be used for disability-related
technology. Manufacturers would then have to agree upon a method
of encoding information about what appears on the LCD displays of
equipment and in what form it appears. Legislation similar to
that which requires that televisions manufactured after July,
1993, include a closed-captioning chip for use by deaf people
could require installation of high-frequency transmitters for the
use of disabled consumers. The small radio receivers could then
be adapted to the needs of the individual user: Braille, voice,
large print, etc. There are many applications that could be
developed and some problems to work out--how does one sort out
information coming from or going to different appliances
simultaneously? These problems are solvable, and the group
assembled for this conference is the most powerful one around and
the most likely to lobby successfully for what is needed.

[PHOTO: Portrait. CAPTION: Deane Blazie]

Deane Blazie

These remarks are chiefly about the thought process involved
in research and development. The first problem is to get over the
hurdles, the obstacles to finding new solutions. One must look
hard at what it will take to solve a given problem. In 1986 I
didn't want to look at what had to be done to make a Braille
notetaker small enough to be truly portable. Having abandoned a
wooden case several years before when we were producing the
Cranmer Modified Brailler in favor of vacuum-forming the box, I
had trouble picturing the Braille 'n Speak with anything other
than a plastic case, but I was told that we should be injection-
molding the cases. I had to learn about this new technology and
risk a lot on it, but having done so, we found that the Braille
'n Speak cases are very inexpensive because they are injection-
molded.
I didn't want to hear that we should develop a disc drive
for the Braille 'n Speak. Since 1987 people have been telling me
that they wanted one. I thought buying a PC for $1,000 was a
better solution, but we finally listened to people and developed
what they wanted. It is a good piece of technology--probably the
biggest bargain we offer, considering what is in it.
The same thing happened in developing the Type 'n Speak.
People who didn't know Braille but who wanted a lightweight
notetaker began talking about having Blazie Engineering develop
such a device. Since they could use laptops with speech output, I
didn't take them seriously for a couple of years. But finally we
did design a notetaker that was small, light, and easy to use,
like the Braille 'n Speak. The decision required that we examine
our consciences to determine whether we just didn't want to do it
because it would be lots of work or whether it was not in the
best interest of users. Now I am glad that we decided to go
forward.
There are certainly still hurdles we haven't yet gotten over
in some projects: cost and technological barriers, for example.
But we must always listen to our customers and to the little
voices inside our own heads. The first tell us what we should be
doing; the second indicate what we have to overcome.
All of us in this room are service providers. Those who
think they are only technology producers had better think again.
If you don't make a point of listening to customers and giving
them what they need and want, you will have real problems. At
Blazie Engineering our problem is that, despite wanting to stay
small, we keep growing, which means hiring more people and, for
me, doing more management work. This means in turn that I have
less time to talk with customers and keep in touch with what they
want. The person who leads the company and has the ideas must be
the one to talk to the customers. I don't have a good solution to
this problem since I don't like letting anyone else run the
company either.

[PHOTO: Portrait. CAPTION: Greg Lowney]

Greg Lowney

From the Editor: Immediately before Mr. Lowney spoke, Dr.
Jernigan admitted to the group that, despite all that had been
said that morning and in previous discussions, he still did not
understand whether icons are more useful than words on a computer
screen and what the precise nature of the GUI problem is. Before
turning to his assigned subject, Mr. Lowney attempted to address
Dr. Jernigan's questions. This is the substance of what he said:

The challenge of an operating system like Windows is to
allow the screen reader to recognize the symbols on the screen so
that it can interpret them in words to the listener or Braille
reader. Mostly these problems have now been solved, thanks to
many of the people at this conference. Once the symbol-
interpretation problems have been solved, it is really no more
difficult to write a screen-reading program for applications
using a GUI-based operating system than for those using DOS. The
major difference is that, instead of there being only text to
interpret, there may now be graphics as well.
We often overlook the fact that most of the GUI applications
such as word-processing, spread-sheet, and data-base programs are
not very graphic. There may be a picture of a printer, but even
where the user can point the mouse at a picture, it is also
possible to accomplish the same thing from the keyboard with text
menus.
People are moving toward the graphics-based programs because
they allow one to mix art work and text simultaneously, rather
than pasting up the art later. Also people like to see the fonts
and other graphic alterations they are writing into their
documents, and with a GUI it is possible to do so.
Virtually all the GUI applications on the market today could
be produced fairly easily in a pure text mode. The reason they
are not probably has to do with limited resources. If only one
version is to be produced, it will use a GUI platform simply
because it is more popular.
Finally, one reason people prefer graphics is that lots of
them recognize pictures faster and remember what they convey
longer than they do reading words. So, while most generally-
recognized symbols--stop signs, rest rooms, and lodging signs,
for example--still incorporate words as well as pictures, there
are many people for whom the graphic is the effective element of
the symbol.
The question I intend to address today, however, is what it
takes to make an environment like Windows, or any other
environment for that matter, accessible.
(1) The operating system must have a fundamental level of
accessibility built in. A mouse, for example, must not be the
only means of moving around the screen; keyboard access must also
be possible. Moreover, there must be a fundamental level of
access to the application program--even when specialized programs
(like screen readers) are not available--on public-access
terminals or for the use of blind people trouble-shooting
problems on a sighted colleague's computer, for example. Work is
currently being done to establish the ground rules for such
access by a group at the Trace Center in Madison, Wisconsin.
(2) There must be a range of access aids for users to choose
among, depending on their personal preferences and computer
needs--for example, a number of screen readers on the market.
(3) The creators of the application programs must do what is
necessary to make it possible for specialized vendors to write
the accessibility programs. They must, for example, provide
documentation that will allow screen-reader programmers to build
on and hook into the operating system. However, without doing
additional work, Windows programmers cannot provide more
information to other programmers about the system than it
actually has available.
(4) It is therefore necessary for operating-system
programmers to define new mechanisms designed to deliver more
information to specialized applications vendors so that they can
get the data they need to solve their problems. Examples of this
need would be identifying the active point--what I call the
visual locus--the point where the computer user's attention is
focused. Another would be to have the applications programmer
signal the presence of a graphic image and what it means (a
sketch of what such a mechanism might look like appears after
this list). We have not yet gotten to the point of accomplishing
these things, but we are moving in the right direction.
(5) Having defined what we want applications programs to do
in order to work well with a screen reader, how do we ensure that
applications will be well-behaved? For the first time Microsoft
is now producing guidelines for general-market Windows-application
producers to tell them what their programs need to do to be
accessible to people with disabilities. The problem is that, no
matter how many guidelines we establish about what should happen,
there are always many more areas in which something profound can
go wrong. In addition, the mainstream-application producers
depend on distinguishing themselves from everyone else and
demonstrating why their products are superior to the
competition's. This natural tendency, which is unquestionably
strengthened by market forces, works against our efforts to
persuade these people to design programs that behave consistently
enough for the access vendors to deal with them all.
I believe (and a number of other people agree with me) that
the only way around this problem is to create new Application
Programming Interfaces (API's) that can be used by access
programmers. These programmers don't care what a particular
application does; they just need to know what is happening. How
can we make these applications producers do what is necessary?
One way is by legal requirement. In my view the Americans with
Disabilities Act is currently useless in this respect. Section
508 of the Rehabilitation Act now has more teeth than it used to,
and this is making some applications producers sit up and take
notice, but only those who do enough business with the federal
government to have the loss of that business make a difference to
their profits. Lawsuits may be another possibility, but I don't
think they will work very well, and the time for that strategy is
in the future. The profit motive would be the best method to use
if there is a way to make the argument that application producers
will lose money by refusing to accommodate blind users, but we
don't have figures to show how much business they would lose. We
don't even know how many blind people are using GUI, much less
the dollar amount of their business.
But even if an application producer were to decide to write
a well-behaved program, its programmers wouldn't have any idea
how to do it right. Perhaps the motivation to solve this problem
could be provided by establishing access certification, reports
of which could then become a part of mainstream reviews of the
application. Such certification would have to be done by some
impartial body. How much would it cost? Who would provide the
funding and the personnel? These are important questions, and the
group in this room should have a hand in answering them. My own
view is that we must find ways of tying the features needed by
the access community to attributes that will be noticed and
wanted by the larger computer-buying public. For instance, when
we are adding a new feature to Windows, I go through it trying to
think of ways we could tie something to it that would make
applications better-behaved for disabled people.
(6) Mainstream applications must be usable. A program in
which it is possible for one window to hide another is not easy
for anyone to use. Pressure should be brought to bear on
applications producers to fix such problems. This is beginning to
happen, but it is important that access issues be aired whenever
the question of usability surfaces. We must provide input at all
phases of product development.
(7) Once all the problems are solved, the end-user must know
about the solutions. The standard documentation, available to
mainstream users, must be available in accessible formats.
Additional documentation must be developed to outline the
specific ways in which the application works with access
programs--hints about how to make it work. Finally, there must be
good training materials: introductions and tutorials.
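By way of illustration of the mechanism mentioned in point 4,
here is a minimal sketch of how an application might announce the
meaning of a graphic image to access software. The names
AccHookImageLabels and AccLabelImage are invented for this
example; they are not part of Windows or of any existing
accessibility interface.

    /* Hypothetical sketch of point 4: a way for an application to
       tell access software what a graphic image means.  All names
       here are invented; no real API is being described. */
    typedef void (*IMAGE_LABEL_HOOK)(int x, int y, int width,
                                     int height, const char *meaning);

    static IMAGE_LABEL_HOOK g_image_hook = 0;

    /* A screen reader or other access aid registers to be told
       about labeled images. */
    void AccHookImageLabels(IMAGE_LABEL_HOOK hook)
    {
        g_image_hook = hook;
    }

    /* The application calls this alongside its drawing code, e.g.
       AccLabelImage(12, 40, 32, 32,
                     "printer icon: prints the document"); */
    void AccLabelImage(int x, int y, int width, int height,
                       const char *meaning)
    {
        if (g_image_hook)
            g_image_hook(x, y, width, height, meaning);
    }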

Raymond Kurzweil

From the Editor: Dr. Kurzweil was unable to stay for the
general discussion of the graphical user interface, but he made
several comments before his departure. Here they are:

One clear answer to the question of why GUI is superior to
text-based operating systems is WYSIWYG (what you see is what you
get). This means that the screen display shows the format of the
document exactly as it will look when printed. This includes type
font and size, columns, bullets, graphics, and charts and graphs.
Because information not included in the actual words of the text
is immediately and accurately communicated by a WYSIWYG
application, it is more attractive to most users.
It would be useful if we could develop an agreed-upon method
of communicating format, chart, and graph information aurally and
in Braille. That would go a long way toward solving access
problems.
But we must not concentrate exclusively on computer output
problems. The pen-based computing technology will also leave
blind people at a disadvantage. Speech recognition, which is
faster than writing with a pen, may provide the ultimate solution
to input-technology difficulties, but even so, computer input is
becoming a real problem. Meeting these challenges should be among
the goals of this group.

SUMMARY OF FRIDAY CONFERENCE DISCUSSION

Following presentations on the graphical user interface, the
remainder of the Friday program was devoted to a discussion among
service-providers, consumers, and vendors of the issues raised
during the conference. The following is a summary of that
discussion.

Jim Fruchterman: I want to try to answer the question of why
we are switching to the GUI and what the benefits are. There is
an element of fashion, but like the change
from the long-playing recording to the compact disk, once the
shift has occurred, there is no going back, because there are
several real benefits to the GUI operating systems. These arise
from some problems that people were having with DOS--chiefly its
relative difficulty for beginners. The success of the Macintosh
proved that people who knew nothing about computers could sit
down in front of a computer and do something constructive within
a short time. This ties into what Elliot Schreier was saying
about public terminals. They use graphics because without the
pictures many people could not use them.
There are some more technical reasons for moving to GUI. One
is memory; DOS just does not have the capacity to do some things
we want to do. Optical character recognition (OCR) is an example.
You cannot get an OCR program to run in DOS. With Windows a lot
more applications can be used together without their stepping on
each other's toes, just because there is more memory available.
Finally I would say that there is now something of a
backlash against iconography because people don't know what the
pictures mean. So now we are getting a floating help feature in
which one can pass the mouse over the icon, and a little box
appears in the corner of the screen in which words explain what
the picture means. So it is not only the blind who are having
problems with the icons.
Noel Runyan: None of us wants to lose the real benefits of
GUI, but we must encourage manufacturers to avoid meaningless
graphics and to provide us with consistent indicators so that we
can make access programs work.
Kenneth Jernigan: I urge you to comment on ways in which
this group might bring about solutions to these problems. We need
suggestions about what we should be doing together to influence
the situation.
David Andrews: Section 508 of the Rehabilitation Act
requires that the federal government purchase only equipment that
can be made accessible to disabled people. This includes
computers, operating systems, fax machines, everything. The
government is not complying with the law and is not enforcing it.
Perhaps we could agree on ways of bringing pressure to bear on
the government to live up to its regulations for itself.
Tim Cranmer: We need ongoing communication between these
conferences. I invite anyone with an electronic mail address to
give it to us so that we can put you on our research and
development discussion distribution list. We have been slowly
expanding this group for the last year and a half, and as of
January 1, 1994, we will include anyone who wants to be a part of
it. Our Internet address is nfb@access.digex.net.
Marc Sutton: This group can provide a forum through which
the vendors of screen-access products for graphical user
interfaces can define the standard by which new software can be
evaluated. Operating-system developers like Microsoft, Apple, and
the various Unix systems providers as well as the applications
developers that write for these operating systems could then have
a clear standard to determine whether or not they meet the
requirements. Having such a standard will make it easier for us
to file lawsuits using the teeth of the ADA and Section 508. I
also think that, working together, consumer organizations and
vendors could create tutorials with tactile screens that could be
very helpful. Some of this is already being done.
Paul Fontaine: I am from the General Services
Administration. Access will not be achieved by one strategy
alone. This group has tremendous political influence, and you
should use it to identify all the pressure points--economic,
legal, legislative, etc.--and then apply the necessary pressure.
The amendments to the Rehabilitation Act mandated that Section
508 be updated. One of my projects is to get several governmental
agencies together to update the old Section 508 regulations on
federal procurement. The political reality is that in the current
administration there is less emphasis on regulations and
mandates. They are trying to make government run more like
business. So, while it is important to rewrite the regulations,
the political reality is that they will have less influence than
they might have had in previous administrations. Finally, I would
point out that the DOS problems were completely solved about the
time that DOS ceased being generally used. I think that the
Windows solutions will probably emerge about the time that
Windows is no longer on the cutting edge. Maybe this group needs
to focus its efforts on solving the problems with the operating
systems that have not yet been released. Both Microsoft and IBM
have radically different systems in development. That is where we
need to be looking.
Kenneth Jernigan: You are, of course, right except that
there are a lot of employed blind people out there who are going
to become unemployed while Windows and the other operating
systems are being used if we don't find a way of helping them.
Jim Thatcher: The resources required to produce these new
operating systems are rising exponentially, and the funds that
businesses like IBM are prepared to spend on access programs are
shrinking rapidly. The architects of new systems are interested
in helping, but when the financial crunch comes, business demands
will win out over what is right, unless the impetus to keep the
systems and programs accessible comes from above. This group must
get to the CEOs of these companies and persuade them of the
importance of maintaining a corporate commitment to
accessibility.
Tuck Tinsley: Is it possible and would it be helpful for the
American Printing House to gather a group together to produce a
tutorial for school-age children on Windows? [Chorus of "No" and
"We don't know enough to do it."]
Curtis Chong: Anything you could do to provide Braille and
tactile graphic materials to teach blind children about GUI-based
programs would be useful, but you must be careful to choose the
right GUI. Probably more kids in school are exposed to Apple
programs than anything else, and we have access programs that
have solved the screen-review problems for that system, so such a
tutorial might be helpful.
David Andrews: Mr. Fontaine says that new federal
procurement regulations will be written, but we must not forget
that in the meantime there are regulations on the books which
should be enforced. We cannot go to the pressure points until we
are agreed on what we want to ask for, so arriving at that
agreement is a very important first step.
Noel Runyan: It is easy to be depressed when looking at the
complexity of the GUI problem facing this field, but I remind you
that, when computer terminals were first on the horizon in the
sixties, there was lots of talk about how blind programmers were
bound to lose their jobs because their computers would no longer
be accessible. But we found good solutions to the problems that
faced us then, and I feel optimistic that we will do it again
this time. Both the government and business are beginning to
understand the importance of accessibility, and that will help.
Deane Blazie: One substantive thing we could do right now is
to go through the Federal Register and determine where the
largest automatic data processing (ADP) contracts are going. I'm
sure Windows would be included. Then the General Services
Administration (GSA) could demand that an access clause be
written into the contract in compliance with Section 508.
Kenneth Jernigan: Rather than GSA, we should get some of our
friends in the Senate and the House to approach the bureaucrats.
GSA officials can ask, but when it's the chairman of a key
committee, the bureaucrats will be much more interested in
complying.
Greg Lowney: The three largest procurements I am aware of
right now--the IRS, the Justice Department, and the Coast Guard--
all have access language; two quote verbatim the guidelines that
the GSA put forward. Those guidelines are outdated and do need to
be revised. The question is whether they will be enforced. There
are two mechanisms for accomplishing this. First, although the
people letting the bids can choose to ignore the guidelines, they
can insist that the guidelines be met at the time the bid is
accepted. Alternatively, if a contract is awarded without
compliance, another company can challenge the bid.

Friday Afternoon Session

Dr. Jernigan had asked Curtis Chong to gather a small group
together during lunch to discuss actions that conferees might
jointly take at the conclusion of the meeting or before the next
one. The group reported that another gathering of technologically
sophisticated people should soon be called together to discuss
the GUI at greater length and in greater depth. Details of a
possible certification system should be worked out, and thought
should be given to providing better implementation of Section
508. Dr. Jernigan then asked the group to bring a statement or
resolution to the service providers and consumers meeting
Saturday morning for their consideration.
Lloyd Rasmussen: A different graphics problem faces
students and math teachers in high schools and community
colleges. It is caused by small graphing calculators, which are
too tiny to be made to talk, and increasing numbers of people do
not know how to deal with instructional methods that depend on
this type of device.
Gerry Braak: I wish to urge increased attention to the
problem of access to digital readouts. Blind professionals, like
physiotherapists, must ask other people to read them the data
about their patients provided by the LCD displays on their
equipment.
Jim Sanders: We at the Canadian National Institute for the
Blind are very concerned about the problem of marketing. If
people don't know that the technology they need exists, in their
ignorance they will live increasingly restricted lives. We must
find better ways to educate people about what is out there and
how they can get it. Last March the CNIB launched the Techni-bus,
which is a forty-foot coach, fitted with display areas and as
much technology as we could cram into it. It has traveled 75,000
miles and visited 150 communities. We are still gathering the
results of this effort, which was complex and expensive, but it
has certainly educated many more people about the technology
available to them than we have ever reached before. The CNIB is
eager to hear from anyone interested in working on this marketing
problem.
Judy Dixon: With respect to the National Information
Infrastructure, two entities are being formed. One is a national
task force, composed of people who understand the concepts
involved in this technology, and the other is a national advisory
group, made up of policy makers. The blindness field should be
represented on both of these bodies to insure that we will have
the access we have come to depend upon. I also hope that those
involved would be interested in Braille and other forms of access
beyond speech.
Frederick Downs: I am the Director of the Prosthetic and
Sensory Aids Service for the Department of Veterans Affairs. I
pay for the special devices prescribed for the nation's veterans.
I need immediate information about emerging technology to
disseminate to 172 medical centers. I buy in volume for national
agencies and, in the future, for state agencies. By Congressional
decree I
have absolute control over a budget which last year was
$240,000,000. Volume purchasing has made a big difference in
prosthetic purchases, and we are now making a big push to help
the blind. I hope to establish a cost-effective system for
purchasing which could benefit state agencies and others. I
welcome comment from anyone who can help.
John Brabyn: This group should take seriously its role in
and ability to foster targeted research. Dr. Cranmer's R and D
Committee has been very helpful in identifying the correct
problem and then encouraging people to focus on solving it. The
problems that have been identified here are important.
Mary Frances Laughton: I am pleased to say that the Canadian
government has just announced its plans for the Canadian
information superhighway, and the announcement included reference
to the importance of insuring that all disabled people have
access to it. We are doing some interesting research and
development, and I would be happy to put anyone on our mailing
list.
Shirley Dupmeier: I am a consumer who knows nothing about
technology. My plea is that, when you are doing your research and
marketing, remember those of us on the low end of the technology
ladder. We are out there, and we need your help too.
Paul Edwards: When corporations upgrade to new levels of
computer technology, organizations of and for the blind might
urge them to consider the tax advantage of making the old
equipment available to disabled people who would not otherwise
have access to it. Then consumer groups, Tech Act groups, and
perhaps state agencies could see about adding the access
technology to it to make it available to many more blind people
than would otherwise be able to buy it.
Caryn Navy: It is important to remember the low-end
consumer, but it is also important to remember to budget some
funding for high-end needs as well. So many of the new programs
require lots of memory and speed. I would like to commend
Arkenstone for making its speech driver library available to
vendors. Their generosity is a good model.
Susan Spungin: On behalf of the American Foundation for the
Blind and (I suspect) everyone else here, I would like to thank
the National Federation of the Blind and Dr. Jernigan for
bringing these three groups together for this important
discussion.

******************************
If you or a friend would like to remember the National
Federation of the Blind in your will, you can do so by employing
the following language:
"I give, devise, and bequeath unto National Federation of the
Blind, 1800 Johnson Street, Baltimore, Maryland 21230, a District
of Columbia nonprofit corporation, the sum of $_____ (or "_____
percent of my net estate" or "The following stocks and bonds:
_____") to be used for its worthy purposes on behalf of blind
persons."
******************************

SUMMARY OF THE SATURDAY CONFERENCE DISCUSSION

At the close of the Friday conference session, the vendors
departed, leaving service providers and consumers to discuss
issues together on Saturday morning. Here is the substance of
that discussion:

Dr. Jernigan began by asking the group if there was interest
in conducting another conference in about two years. He indicated
that the NFB would be pleased to host such a meeting. He expected
that by that time there would be a low-vision aids center as part
of the International Braille and Technology Center for the Blind.
The group unanimously indicated its interest in holding such a
conference.
Euclid Herie: At the last conference I agreed to chair a
committee to look into group purchasing, not with an eye to
hurting vendors, but rather to take advantage of economies of
scale. There was also a hope that we might be able to encourage
the development of products or increase their availability
through volume orders. In the past few months we in Canada have
organized a large consortium of our own which includes provincial
entities as well as the CNIB. I have been talking recently about
this matter with people in the U.K., France, Australia, Spain,
New Zealand, Hong Kong, and Japan. I think that the time is right
for developing a world market. Because Gary Magarrell and Jim
Sanders of my staff will be working together with our program, I
would suggest that one or both of them chair this committee in
the next two years.
Judy Dixon: I recommend that, at the Third U.S./Canada
Conference, part of the agenda be devoted to non-computer
technology: creative use of bar-code-scanning technology, talking
book machines, slates and styluses, and who knows what all. What
do we need? What is out there that people don't know about?
Dr. Jernigan asked if she would do the necessary research
and make the presentation at the conference, and she agreed to.
Curtis Chong: A group representing vendors, service
providers, and consumers got together yesterday and drew up the
following resolution:

2nd U.S./Canada Conference on
Technology for the Blind

Resolution

WHEREAS, throughout North America the graphical user
interface (GUI) is becoming more widely used on today's computer
systems; and
WHEREAS, because GUI applications are incompatible with
text-based computer access technology for the blind, their
increased use effectively reduces the ability of the blind to use
computers independently without resorting to relatively new and
immature approaches; and
WHEREAS, if blind people are to continue to enjoy the
benefits that independent use of the computer has brought, access
to GUI applications must be ensured; and
WHEREAS, for the blind full access to GUI applications can
only be achieved by a cooperative effort between blind consumers,
those who develop GUI access technology, and developers of GUI
platforms and applications; and
WHEREAS, in the United States stronger enforcement of laws
such as Section 508 of the Rehabilitation Act and the Americans
with Disabilities Act will help to ensure that accessibility to
GUI applications for the blind is a primary consideration, as
opposed to an afterthought; and
WHEREAS, in Canada the provisions of the Canadian Charter of
Rights and Freedoms and applicable provincial human rights
legislation specifically prohibit discrimination on the basis of
disability: Now, therefore
BE IT RESOLVED, by the 2nd U.S./Canada Conference on
Technology for the Blind, in meeting assembled this sixth day of
November, 1993, in the city of Baltimore, Maryland, that this
conference call upon all developers of GUI applications to take
advantage of any standard accessibility application programming
interfaces (APIs) that may be provided in future operating
systems; and
BE IT FURTHER RESOLVED, that this conference call upon the
General Services Administration (GSA) in the United States to
take steps to enforce more strongly Section 508 of the
Rehabilitation Act, including educating its contract negotiators
about what it means for the blind to have true access to a GUI
application; and
BE IT FURTHER RESOLVED, that this conference call upon
federal and provincial governments in Canada and all groups and
organizations over which the Canadian Charter of Rights and
Freedoms and applicable provincial human rights legislation have
effect to take steps to ensure that discrimination on the basis
of disability is not perpetuated by denying access to computer
technology; and
BE IT FURTHER RESOLVED, that this conference call for the
convening of a future meeting at which the GUI accessibility
problem for the blind can be discussed in depth by consumers,
developers of GUI access technology, and key developers of GUI
platforms, with a view to making more substantive recommendations
in this area.

Tim Cranmer: In stipulating that consumers be part of the
conference we are proposing to call to discuss the GUI, let us be
careful that we do not try to mandate the establishment of
standards for evaluating such technology. It will be some time
before any of us will be knowledgeable enough to do so
intelligently.
President Maurer suggested that some of the applications
producers might be concerned about the antitrust implications of
a conference of the type being contemplated. David Andrews
thought they would be more likely to fear losing their
competitive advantage by discussing these matters with other
producers. Dr. Jernigan said that we would look into getting a
Justice Department letter giving its blessing to any solutions
coming out of the conference.
With all these understandings in mind, the group voted
unanimously to approve the resolution.
Elliot Schreier: Perhaps this group should think about
establishing a periodic publication that would collect in one
place some of the expertise we have. For example, I have copied
and circulated David Andrews's recent evaluation of reading
machines, published in the Braille Monitor. It would be helpful
to have materials like that put together.
An exchange then took place between Paul Edwards and Kenneth
Jernigan. Mr. Edwards said that there continues to be a problem
in providing appropriate and sufficient training for blind people
who are beginning to use computers. We should see that there is
some way of determining who is equipped to provide such training.
Dr. Jernigan asked if he wasn't getting close to suggesting the
establishment of standards. Edwards said no, but all the groups
here should work together in a collaborative way to agree upon
the content of the training.
Gary Magarrell: Agencies cannot do all the necessary
training in this area, but they should make sure that the people
for whom they provide equipment have a tutorial component in the
rehabilitation package. It is easier to raise the necessary funds
to have someone associated with the vendor train a client than it
is to maintain a person on staff to do that training for all
clients. The vendors must, of course, then be certain that the
people they have hired as trainers know the equipment well enough
to train each user appropriately.
Kenneth Jernigan: Consider for a moment the issues facing
the NFB with the International Braille and Technology Center for
the Blind. We have now invested over a million dollars in
equipment. We have just put six hundred thousand into remodeling
the new facility. We have the ongoing cost of David Andrews's
salary and that of the support staff working with him. And really
we should have at least one more professional down there. We
provide all kinds of services to the public: evaluations of
technology, tours, consultation, and training. We will find a way
to keep this program going because it is important, but it would
be easier if we could establish some means of generating funds
through center projects.
David Andrews: I understand the problem that Paul alludes
to. The two biggest problems we face in the technology field are
financing and training, and I don't know how to solve either of
them. The field is still so new that I don't believe we can
establish requirements for formal educational credentials; a
number of us are self-taught and couldn't meet such
qualifications. Frankly, most of the worst teaching is being done
by people who are just not good teachers. They know the material,
but they are not skilled in communicating it.
Euclid Herie: We at the CNIB intend to establish a formal
relationship under contract with the NFB to take advantage of the
facilities here. We can save thousands of dollars each year by
being able to make informed decisions about equipment purchases
after coming here and talking with Mr. Andrews.
Several things that have been said here during this
conference have been particularly important. It is critical that
we find a way to get ahead of the technology problems so that we
are not always winning a battle only to find that the field of
combat has completely changed. Also, our Techni-bus reached
17,000 people in eight months, people who had not known that such
equipment was to be had. I suggest that in a month or so a small
group of people get together and listen to the tapes of this
conference in order to pick out the most important ideas. They
could then be circulated to the participants for individual
action as seems appropriate.
Kenneth Jernigan: On the matter of funding this technology
center--we would like to have some help in funding it if we can
get it, but, on the other hand, we don't want to put the resource
out of the reach of people who need it. Striking the right
balance is a problem.
Shirley Dupmeier: We must not sell short the intelligence
and energy of blind people. We must pass on information among
ourselves and make our own decisions and not think of ourselves
as helpless victims at the mercy of the vendors. But I would also
like to request that at the next conference cassette versions of
the agenda be available for people like me for whom neither print
nor Braille is a good alternative. Dr. Jernigan thanked her for
bringing the point to his attention and assured her that it would
be done.
Kurt Cylke: I think it will be important for the people who
agree to do things after the conference to report back about what
they have accomplished before the next meeting. We have heard
about two of the five things agreed upon at the last meeting, and
that points up the importance of follow-through.
Gerry Braak: I have been a proponent for a long time of
getting technology demonstrations to outlying areas as has been
done with the Techni-bus, but many of the people who see the
equipment comment afterward that there is no way they can afford
it. The economic component of this situation is very important.
Moreover, when youngsters leave school, where they have had
experience with computers, they will lose their skills quickly if
they do not immediately get jobs that use those skills. This is
another aspect of the problem.
Paul Edwards: I have a motion for consideration: moved that
this conference express its concern over the poor quality of
training available to many blind people in the field of
technology and urge all parties involved in work with technology
to seek ways to improve this unfortunate situation. The motion
died for lack of a second.
In closing Dr. Jernigan made the following remarks: We of
the National Federation of the Blind pledge to you that we will
continue to maintain the International Braille and Technology
Center for the Blind, bringing together every piece of Braille-
producing technology and every screen-reading program that we
learn about. We will remove the outdated ones and do our best to
keep this facility state-of-the-art. We will also staff the
Center appropriately. We contemplate establishing a low-vision
aids center as well, but we will not do so unless we can
anticipate having sufficient funds to make it state-of-the-art as
well. We do not need just one more run-of-the-mill low-vision
center in this country.
We have also placed before you the question of how we might
be assisted to do this work which must be done. Rehabilitation of
the blind and technology are now inextricably joined. That is why
we established this Center and why we intend to keep it updated.
It is in the best interest of every state and private agency and
every blind consumer that we find the financing to do so. Working
alone, we cannot possibly make this Center as successful as it
could be. Since it is already better than anything else of its
kind in the world, if we do not receive some assistance, we will
never know how good it might have been.
The first conference, held two years ago, succeeded in
meeting its goals: to enable us all to know each other better and
begin to work together in new ways, and to establish ourselves
and this field as a force in specialized technology. In addition,
we went away with a greater knowledge of and appreciation for
technology and the part it was playing and would continue to play
in the lives of blind people. Because the decision-makers in the
field were here, what we received at that conference has helped
determine the direction of work with the blind in these last two
years. I also believe that the conference stimulated some joint
purchasing and some new research and development of technology.
This conference, too, has met all of those goals again, but
in addition, it has given us a vision of what the future can be
and what a nightmare it will be if we do not solve the technology
problems that face us now. Technology will be a part of our lives
in the future, whether we will or no, and the people here will
determine what impact blind people have on that technology. You
notice that I did not say "what impact technology will have on
blind people." There is no question that that effect will be
profound, but the impact blind people will have on the technology
that shapes our lives will be determined by the people at this
conference. And we have helped set the tone for that. Personally,
I am much more concerned about this aspect of the conference than
I am about any specific technology question we have discussed.
Each of you has honored the rest of us by your
participation, and our collective power and potential are greater
than the sum of the individuals who have been here. It has been
an honor for the National Federation of the Blind to have you
here, and we pledge to do our best to welcome you again in 1995
and to plan an even better conference at that time. In the
interim we will do all that we can to promote usable and useful
technology and to work cooperatively with you to create better
technology for blind people.

