AIList Digest            Sunday, 19 Oct 1986      Volume 4 : Issue 224 

Today's Topics:
Mathematics - PowerMath,
Learning - Neural Networks & Connectionist References,
Expert Systems - ESE

----------------------------------------------------------------------

Date: 15 Oct 86 19:30:47 GMT
From: hao!bill@seismo.css.gov (Bill Roberts)
Subject: Algebraic manipulators for the Mac

Has anyone in netland heard of any algebraic manipulator systems for the
Macintosh? I recently saw where a company called Industrial Computations Inc.
of Wellesley, MA is marketing a program called "PowerMath". The ad reads

Type in your problem, using conventional math notation, and
PowerMath will solve your calculus, algebra and matrix
problems. PowerMath does factorials, summations, simultaneous
equations, plots, Taylor series, trigonometry and allows
unlimited number size.

That last statement ("...unlimited number size.") hints at PowerMath being a
symbolic computation engine as opposed to an equation solver like TKSolver.
Thanks in advance for any input.

Bill Roberts
NCAR/HAO
Boulder, CO
!hao!bill

"...most people spend their lives avoiding intense situations,
a Repo man seeks them out!"

------------------------------

Date: 12 Oct 86 23:10:00 GMT
From: uiucuxa!lenzini@uxc.cso.uiuc.edu
Subject: To: Bob Caviness


To : Bob Caviness

Sorry about this posting but I can't seem to get through to Bob Caviness
at the University of Del.

Here are a couple of integrals that you can cut MACSYMA loose on. I've been
trying to use the program myself, but the results I've been getting are
unbelievably complex (read: eight-page constants that I can't seem to simplify).
Hopefully you have expanded the integration capabilities enough to handle this.

Thanks again.

I1 = integral from 0 to infinity of

        (A + B*x^(1/2))^2 + (C*x + B*x^(1/2))^2        D
        ----------------------------------------  *  ---------  *  cos(E*x) dx
        (A + B*x^(1/2))^2 + (x + B*x^(1/2))^2         D^2 + x^2

I2 = the same integral as I1, but without the cos(E*x) factor.



Any help would be greatly appreciated.

Thanks in advance.

Andy Lenzini
University of Illinois.

------------------------------

Date: 17 Oct 86 23:55:03 GMT
From: decvax!dartvax!merchant@ucbvax.Berkeley.EDU (Peter Merchant)
Subject: Re: Algebraic manipulators for the Mac

> ...I recently saw were a company called Industrial Computations Inc.
> of Wellesley, MA is marketing a program called "PowerMath". The ad reads
>
> Type in your problem, using conventional math notation, and
> PowerMath will solve your calculus, algebra and matrix
> problems. PowerMath does factorials, summations, simultaneous
> equations, plots, Taylor series, trigonometry and allows
> unlimited number size.
>
> That last statement ("...unlimited number size.") hints at PowerMath being a
> symbolic computation engine as opposed to an equation solver like TKSolver.
> Thanks in advance for any input.
> Bill Roberts

I had a chance to use PowerMath and was severely impressed. It does all sorts
of mathematical functions and has a very nice Macintosh interface. I have a
feeling, though, that this program was originally designed for a mainframe.

I would love to see PowerMath run on a Mac with a Prodigy upgrade, or maybe
a HyperDrive 2000. I used one on a 512K Mac and, while it was very good,
was the most slowest (yes, I meant to do that) program I had ever seen. The
program took minutes to do what TK!Solver did in seconds.

On the other hand, it did do everything it advertised. Made good graphs, too.
If time is not a problem for you, I'd really suggest it. If anyone has details
on it running on a Prodigy upgrade, PLEASE LET ME KNOW!
--
"Do you want him?!                                    Peter Merchant
Or do you want me!?
'Cause I want you!"

------------------------------

Date: 18 Oct 86 15:00:39 GMT
From: clyde!watmath!watnot!watmum!bwchar@caip.rutgers.edu (Bruce Char)
Subject: Re: Algebraic manipulators for the Mac

There is an article by two of the authors of PowerMath in the Proceedings of
the 1986 Symposium on Symbolic and Algebraic Computation (sponsored by
ACM SIGSAM): "PowerMath, A System for the Macintosh", by J. Davenport and
C. Roth, pp. 13-15. Abstract from the paper:

PowerMath is a symbolic algebra system for the MacIntosh
computer. This paper outlines the design decisions that were
made during its development, and explains how the novel
MacIntosh environment helped and hindered the development of the
system. While the interior of PowerMath is fairly conventional, the
user interface has many novel features. It is these that make
PowerMath not just another microcomputer algebra system.


Bruce Char
Dept. of Computer Science
University of Waterloo

------------------------------

Date: 17 Oct 86 05:34:57 GMT
From: iarocci@eneevax.umd.edu (Bill Dorsey)
Subject: simulating a neural network


Having recently read several interesting articles on the functioning of
neurons within the brain, I thought it might be educational to write a program
to simulate their functioning. Being somewhat of a newcomer to the field of
artificial intelligence, my approach may be all wrong, but if it is, I'd
certainly like to know how and why.
The program simulates a network of 1000 neurons. Any more than 1000 slows
the machine down excessively. Each neuron is connected to about 10 other
neurons. This choice was rather arbitrary, but I figured the number of
connections would be proportional to the cube root of the number of neurons
since the brain is a three-dimensional object.
For those not familiar with the basic functioning of a neuron, as I
understand it, it works as follows: each neuron has many inputs coming from
other neurons, and its output is connected to many other neurons. Pulses
coming from other neurons add to or subtract from its potential. When the
potential exceeds some threshold, the neuron fires and produces a pulse. To
further complicate matters, any existing potential on the neuron drains away
according to some time constant.
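The update rule described above can be sketched as follows (an illustrative
Python fragment, not the poster's actual C program; the threshold and decay
values here are arbitrary):

```python
def step(potential, inputs, threshold=1.0, decay=0.9):
    """One update of a single model neuron: incoming pulses add to
    (or subtract from) the potential, the neuron fires when the
    threshold is exceeded, and any remaining potential drains away
    by a fixed decay factor per time step."""
    potential = potential * decay + sum(inputs)
    if potential >= threshold:
        return 0.0, 1      # fire: reset the potential, emit a pulse
    return potential, 0    # stay quiet; the potential keeps leaking

# steady sub-threshold excitation eventually accumulates and fires
p, fired = 0.0, 0
for _ in range(10):
    p, fired = step(p, [0.3])
    if fired:
        break
```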
In order to simplify the program, I took several short-cuts in the current
version of the program. I assumed that all the neurons had the same threshold,
and that they all had the same time constant. Setting these values randomly
didn't seem like a good idea, so I just picked values that seemed reasonable,
and played around with them a little.
One further note should be made about the network. For lack of a good
idea on how to organize all the connections between neurons, I simply
connected them to each other randomly. Furthermore, the determination of
whether a neuron produces a positive or negative pulse is made randomly at
this point.
In order to test out the functioning of this network, I created a simple
environment and several inputs/outputs for the network. The environment is
simply some type of maze bounded on all sides by walls. The outputs are
(1) move north, (2) move south, (3) move west, (4) move east. The inputs are
(1) you bumped into something, (2) there's a wall to the north, (3) wall to
the south, (4) wall to the west, (5) wall to the east. When the neuron
corresponding to a particular output fires, that action is taken. When a
specific input condition is met, a pulse is added to the neuron corresponding
to the particular input.
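For concreteness, the environment and its input/output conditions might look
like this (a hypothetical Python sketch; the grid size and coordinate
convention are made up, and the poster's actual program is in C):

```python
SIZE = 5                      # hypothetical grid, bounded by walls on all sides
MOVES = {"north": (0, -1), "south": (0, 1), "west": (-1, 0), "east": (1, 0)}

def sense(x, y):
    """Wall-sensing input conditions (2)-(5) above; input (1), the
    bump, is reported by move() when an attempted move fails."""
    return {
        "wall_north": y == 0,
        "wall_south": y == SIZE - 1,
        "wall_west": x == 0,
        "wall_east": x == SIZE - 1,
    }

def move(x, y, direction):
    """Apply one of the four output actions; return the new position
    and whether the robot bumped into a wall."""
    dx, dy = MOVES[direction]
    nx, ny = x + dx, y + dy
    if 0 <= nx < SIZE and 0 <= ny < SIZE:
        return nx, ny, False
    return x, y, True          # blocked: stay put and report a bump
```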
The initial results have been interesting, but indicate that more work
needs to be done. The neural network indeed shows continuous activity, with
neurons changing state regularly (but not periodically). The robot (!) moves
around the screen, generally winding up in a corner somewhere, from which it
occasionally wanders a short distance away before returning.
I'm curious if anyone can think of a way for me to produce positive and
negative feedback instead of just undifferentiated feedback. An analogy
would be pleasure versus pain in humans. What I'd like to do is provide
negative feedback when the robot hits a wall, and positive feedback when it
doesn't. I'm hoping that the robot will eventually 'learn' to roam around
the maze without hitting any of the walls (i.e., learn to use its senses).
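One standard answer (offered here as a suggestion, not something from the
poster's program) is to make the weight change depend on a reward signal: a
reward-modulated Hebbian rule strengthens connections that were active when
things went well and weakens them after a collision. A Python sketch, with
an illustrative learning rate:

```python
def update_weight(w, pre_fired, post_fired, reward, rate=0.05):
    """Reward-modulated Hebbian rule: when the two connected neurons
    fired together, move the connection weight in the direction of
    the reward (positive for a clean move, negative for a bump)."""
    if pre_fired and post_fired:
        w += rate * reward
    return max(-1.0, min(1.0, w))   # keep weights bounded
```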
I'm sure there are more conventional ai programs which can accomplish this
same task, but my purpose here is to try to successfully simulate a network
of neurons and see if it can be applied to solve simple problems involving
learning/intelligence. If anyone has any other ideas for which I may test
it, I'd be happy to hear from you. Furthermore, if anyone is interested in
seeing the source code, I'd be happy to send it to you. It's written in C
and runs on an Atari ST computer, though it could easily be modified to
run on almost any machine with a C compiler (the faster the machine, the
more neurons you can simulate reasonably).

[See Dave Touretzky's message about connectionist references. -- KIL]


--
| Bill Dorsey |
| 'Imagination is more important than knowledge.' |
| - Albert Einstein |
| ARPA : iarocci@eneevax.umd.edu |
| UUCP : [seismo,allegra,rlgvax]!umcp-cs!eneevax!iarocci |

------------------------------

Date: 15 Oct 86 21:12 EDT
From: Dave.Touretzky@A.CS.CMU.EDU
Subject: the definitive connectionist reference

The definitive book on connectionism (as of 1986) has just been published
by MIT Press. It's called "Parallel Distributed Processing: Explorations in
the Microstructure of Cognition", by David E. Rumelhart, James H. McClelland,
and the PDP research group. If you want to know about connectionist models,
this is the book to read. It comes in two volumes, at about $45 for the set.

For other connectionist material, see the proceedings of IJCAI-85 and the
1986 Cognitive Science Conference, and the January '85 issue of the
journal Cognitive Science.

-- Dave Touretzky

PS: NO, CONNECTIONISM IS NOT THE SAME AS PERCEPTRONS. Perceptrons were
single-layer learning machines, meaning they had an input layer and an
output layer, with exactly one learning layer in between. No feedback paths
were permitted between units -- a severe limitation. The learning
algorithms were simple. Minsky and Papert wrote a well-known book showing
that perceptrons couldn't do very much at all. They can't even learn the
XOR function. Since they had initially been the subject of incredible
amounts of hype, the fall of perceptrons left all of neural network
research in deep disrepute among AI researchers for almost two decades.
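The XOR limitation is easy to demonstrate: XOR is not linearly separable, so
no setting of a single threshold unit's weights classifies all four cases.
A small Python sketch of the classic perceptron learning rule (the learning
rate and epoch count are arbitrary) converges on AND but never reaches 100%
on XOR:

```python
def train_perceptron(samples, epochs=25, rate=0.1):
    """Perceptron learning rule on one threshold unit with two
    inputs. Returns the fraction of samples classified correctly
    after training."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    ) / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

AND is linearly separable, so the rule provably converges; on XOR at most
three of the four cases can ever be right.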

In contrast to perceptrons, connectionist models have unrestricted
connectivity, meaning they are rich in feedback paths. They have rather
sophisticated learning rules, some of which are based on statistical
mechanics (the Boltzmann machine learning algorithm) or information
theoretic measures (G-maximization learning). These models have been
enriched by recent work in physics (e.g., Hopfield's analogy to spin
glasses), computer science (simulated annealing search, invented by
Kirkpatrick and adapted to neural nets by Hinton and Sejnowski), and
neuroscience (work on coarse coding, fast weights, pre-synaptic
facilitation, and so on.)
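As a concrete example of a model with the feedback connectivity described
above, a tiny Hopfield-style associative memory (a minimal Python sketch,
not any of the systems cited here) stores patterns in Hebbian weights and
lets repeated updates pull a corrupted probe back to the nearest stored
pattern:

```python
def hopfield_recall(patterns, probe, steps=5):
    """Store +/-1 patterns in symmetric Hebbian weights (zero
    diagonal), then run synchronous threshold updates on the probe.
    The feedback connections settle into a stored pattern."""
    n = len(probe)
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    state = list(probe)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state
```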

Many connectionist models perform cognitive tasks (i.e., tasks related to
symbol processing) rather than pattern recognition; perceptrons were mostly
used for pattern recognition. Connectionist models can explain certain
psychological phenomena that other models can't; for an example, see
McClelland and Rumelhart's word recognition model. The brain is a
connectionist model. It is not a perceptron.

Perhaps the current interest in connectionist models is just a passing fad.
Some folks are predicting that connectionism will turn out to be another
spectacular flop -- Perceptrons II. At the other extreme, some feel the
initial successes of ``the new connectionists'' may signal the beginning of
a revolution in AI. Read the journals and decide for yourself.

------------------------------

Date: 15 Oct 86 06:17:23 GMT
From: zeus!levin@locus.ucla.edu (Stuart Levine)
Subject: Re: Expert System Wanted

In article <2200003@osiris> chandra@osiris.CSO.UIUC.EDU writes:

>There is an expert system shell for CMS. It is called PRISM.
>PRISM is also called ESE (expert system environment).
>ESE is available from IBM itself. It is written in lisp and was most
>probably developed at IBM Watson Research Labs.
>
Could you give us more info? When we checked into the availability
of PRISM, we found that IBM was NOT making it available.
It would be interesting to know if that has changed.

Also, does it run in LISP (as in a lisp that someone would actually
own), or in IBM LISP?

------------------------------

Date: 15 October 1986, 20:54:09 EDT
From: "Fredrick J. Damerau" <DAMERAU@ibm.com>
Subject: correction on ESE

ESE (Expert System Environment) is actually PASCAL-based, not LISP-based,
and was developed at the Palo Alto Scientific Center, not Yorktown Research.

Fred J. Damerau, IBM Research (Yorktown)

------------------------------

Date: Wed 15 Oct 86 17:05:23-PDT
From: Matt Pallakoff <PALLAKOFF@SUMEX-AIM.ARPA>
Subject: corrections to Navin Chandra note on AIList Digest

Navin,

I saw your note on IBM's expert system environment (ESE). I worked
one summer with the group that developed it. First, it's no longer
called PRISM. They changed that fine name, used throughout the
research and development, to Expert System Development Environment/
Expert System Consultation Environment, the two subsystems of ESE which
are sold separately or together. (I don't think they have reversed this
decision since I left.)
Second, it is written in PASCAL, not LISP. Finally, it was created
at the IBM Research Center in Palo Alto, California (where I worked).
I don't know a tremendous amount about it (having spent only a couple
months working on interfaces to it) but I might be able to give you some
general answers to specific questions about it.

Matt Pallakoff

------------------------------

End of AIList Digest
********************
