IRList Digest           Monday, 4 Nov 1985      Volume 1 : Issue 19 

Today's Topics:
   AI-ED Extracts - Seminar: Learning from Multiple Analogies
                  - Seminar: User Interface Management Systems
                  - Query & Announcement: Scheme for Thermodynamics
                  - Query: Tutorial Dialog Systems Wanted
                  - Description: GUIDON Project
                  - Query: Statistics approaches to medical diagnosis?
   Cog-Sci Seminars - IFO DB Model, Brittleness/Tunnel-Vision in Know. Rep.,
                      Government-Binding Parser
                    - Knowledge-Based Approach to Lang. Production
                    - Edge Detection
                    - Short Term Memory, Characterizing Expert Systems,
                      Lang. Lab Materials, Interfaces Handling Misconceptions

----------------------------------------------------------------------

From: Bernard Silver <SILVER%mit-mc.arpa@CSNET-RELAY>
Date: Fri, 18 Oct 85 23:44:39 EDT
Subject: Seminar - Learning From Multiple Analogies (GTE)


GTE LABS INCORPORATED
MACHINE LEARNING SEMINAR

Title: Learning from Multiple Analogies

Speaker: Mark H. Burstein
BBN Labs.

Date: Monday October 21, 10am

Place: GTE Labs
40 Sylvan Rd, Waltham MA 02254


Students learning about an unfamiliar subject under the guidance
of a teacher or textbook are often taught basic concepts by analogies
to things they are more familiar with. Although this seems to
be a very powerful form of instruction, the process by which students
make use of this kind of instruction has been little studied by AI
learning theorists. A cognitive process model of how students make
use of such analogies will be presented. The model was motivated by
examples of the behavior of several students who were tutored on the
programming language BASIC, and focusses in detail on the development
of knowledge about the concept of a program variable, and its use in
assignment statements. It suggests how several analogies can be used
together to form new concepts where no one analogy would have been
sufficient. Errors produced by reasoning from one analogy can
be corrected by another.

As an illustration of the main principles of the model, a computer
program, CARL, is presented that learns to use variables in BASIC
assignment statements. While learning about variables, CARL generates
many of the same erroneous hypotheses seen in the recorded protocols
of students learning the same material given the same set of analogies.
The learning process results in a single target model that retains
some aspects of each of the analogies presented.
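
For a rough feel of the claim in the last sentences, here is a toy
sketch in Python (not CARL's actual mechanism; the hypotheses are
invented): two analogical source models are merged so that one corrects
an error introduced by the other, while the target model keeps parts of
each.

    # Toy sketch: hypotheses about a program variable drawn from a "box"
    # analogy are corrected by a "labeled slot" analogy.
    box_analogy = {
        "holds a value": True,
        "holds many values at once": True,   # erroneous hypothesis
    }
    slot_analogy = {
        "holds many values at once": False,  # the correction
        "new value replaces the old": True,
    }

    def merge(primary, corrective):
        """Start from one analogy; let the other override conflicts."""
        model = dict(primary)
        model.update(corrective)
        return model

    print(merge(box_analogy, slot_analogy))
    # {'holds a value': True, 'holds many values at once': False,
    #  'new value replaces the old': True}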

For more information, contact Bernard Silver (617) 576-6212

------------------------------

From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY>
Date: Sat, 19 Oct 85 22:07 EDT


USER INTERFACE MANAGEMENT SYSTEMS

MARK GREEN
DEPARTMENT OF COMPUTING SCIENCE
UNIVERSITY OF ALBERTA

The user interface is the part of the program that stands between the user and
the other components of the program. Experience has shown that the
construction of good user interfaces is both expensive and time-consuming.
It has also been observed that the basic structure of user interfaces does not
change radically over a wide range of applications. Recently there has been a
trend towards isolating the user interface in a separate component designed by
an expert in human-computer interaction. This leads to the idea of a user
interface management system. A user interface management system (UIMS)
facilitates the design, implementation and maintenance of user interfaces. The
main goal of UIMSs is to reduce the amount of time and effort required to
produce a user interface.

In this talk the basic principles of user interface management systems are
presented along with a discussion of the University of Alberta UIMS. This UIMS
is based on the Seeheim model of user interfaces developed at the
EUROGRAPHICS/IFIPS Workshop on User Interface Management in November 1983. The
University of Alberta UIMS supports the interactive design of the physical
representation of the user interface, three notations for defining the dialogue
structure, and a flexible interface between the user interface and the other
components of the program.

3pm Tuesday, October 22, 1985
Alumni Hall - Towne Building
University of Pennsylvania

------------------------------

From: meltsner%athena.mit.edu@CSNET-RELAY
Subject: Scheme for Thermo.
Date: 21 Oct 85 14:22:51 EDT (Mon)


This is both a request and an announcement:

I have been working on a "microworld" for thermodynamics. The world
would support common thermodynamic objects like gases and solids, and would
allow objects to be interconnected and equilibria found under a variety
of constraints on the transfer of mass, energy, volume, etc. ("would" because
not all features have been implemented)
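
As a minimal sketch of what such a microworld looks like (in Python
here, with invented names and numbers; the actual program is written in
T, a Scheme dialect), consider two monatomic ideal gases connected and
brought to thermal equilibrium at constant volume:

    from dataclasses import dataclass

    R = 8.314  # gas constant, J/(mol K)

    @dataclass
    class Gas:
        moles: float
        temperature: float  # K

        @property
        def heat_capacity(self):
            # constant-volume, monatomic ideal gas: Cv = (3/2) R per mole
            return 1.5 * R * self.moles

    def equilibrate(a, b):
        """Energy-conserving thermal equilibrium of two connected gases."""
        t = (a.heat_capacity * a.temperature + b.heat_capacity * b.temperature) \
            / (a.heat_capacity + b.heat_capacity)
        a.temperature = b.temperature = t
        return t

    neon = Gas(moles=10, temperature=298)   # cf. "make a gas with 10 moles..."
    argon = Gas(moles=5, temperature=400)
    print(equilibrate(neon, argon))         # about 332 K, the weighted average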

It has proved to be useful in an introductory (graduate) level thermo course,
and I am currently planning to rewrite it to remedy a number of deficiencies.

Questions:

1) Would you prefer an English-language (command-driven) interface or a
windowed one? Does "make a gas with 10 moles of neon temperature 298" make
more sense than a dialog box after one has selected a menu item "Make Gas"?

2) Are windowed graphics useful? What sort of graphics? (x-time, x-y, contour,
dials, gauges, empty/full bar indicators)

3) Would you sacrifice speed for numeric accuracy? (1% accuracy vs. .01% might
triple the iteration time)

4) How important are on-line help, explanations of internal processes,
examples?

5) What would you like to have it run on? (4.2 Unix, Macintosh, IBM, ??)

6) What would it have to run on to be useful for use in a class at your
institution?

The program is currently written in T, and will be moved to Cscheme (MIT
Vax 4.2 Unix Scheme in C).

The program is available for trade or (in a few months) for a nominal
fee from our group at MIT. Planned expansions include the ability to
change thermodynamic state variables, a thermo methods database
(rule-driven thermo), and a clock for kinetic problems.

Please send all answers and questions to:

meltsner@athena.MIT.EDU

or Ken Meltsner
MIT Room 13-5142
Cambridge, MA 02139
617-253-3139


Ken

------------------------------

From: Dave Taylor <hpcnou!dat%hplabs.csnet@CSNET-RELAY>
Date: Wed, 23 Oct 85 11:07:38 MDT
Subject: Tutorial Dialog Systems wanted


I'm working on a paper that is a summary of current systems available
for tutorial-style dialogs and would like to get information on languages
that I haven't come across yet.

Before I list them, though, I've already ruled out most of the
AI natural language pre-processors since they are all too formalized
and one of the goals of the paper is to show what the best language
would be for non-programmers to use. The application is very simple
CAI stuff - dialogs like:

Computer: What is your name?

Person: joe

Computer: hello joe.

Computer: please tell me your name again

Person: jack

Computer: last time you said 'joe'. I don't understand!


(quite rudimentary).
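
The whole of that behavior fits in a few lines; a sketch in Python (not
one of the surveyed languages, just for concreteness):

    # Remember each answer; object when a later answer contradicts it.
    memory = {}

    def ask(question, key):
        answer = input(question + " ").strip()
        if key in memory and memory[key] != answer:
            print("last time you said '%s'. I don't understand!" % memory[key])
        else:
            memory[key] = answer
            print("hello %s." % answer)

    ask("What is your name?", "name")
    ask("please tell me your name again", "name")

The point of the survey is precisely how awkward or natural this kind
of thing is for a non-programmer to write in each candidate language.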

Anyway, the languages that I've looked at so far are:

Pilot, Basic, Lisp, Prolog, Snobol, Bourne Shell (surprisingly lucid),
Pascal, Ada, Icon, Mesa, and a language from IBM called "Rexx".

and, still queued to be looked at:

Logo, the Berkeley Learn system (with help from someone in AI-ED!)


Thanks a lot for any help you can offer!

-- Dave Taylor

hpcnof!dat@HPLABS.CSNET

or ihnp4!hpfcla!d_taylor

------------------------------

From: Mark Richer <RICHER%sumex-aim.arpa@CSNET-RELAY>
Date: Fri 25 Oct 85 13:40:35-PDT
Subject: Re: Request for Information (CAL in Medicine)


Here's some information on the GUIDON project, including references:

Mark Richer, Oct. 25th, 1985

The GUIDON project is an applied AI research project at the Knowledge
Systems Laboratory, Computer Science Department, Stanford University.
This project is investigating strategies for teaching diagnostic
reasoning (specifically, medical diagnosis) using computers and
knowledge-based systems technology. Part of the effort in this project
has been to extend the capabilities of KB systems technology for the
purpose of explanation and instruction. NEOMYCIN, a knowledge-based
diagnostic consultation system, has been implemented and is the
foundation for a new series of instructional programs, collectively
called GUIDON-2. These programs are substantially different in design
from the original GUIDON tutoring system that worked in conjunction
with EMYCIN (e.g., MYCIN) systems. The director of the project is
William J. Clancey, Ph.D., Senior Research Associate, Computer Science
Department, Stanford University. There are about a dozen people
associated with this project at present including a physician. Below
is a list of references that might be of interest to people doing work
in computer-based instruction. Papers that are listed as HPP or KSL
technical reports are available by writing or calling Knowledge Systems
Laboratory, 701 Welch Road, Bldg. C, Palo Alto, CA 94304, (415)
497-3444. STAN-CS papers (I think) are available through the Computer
Science Department, Stanford University, Stanford CA 94305.

WARNING: Do not send requests for papers to me; I'm afraid I will get
swamped. Try to find the reference yourself if it was published,
otherwise request it directly by calling or mailing to KSL or Stanford
CS. (KSL is part of the CS Dept, but we are housed in a separate
building at present and we maintain our series of technical reports.)
Thank you.

References: [these are not in any particular order]

Clancey, W.J. (1979) Transfer of rule-based expertise through a
tutorial dialogue. Computer Science Doctoral Dissertation, Stanford
University, NOT Available as a tech report. Revised version, MIT
Press, in preparation.

Clancey, W.J. (1979) Tutoring rules for guiding a case method
dialogue. Int J of Man-Machine Studies, 11, 25-49. Also in Intelligent
Tutoring Systems, eds. Sleeman and Brown, Academic Press, London,
1982.

Clancey, W.J. (1982) Overview of GUIDON.
Journal of Computer-Based Instruction,
Summer 1983, Volume 10, Numbers 1 & 2, pages 8-15.
Also in The Handbook of Artificial Intelligence, Volume 2,
eds. Barr and Feigenbaum, Kaufmann, Los Altos.
Also STAN-CS-83-997, HPP-83-42.

Richer, M. and Clancey, W. J. (1985)
GUIDON-WATCH: A graphic interface for browsing and viewing a
knowledge-based system. To appear in IEEE Computer Graphics
and Applications, November 1985. Also KSL 85-20.

Clancey, W.J., Bennett, J., and Cohen, P. (1979)
Applications-oriented AI Research: Education.
In The Handbook of Artificial Intelligence, Chapter IX,
Volume 2, eds. Barr and Feigenbaum, Kaufmann, Los Altos.
Also STAN-CS-79-749, HPP-79-17.

Clancey, W.J., Shortliffe, E.H., and Buchanan, B.G. (1979)
Intelligent computer-aided instruction for medical diagnosis.
In Readings in Medical Artificial Intelligence: The First
Decade, eds. W.J. Clancey and E.H. Shortliffe, Addison-Wesley, 1984.
Also Proceedings of the Third Annual Symposium on
Computer Applications in Medical Care,
Silver Spring, Maryland, October 1979, pp. 175-183.
Also HPP 80-10.

Clancey, W.J. and Letsinger, R. (1981)
NEOMYCIN: Reconfiguring a rule-based expert system for
application to teaching. In Readings in Medical Artificial
Intelligence: The First Decade,
eds. W.J. Clancey and E.H. Shortliffe, Addison-Wesley, 1984.
Proceedings of Seventh IJCAI, 1981, pp. 829-836.
Also STAN-CS-82-908, HPP 81-2.

Clancey, W.J. (1981)
Methodology for Building an Intelligent Tutoring System.
In Method and Tactics in Cognitive Science,
eds. Kintsch, Miller, and Polson, Lawrence Erlbaum Associates,
Hillsdale, New Jersey, 1984.
Also STAN-CS-81-894, HPP 81-18.

Clancey, W.J. (1984)
Acquiring, representing, and evaluating a competence model of
diagnosis.
In Contributions to the Nature of Expertise, eds. Chi,
Glaser, and Farr, in preparation.
Also HPP-84-2.

Clancey, W.J. (1979)
Dialogue Management for Rule-based Tutorials.
Proceedings of Sixth IJCAI, 1979, pp. 155-161.

London, B. & Clancey W. J. (1982)
Plan recognition strategies in student modeling: Prediction
and description.
Proceedings of AAAI-82, pp. 335-338.
Also STAN-CS-82-909, HPP 82-7.

Clancey, W.J. (1983)
Communication, Simulation, and Intelligent Agents:
Implications of Personal Intelligent Machines for Medical
Education.
Proceedings of AAMSI-83, pp. 556-560.
Also HPP-83-3.

Many people have influenced our thinking; in particular, the following
paper may be helpful for understanding our current views on
computer-based learning:

@Inproceedings[BROWN83,
key="Brown"
,Author="Brown, J.S."
,title="Process versus product--a perspective on tools for
communal and informal electronic learning"
,booktitle="Education in the Electronic Age"
,note="Proceedings of a conference sponsored by the
Educational Broadcasting Corporation, WNET/Thirteen Learning
Lab, NY, pp. 41-58."
,month=July
,year=1983]

The work described in this paper has its home at the Xerox
Palo Alto Research Center (PARC).

------------------------------

From: Stuart Crawford <GA.SLC%su-forsythe.arpa@CSNET-RELAY>
Date: Sat, 26 Oct 85 15:16:37 PDT

I am interested in obtaining pointers to recent references regarding
the known pros and cons of using pure statistical approaches to
medical diagnosis (such as the use of classification and regression
trees) as opposed to expert systems approaches. In particular, I
am interested in any literature discussing the combined use of such
approaches: for example, using classification trees to help with the
fine tuning of production rules, or using classification rules to
augment current knowledge bases. I know much more about the statistical
approaches than the AI approaches, but it seems that some
interdisciplinary technique might be fruitful.
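
One concrete version of the first combination is to flatten each
root-to-leaf path of a classification tree into a production rule that
could seed or tune a rule base. A toy sketch in Python, with an
invented tree and attribute names:

    # A tree is (test, yes-subtree, no-subtree); a leaf is a diagnosis.
    tree = ("temp>38", ("wbc>11000", "bacterial", "viral"), "healthy")

    def tree_to_rules(node, conditions=()):
        if isinstance(node, str):            # leaf: emit one rule
            return ["IF %s THEN %s" % (" AND ".join(conditions) or "TRUE", node)]
        test, yes, no = node
        return (tree_to_rules(yes, conditions + (test,)) +
                tree_to_rules(no, conditions + ("NOT " + test,)))

    for rule in tree_to_rules(tree):
        print(rule)
    # IF temp>38 AND wbc>11000 THEN bacterial
    # IF temp>38 AND NOT wbc>11000 THEN viral
    # IF NOT temp>38 THEN healthy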

Stuart Crawford

------------------------------

From: Peter de Jong <DEJONG%MIT-OZ%mit-mc.arpa@CSNET-RELAY>
Date: Fri, 25 Oct 1985 12:44 EDT
Subject: Cognitive Science Calendar
Reply-to: Cog-Sci-Request%MIT-OZ%mit-mc.arpa@CSNET-RELAY

-------------------------------------------------
Monday 28, October 2:15pm (refreshments 2:00pm) Room: NE43-512A


TWO APPLICATIONS OF THE IFO DATABASE MODEL

RICHARD HULL
Computer Science Department
University of Southern California
Los Angeles, CA 90089-0782

ABSTRACT

The IFO database model, recently introduced by the speaker, provides
a clean, modular synthesis of many of the basic ideas found in the
object-based, semantic database models. The IFO model supports the
fundamental constructs of ISA relationships, functional relationships,
and object construction; and ensures good schema design through a
small set of restrictions on how these constructs are combined.
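
Without attempting the IFO formalism itself, a toy rendering in Python
of those three constructs (types and attributes invented for
illustration):

    from dataclasses import dataclass, field

    @dataclass
    class ObjectType:
        name: str
        isa: list = field(default_factory=list)         # ISA edges
        attributes: dict = field(default_factory=dict)  # functional relationships

    # Object construction by aggregation; STUDENT ISA PERSON.
    person = ObjectType("PERSON", attributes={"name": "STRING", "birthyear": "INT"})
    student = ObjectType("STUDENT", isa=[person], attributes={"advisor": "PERSON"})

    def all_attributes(t):
        """Functional relationships are inherited along ISA edges."""
        attrs = {}
        for parent in t.isa:
            attrs.update(all_attributes(parent))
        attrs.update(t.attributes)
        return attrs

    print(all_attributes(student))
    # {'name': 'STRING', 'birthyear': 'INT', 'advisor': 'PERSON'}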

The talk will focus on two applications of the IFO model. The first
is SNAP, an interactive, graphics-based system currently under
development, which provides natural mechanisms for schema design, schema
browsing, and for querying the database. SNAP has several advantages
over other interactive interfaces for schema access because of unique
features of the IFO model.

The second application focusses on update propagation in semantic
database models. This is of particular interest because of the
complex interconnections between data which can be represented by
schemas from these models. A mathematical result characterizing
the propagation of atomic update requests to IFO schemas will be
presented.


HOST: Professor Rishiyur Nikhil

---------------------------------------------------------------

Wednesday 30, October 4:00pm Room: 405 Robinson Hall
Northeastern University
360 Huntington Ave.
Boston MA


Northeastern University
College of Computer Science Colloquium


Brittleness, Tunnel Vision, Machine Learning and
Knowledge Representation

Prof. Steve Gallant
Northeastern University


A system is brittle if it fails when presented with slight deviations from
expected input. This is a major problem with knowledge representation schemes
and particularly with expert systems which use them.

This talk defines the notion of Tunnel Vision and shows it to be a major
cause of brittleness. As a consequence it will be claimed that commonly
used schemes for machine learning and knowledge representation are
predisposed toward brittle behavior. These include decision trees, frames,
and disjunctive normal form expressions.

Some systems which are free from tunnel vision will be described.


INFO: Carole D Hafner <HAFNER%northeastern.csnet@CSNET-RELAY.ARPA>

------------------------------

Wednesday 30, October 4:00pm Room: 20E-207 (Philosophy Lounge)

A Government-Binding Parser

Steven Abney and Jennifer Cole
Department of Linguistics, MIT

We report on work in progress in the development of a model
of natural language parsing which incorporates the Government-Binding
theory of grammar. The computational model we adopt is the Actor
theory of distributed computation, which is being developed by the
Apiary project of the MIT AI Laboratory. We contrast parsing in a
principle-based framework such as that of Government-Binding theory
with parsing in a rule-based framework such as Context-Free Grammar.
We discuss how the Actor model provides a natural way of approaching
the parsing problem in a principle-based theory, and present our model
in moderate detail. Issues such as the relation between parser and
grammar are also addressed.

------------------------------

From: Peter de Jong <DEJONG%MIT-OZ%mit-mc.arpa@CSNET-RELAY>
Date: Wed, 30 Oct 1985 09:55 EST
Subject: Cognitive Science Calendar


Friday 1, November 10:30am Room: BBN Labs, 10 Moulton Street,
3rd floor large conference room

BBN Artificial Intelligence Seminar

"A Knowledge-Based Approach to Language Production"

Paul Jacobs

The development of natural language interfaces to Artificial
Intelligence systems depends on the representation of knowledge.
A major impediment to building such systems has been the difficulty of
adding sufficient linguistic and conceptual knowledge to extend and
adapt their capabilities. This difficulty has been apparent in systems
that perform the task of language production, i.e., the generation of
natural language output to satisfy the communicative requirements of a
system.

The problem of extending and adapting linguistic capabilities is
rooted in the problem of integrating abstract and specialized
knowledge and applying this knowledge to the language processing task.
Three aspects of a knowledge representation system are highlighted by
this problem: hierarchy, or the ability to represent relationships
between abstract and specific knowledge structures; explicit
referential knowledge, or knowledge about relationships among concepts
used in referring to concepts; and uniformity, the use of a common
framework for linguistic and conceptual knowledge. The knowledge-based
approach to language production addresses the language
generation task from within the broader context of the representation
and application of conceptual and linguistic knowledge.

This knowledge-based approach has led to the design and
implementation of a knowledge representation framework, called Ace,
geared towards facilitating the interaction of linguistic and
conceptual knowledge in language processing. Ace is a uniform,
hierarchical representation system, which facilitates the use of
abstractions in the encoding of specialized knowledge and the
representation of the referential and metaphorical relationships among
concepts. A general purpose natural language generator, KING
(Knowledge INtensive Generator), has been implemented to apply
knowledge in the Ace form. The generator is designed for knowledge
intensivity and incrementality, to exploit the power of the Ace
knowledge in generation. The generator works by applying structured
associations, or mappings, from conceptual to linguistic structures,
and combining these structures into grammatical utterances. This has
proven to be a simple but powerful mechanism which is relatively easy
to adapt and extend.
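
A much-simplified picture of generation by structured associations (the
mapping and concept here are invented; Ace and KING are far richer):

    # One mapping pairs a conceptual pattern with a linguistic frame;
    # generation instantiates the frame and combines the fragments.
    concept = {"event": "transfer", "agent": "John",
               "object": "a book", "recipient": "Mary"}

    mappings = {
        "transfer": lambda c: [c["agent"], "gives", c["object"],
                               "to", c["recipient"]],
    }

    def generate(c):
        return " ".join(mappings[c["event"]](c)) + "."

    print(generate(concept))   # John gives a book to Mary.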

------------------------------

From: Peter de Jong <DEJONG%MIT-OZ%mit-mc.arpa@CSNET-RELAY>
Date: Fri, 1 Nov 1985 11:11 EST
Subject: Cognitive Science Calendar

Monday 4, November 4:00pm Room: NE43, 8th floor playroom

SEMINAR IN VISUAL INFORMATION PROCESSING

"Edge Detection: Two New Approaches"

Michael Gennert


The detection of edges in images is a 30-year-old problem that has still
not been completely solved. Many attempts have been made to
solve it, ranging from the ad hoc to methods based on information theory.
In this talk I will review the theory of edge detection, mention some
of the better methods of edge detection, and propose two new classes
of edge detector.

For simplicity I will consider only the detection of step edges.
Step edges can be detected by convolving the image with a smoothing
filter (such as a Gaussian) and identifying points in the smoothed
image where the gradient is large. This can be done either by
finding points where the first derivative is large or where the second
derivative crosses zero. These smoothing and differentiating operations
can be combined into a single convolution operation. This is the
general approach taken by Marr and Hildreth, Modestino and Fries,
Canny, and others.
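
A one-dimensional illustration of that combined operation (parameter
values invented), in Python:

    import numpy as np

    def gaussian_derivative_kernel(sigma, radius=None):
        """First derivative of a normalized Gaussian, sampled at integers."""
        if radius is None:
            radius = int(4 * sigma)
        x = np.arange(-radius, radius + 1, dtype=float)
        g = np.exp(-x**2 / (2 * sigma**2))
        return -x / sigma**2 * g / g.sum()

    # A noisy step edge at index 50.
    signal = np.concatenate([np.zeros(50), np.ones(50)])
    signal += np.random.default_rng(0).normal(0, 0.1, signal.size)

    # Smoothing and differentiation as a single convolution; the edge
    # appears as a peak in the magnitude of the response.
    response = np.convolve(signal, gaussian_derivative_kernel(2.0), mode="same")
    print("edge near index", int(np.argmax(np.abs(response))))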

Canny developed his edge detector by suggesting several performance
criteria, and solving the resulting difficult optimization problem.
Unfortunately, he carried out his analysis in only one dimension,
whereas the problem is inherently two-dimensional. I will discuss
extending Canny's analysis to two dimensions. The two-dimensional
extension requires solving fourth-order nonlinear partial differential
equations, showing that Canny was right not to consider doing it.

Another approach results from looking at the detection of half-edges
rather than edges. This leads to a generalization of the derivative of
a Gaussian operator. This operator is capable of detecting changes in
image intensity even when the usual assumptions of image analyticity
do not apply, such as at corners and vertices. The main drawbacks are
the increased computational requirements of the operator, and its lower SNR.

------------------------------

From: Peter de Jong <DEJONG%MIT-OZ%mit-mc.arpa@CSNET-RELAY>
Date: Fri, 1 Nov 1985 09:15 EST
Subject: Cognitive Science Calendar

Sunday 3, November 6:00 pm Room: Dunster House Small Dining Room
Harvard
5:30 dinner [can be purchased]
6:00 talk.

HARVARD-RADCLIFFE COGNITIVE SCIENCES SOCIETY

"Why is short term memory so accurate"

Professor Mary Potter
Psychology Department
MIT


info: ETZI@OZ

------------------------------

Monday 4, November 10:30am Room: BBN Labs, 10 Moulton Street,
3rd floor large conference room

BBN Laboratories
Science Development Program
AI Seminars

Generic Tasks in Knowledge-Based Reasoning: Characterizing
and Designing Expert Systems at the "Right" Level of Abstraction

Prof. B. Chandrasekaran
Laboratory for Artificial Intelligence Research
Department of Computer and Information Science
The Ohio State University


We outline the elements of a framework for expert system design that
we have been developing in our research group over the last several
years. This framework is based on the claim that complex knowledge-based
reasoning tasks can often be decomposed into a number of generic tasks,
each with associated types of knowledge and a family of control regimes.
At different stages in reasoning, the system will typically engage in
one of the tasks, depending upon the knowledge available and the state
of problem solving. The advantages of this point of view are manifold:
(i) Since typically the generic tasks are at a much higher level of
abstraction than those associated with first generation expert system
languages, knowledge can be represented directly at the level
appropriate to the information processing task. (ii) Since each of the
generic tasks has an appropriate control regime, problem solving
behavior may be more perspicuously encoded. (iii) Because of a richer
generic vocabulary in terms of which knowledge and control are
represented, explanation of problem solving behavior is also more
perspicuous. We briefly describe six generic tasks that we have found
very useful in our work on knowledge-based reasoning: classification,
state abstraction, knowledge-directed retrieval, object synthesis by
plan selection and refinement, hypothesis matching, and assembly of
compound hypotheses for abduction.
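
To make one of these concrete: hierarchical classification is typically
run under an establish-refine control regime, where a node first
establishes itself against the data and, if successful, refines by
invoking its children. A toy sketch in Python with an invented fault
hierarchy:

    hierarchy = {
        "fault": ["electrical", "mechanical"],
        "electrical": ["short-circuit"],
        "mechanical": [],
        "short-circuit": [],
    }
    # Stub evidence: a real system would match each node against the case.
    established = {"fault": True, "electrical": True,
                   "mechanical": False, "short-circuit": True}

    def establish_refine(node, result=None):
        result = [] if result is None else result
        if established.get(node, False):          # establish
            result.append(node)
            for child in hierarchy[node]:         # refine
                establish_refine(child, result)
        return result

    print(establish_refine("fault"))  # ['fault', 'electrical', 'short-circuit']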

------------------------------

Tuesday 5, November 10:30am Room: BBN Labs, 10 Moulton Street,
2nd floor large conference room

BBN Laboratories
Science Development Program
AI Seminars

The Next Generation of Language Lab Materials: Developing
Prototypes at MIT

Prof. Janet Murray
Dept. of Humanities, MIT


MIT's Athena Language Learning Project is a five-year enterprise
whose aim is to develop prototypes of the next generation of
language-lab materials, particularly conversation-based exercises using
artificial intelligence to analyse and respond to typed input. The
exercises are based upon two systematized methods of instruction that
are specialties at MIT: discourse theory and simulations. The project
is also seeking to incorporate two associated technologies: digital
audio and interactive video. The digital audio sub-project is
developing exercises for intonation practice, initially focusing on
Japanese speakers learning English. The interactive video component of
the project consists of preparation of a demonstration disc which
features a variety of interactive video approaches including enhancement
of the text-based simulations and presentation of dense conversational
material in natural settings. The project is being developed on the
Athena system at MIT, and is based upon the model of a near-future
language lab/classroom environment that will include stations capable of
providing interactive video, digital audio, and AI-based exercises.

------------------------------

Friday 8, November 10:30am Room: BBN Labs, 10 Moulton Street,
3rd floor large conference room

BBN Laboratories
Science Development Program
AI Seminars

Correcting Object Related Misconceptions

Prof. Kathleen F. McCoy
University of Delaware


Analysis of a corpus of naturally occurring data shows that users
conversing with a database or expert system are likely to reveal
misconceptions about the objects modelled by the system. Further
analysis reveals that the sort of responses given when such
misconceptions are encountered depends greatly on the discourse context.

This work develops a context-sensitive method for automatically
generating responses to object-related misconceptions with the goal of
incorporating a correction module in the front-end of a database or
expert system. The method is demonstrated through the ROMPER system
(Responding to Object-related Misconceptions using PERspective) which is
able to generate responses to two classes of object-related
misconceptions: misclassifications and misattributions.

The transcript analysis reveals a number of specific strategies used
by human experts to correct misconceptions, where each different
strategy refutes a different kind of support for the misconception. In
this work each strategy is paired with a structural specification of the
kind of support it refutes. ROMPER uses this specification, and a model
of the user, to determine which kind of support is most likely. The
corresponding response strategy is then instantiated.
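
Schematically (strategy names and the user-model stub are invented
here), the pairing works like this:

    # Each correction strategy refutes one kind of support for a
    # misconception; a user model picks the support judged most likely.
    strategies = {
        "wrong-superclass": "Point out the attribute that excludes that class.",
        "shared-attribute": "Acknowledge the similarity, then contrast the objects.",
        "wrong-attribute":  "Deny the attribute and state the correct one.",
    }

    def likely_support(user_model, misconception):
        # Stub: ROMPER consults a richer model plus discourse context.
        return user_model.get(misconception, "wrong-attribute")

    user_model = {"a whale is a fish": "shared-attribute"}
    print(strategies[likely_support(user_model, "a whale is a fish")])
    # Acknowledge the similarity, then contrast the objects.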

The above process is made context sensitive by a proposed addition to
standard knowledge-representation systems termed "object perspective."
Object perspective is introduced as a method for augmenting a standard
knowledge-representation system to reflect the highlighting effects of
previous discourse. It is shown how the resulting highlighting can be
used to account for the context-sensitive requirements of the correction
process.

------------------------------

END OF IRList Digest
********************
