Machine Learning List: Vol. 3 No. 23
Friday, Dec. 13, 1991

Contents:
Knowledge Discovery In Databases
Workshop Announcements at ML-92

The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>

------------------------------
Date: Thu, 12 Dec 91 11:39:46 EST
From: Gregory Piatetsky-Shapiro <gps0%eureka@gte.COM>
Subject: Book announcement


K N O W L E D G E   D I S C O V E R Y   I N   D A T A B A S E S

Edited by Gregory Piatetsky-Shapiro and William J. Frawley
AAAI Press / The MIT Press
525 pages. paperback.

Addressed to computer scientists, MIS professionals, and those interested
in machine learning and databases, this collection is the first to bring
together leading edge research in the exciting field of discovery in
databases. Its thirty chapters span data-driven, knowledge-based, and
integrated approaches, dealing with the discovery of quantitative and
qualitative laws, data summarization, methodology, and application issues.

From the foreword by J.R. Quinlan:
"Knowledge Discovery in Databases is a pretty ambitious title
... but in the best sense of capturing the essence of
something that is both achievable and worth attaining.
The 1990s should see the widespread exploitation of knowledge
discovery as an aid to assembling knowledge bases."

Contents ------------------------

Foreword, J. R. Quinlan
1. Knowledge Discovery in Databases - An Overview.
W. Frawley, G. Piatetsky-Shapiro, C. Matheus.

-------- Part I. Discovery of Quantitative Laws.
2. Interactive Mining of Regularities in Databases. J. Zytkow, J. Baker
3. Discovering Functional Relationships from Observational Data.
Y.-H. Wu, S. Wang
4. Minimal-length Encoding and Inductive Inference. E. Pednault
5. On Evaluation of Domain-Independent Scientific Function-Finding Systems.
C. Schaffer

-------- Part II Discovery of Qualitative Laws.
6. A Statistical Technique for Extracting Classificatory Knowledge
from Databases. K.C.C. Chan, A.K.C. Wong
7. Information Discovery through Hierarchical Maximum Entropy
Discretization and Synthesis. D.K.Y. Chiu, A.K.C. Wong, B. Cheung
8. Learning Useful Rules From Inconclusive Data.
R. Uthurusamy, U. Fayyad, S. Spangler
9. Rule Induction using Information Theory. P. Smyth, R. Goodman
10. Incremental Discovery of Rules and Structure by Hierarchical
and Parallel Clustering. J. Hong, C. Mao
11. The Discovery, Analysis, and Representation of Data Dependencies
in Databases. W. Ziarko

--------- Part III - Using Knowledge in Discovery.
12. Attribute-Oriented Induction in Relational Databases.
Y. Cai, N. Cercone, J. Han
13. Discovery, Analysis, and Presentation of Strong Rules. G. Piatetsky-Shapiro
14. Integration of Heuristic and Bayesian Approaches In a Pattern
Classification System. Q. Wu, P. Suetens, A. Oosterlink
15. Using Functions to Encode Domain and Contextual Knowledge in
Statistical Induction. W. Frawley
16. Integrated Learning in a Real Domain. F. Bergadano, A. Giordana,
L. Saitta, F. Brancadori, D. De Marchi
17. Induction of Decision Trees from Complex Structured Data.
M. Manago, Y. Kodratoff

---------- Part IV - Data Summarization
18. Summary Data Estimation using Decision Trees. M.C. Chen, L. McNamee
19. A Support System for Interpreting Statistical Data. P. Hoschka, W. Kloesgen
20. On Linguistic Summaries of Data. R. Yager

---------- Part V - Domain-Specific Discovery Methods
21. Extracting Reaction Information from Chemical Databases.
C.-S. Ai, P. Blower, R. Ledwith
22. Automated Knowledge Generation From a CAD Database.
A. Gonzalez, H. Myler, M. Towhidnejad, F. McKenzie, R. Kladke
23. Justification-Based Refinement of Expert Knowledge.
J. Schlimmer, T. Mitchell, J. McDermott
24. Rule Discovery for Query Optimization. M. Siegel, E. Sciore, S. Salveter

---------- Part VI - Integrated and Multi-Paradigm Systems
25. Unsupervised Discovery in an Operations Control Setting.
B. Silverman, M. Hieb, T. Mezher
26. Mining for Knowledge in Databases: Goals and General Description of the
INLEN System. K. Kaufman, R. Michalski, L. Kerschberg

---------- Part VII - Methodology and Application Issues
27. Automating the Discovery of Causal Relationships in a Medical Records
Database: The POSCH AI project. J. Long, E. Irani, J. Slagle
28. Discovery of Medical Diagnostic Information: An Overview of Methods and
Results. M. McLeish, P. Yao, M. Garg, T. Stirtzinger
29. The Trade-Off Between Knowledge And Data in Knowledge Acquisition.
B. Gaines
30. Knowledge Discovery as a Threat to Database Security. D. O'Leary
----------------------------------
To order contact
MIT Press
55 Hayward Street
Cambridge, MA 02142, USA
or call toll-free (from USA) 1-800-356-0343

ISBN number 0-262-66070-9, Book code PIAKP
Price: $37.50 + $2.75 postage.
International postage $4.75 - surface, $8.50 airmail


------------------------------
Date: Fri, 13 Dec 91 14:22:07 EST
From: gordon@aic.nrl.navy.MIL
Subject: workshop announcements




***************************************************************************


CALL FOR PAPERS
Informal Workshop on ``Biases in Inductive Learning''
To be held after ML-92

Saturday, July 4, 1992 Aberdeen, Scotland


All aspects of an inductive learning system can bias the learning
process. Researchers to date have studied various biases in inductive
learning such as algorithms, representations, background knowledge,
and instance orders. The focus of this workshop is not to examine
these biases in isolation. Instead, this workshop will examine how
these biases influence each other and how they influence learning
performance. For example, how can active selection of instances in
concept learning influence PAC convergence? How might a domain theory
affect an inductive learning algorithm? How does the choice of
representational bias in a learner influence its algorithmic bias and
vice versa?
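
As a small, purely illustrative aside (not part of this call), the
sketch below shows two such biases interacting in one toy learner: a
Find-S-style conjunctive learner whose representational bias admits
only conjunctions of attribute = value tests and whose algorithmic
bias is to generalize as little as possible over positive examples.
The attribute names and data are hypothetical.

# Toy sketch: a Find-S-style learner illustrating how a representational
# bias (conjunctions of attribute = value tests only) interacts with an
# algorithmic bias (minimal generalization over positive examples).
# Hypothetical names and data, for illustration only.

def find_s(positive_examples):
    """Return the most specific conjunction covering all positive examples.

    A hypothesis is a dict attribute -> value; a dropped attribute means
    'any value is acceptable'.
    """
    examples = iter(positive_examples)
    hypothesis = dict(next(examples))        # start maximally specific
    for example in examples:
        # Algorithmic bias: drop only the tests the new example violates.
        hypothesis = {a: v for a, v in hypothesis.items()
                      if example.get(a) == v}
    return hypothesis

if __name__ == "__main__":
    positives = [
        {"sky": "sunny", "temp": "warm", "wind": "strong"},
        {"sky": "sunny", "temp": "hot",  "wind": "strong"},
    ]
    # Representational bias: a disjunction such as "sunny or cloudy" is
    # unlearnable here no matter what data arrive.
    print(find_s(positives))    # {'sky': 'sunny', 'wind': 'strong'}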

The purpose of this workshop is to draw researchers from diverse
areas to discuss the issue of biases in inductive learning. The
workshop topic is a unifying theme for researchers working in the
areas of reformulation, constructive induction, inverse resolution,
PAC learning, EBL-SBL learning, and other areas. This workshop does
not encourage papers describing system comparisons. Instead, the
workshop encourages papers on the following topics:

- Empirical and analytical studies comparing different biases in
inductive learning and their quantitative and qualitative influence
on each other or on learning performance

- Studies of methods for dynamically adjusting biases, with a
focus on the impact of these adjustments on other biases and on
learning performance

- Analyses of why certain biases are more suitable for particular
applications of inductive learning

- Issues that arise when integrating new biases into an existing
inductive learning system

- Theory of inductive bias

Please send 4 hard copies of a paper (10-15 double-spaced pages,
ML-92 format) or (if you do not wish to present a paper) a description
of your current research to:

Diana Gordon
Naval Research Laboratory, Code 5510
4555 Overlook Ave. S.W.
Washington, D.C. 20375-5000 USA

Electronic and FAX submissions will not be accepted. If you have
any questions about the workshop, please send email to Diana Gordon
at gordon@aic.nrl.navy.mil or call 202-767-2686.

Important Dates:

March 12 - Papers and research descriptions due
May 1 - Acceptance notification
June 1 - Final version of papers due

Program Committee:

Diana Gordon, Naval Research Laboratory
Dennis Kibler, University of California at Irvine
Larry Rendell, University of Illinois
Jude Shavlik, University of Wisconsin
William Spears, Naval Research Laboratory
Devika Subramanian, Cornell University
Paul Vitanyi, CWI and University of Amsterdam


************************************************************************

CALL FOR PARTICIPATION

Informal Workshop on ``Knowledge Compilation and Speedup Learning''
To be held after ML-92
Saturday, July 4, 1992 Aberdeen, Scotland

Description of Workshop


``Knowledge Compilation'' is the problem of converting a declarative
specification or a domain theory to an efficient executable program.
``Speedup Learning'' is the problem of improving, or speeding up, a slow
problem solver. These two tasks are closely related since a declarative
specification or a domain theory can be viewed as a slow problem solver.
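
To make the view of a domain theory as a slow problem solver concrete,
here is a minimal, invented sketch (not from the workshop text): a
naive declarative definition of ancestry answered by search, and a
crude ``compiled'' version that caches derived conclusions so that
repeated queries become lookups.

# Hypothetical sketch: a declarative domain theory used as a slow problem
# solver, plus a crude speedup step that "compiles" derived answers into a
# cache for fast reuse.  Facts and names are invented for illustration.

PARENT = {("tom", "bob"), ("bob", "ann"), ("ann", "sue")}

def is_ancestor(x, y):
    """Slow, declarative reading: ancestor = parent, or parent of an ancestor."""
    if (x, y) in PARENT:
        return True
    return any(is_ancestor(z, y) for (p, z) in PARENT if p == x)

_derived = {}    # the "compiled" knowledge: query -> answer

def is_ancestor_fast(x, y):
    """Same theory, but derived answers are stored and reused (speedup)."""
    if (x, y) not in _derived:
        _derived[(x, y)] = is_ancestor(x, y)
    return _derived[(x, y)]

if __name__ == "__main__":
    print(is_ancestor_fast("tom", "sue"))   # computed by search: True
    print(is_ancestor_fast("tom", "sue"))   # answered from the cache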

The approaches to knowledge compilation or speedup learning include
explanation-based learning, empirical learning, and partial evaluation,
among others. This workshop aims to bring together researchers in all these
areas with the goal of achieving a better understanding of knowledge compilation
and speedup learning. This workshop is also a sequel to the ``Knowledge
Compilation'' workshop which was organized by Jim Bennett, Tom Dietterich, and
Jack Mostow in 1986.

There have been a number of results -- both positive and negative -- in
speedup learning in the past few years. For example, while there were some
positive results in Explanation-Based Learning (EBL) (e.g., Minton), the
``utility problem,'' or the exorbitant computational cost of using the learned
knowledge, proved to be a significant obstacle to further progress (e.g.,
Minton, Tambe and Rosenbloom). On the other hand, there have also emerged some
interesting relationships between partial evaluation and EBL (e.g.,
van Harmelen and Bundy), and between EBL and empirical learning (e.g., Yoo and
Fisher). This appears to be a good time to consolidate what we know from all
these areas and define the next round of research problems and issues.
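
For readers unfamiliar with the utility problem, the toy calculation
below, roughly in the spirit of the utility estimate Minton used for
learned control rules (savings times application frequency, minus
match cost), shows how a learned rule can have negative net value.
The numbers are invented for illustration only.

# Toy illustration of the utility problem: a learned rule is worth keeping
# only if the search time it saves, weighted by how often it applies,
# outweighs the cost of matching it against every problem.  Invented numbers.

def net_utility(avg_savings, application_freq, avg_match_cost):
    """Expected benefit per problem of retaining one learned rule."""
    return avg_savings * application_freq - avg_match_cost

rare_expensive = net_utility(avg_savings=50.0, application_freq=0.02,
                             avg_match_cost=3.0)       # -> -2.0 per problem
common_cheap   = net_utility(avg_savings=10.0, application_freq=0.40,
                             avg_match_cost=0.5)       # -> +3.5 per problem

for name, u in [("rare_expensive", rare_expensive),
                ("common_cheap", common_cheap)]:
    print(f"{name}: utility {u:+.1f} -> {'keep' if u > 0 else 'discard'}")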

Specifically, we are interested in questions like the following:

-- what does it mean to compile knowledge or speed up a problem solver?
-- what are the factors that influence the speedup?
-- what are the conditions under which successful speedup can occur?
-- how do examples help in obtaining speedup?
-- what are some of the methods that can achieve compilation/speedup?
-- are different speedup/compilation methods distinguishable?
-- what are the conditions of success for various methods?
-- how does the learning protocol affect the speedup?
-- how does speedup learning relate to concept learning?
-- how does the language of knowledge representation influence speedup learning?
-- how can speedup learning methods be scaled up to real-world tasks?
-- what are some of the promising research directions in this area?

This is not an exhaustive list but a sample of questions that we
are interested in. We specifically want to emphasize informal discussions
and exchange of ideas, rather than polished research results. In this
spirit, we welcome position papers on the research directions as well as
rational reconstructions of previous work. Especially encouraged are papers
that elucidate the relationships between different approaches to speedup
learning/knowledge compilation (e.g., explanation-based learning and
empirical learning).

If you wish to do a presentation at the workshop, please email an
extended abstract (of a maximum of 1000 words) to tadepalli@cs.orst.edu by
March 20, 1992. The file should be in ascii or postscript only. If email
is impossible, then please send 4 copies of the abstract to:

Prasad Tadepalli
Department of Computer Science
303 Dearborn Hall
Oregon State University
Corvallis, OR 97331-3202


If you do not wish to present a paper but want to attend the workshop,
please send a one page summary of your relevant research and publications
to the same address by the above date.

Important Dates:

March 20 - Abstracts and research descriptions due
May 1 - Acceptance notification
June 1 - Final version of papers due

Program Committee:

Oren Etzioni, University of Washington
Doug Fisher, Vanderbilt University
Nick Flann, Utah State University
Steve Minton, NASA Ames Research Center
Armand Prieditis, University of California
Devika Subramanian, Cornell University
Prasad Tadepalli, Oregon State University
Frank van Harmelen, University of Amsterdam


************************************************************************


CALL FOR PAPERS

Informal Workshop on ``Computational Architectures for
Supporting Machine Learning and Knowledge Acquisition''

To be held after ML-92
Saturday, July 4, 1992 Aberdeen, Scotland



The synthesis and refinement of a knowledge base has been the focus
of research in both the knowledge acquisition and machine learning
communities. Both groups, working independently, have produced with
varying degrees of success, a number of systems for producing or
refining a knowledge base. Knowledge acquisition (and apprentice)
systems have considered these problems by focusing on how a human
expert can be used to identify knowledge or to identify errors and
suggest modifications. Machine learning uses examples of concepts
and/or a domain theory to induce or modify a knowledge base. Although
both communities rely on very different sources of knowledge, both
face the key issues of defining what needs to be learned and
techniques that are useful for learning this knowledge.

Identifying what is to be learned depends in part on the
description of the problem solving goals for which the knowledge base
might be used. There are many ways to describe these goals, and the
proliferation of many different kinds of knowledge-based system
descriptions (e.g. rule bases, theorem provers, problem spaces, task
specific architectures) is evidence of this.

The difference between these descriptions comes from the models of
the problem solving process the knowledge base is to support. For
example, many knowledge-based systems for problems such as diagnosis
are described as collections of rules. Another way to describe the
same system is as a collection of tasks that decompose into subtasks.
Regardless of the type of description of a knowledge base that is
used, each one defines a particular system architecture whose
organization has great impact on what is to be learned. The
architecture essentially defines the knowledge that is to be learned
or modified.

Just as the architecture defines the types of knowledge that are
needed and affects the efficiency and effectiveness of the knowledge
base's performance, the architecture also has great impact upon the
efficacy of the learning process. Usually the number of concepts that
can be learned or modifications that can be made is very large, and
for a learning system to operate efficiently, it must be able to
constrain the space of alternatives to those most likely to be useful.
This is the role of bias in learning systems -- to guide the learning
toward a subset of possible concepts that improves performance. The
system architecture is an excellent guide to the selection of bias.
Unfortunately, there has been little interaction between the knowledge
acquisition, machine learning, and problem solving communities. Most
of the discussion in these areas has also focused on techniques for
synthesizing or modifying knowledge instead of considering how the
architecture constrains what is to be learned.
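
As a deliberately small illustration of this point (the rule base and
refinement operator below are hypothetical, not drawn from any system
discussed here), consider an architecture whose only sanctioned
modification is adding one attribute test to one existing rule: the
refinement space the learner must search then contains at most
|rules| x |unused tests| candidates rather than arbitrary edits.

# Hypothetical mini-example: a rule-base architecture whose single refinement
# operator is "add one attribute test to one rule".  The architecture thereby
# bounds the space of candidate modifications the learner must consider.

RULES = [
    {"if": {"fever": "yes"}, "then": "flu"},
    {"if": {"cough": "yes", "fever": "no"}, "then": "cold"},
]
TESTS = [("fatigue", "yes"), ("fatigue", "no"), ("ache", "yes"), ("ache", "no")]

def candidate_refinements(rules, tests):
    """Yield every rule base edit reachable by one architecture-sanctioned step."""
    for i, rule in enumerate(rules):
        for attr, value in tests:
            if attr in rule["if"]:
                continue                 # that attribute is already constrained
            yield i, {"if": dict(rule["if"], **{attr: value}),
                      "then": rule["then"]}

if __name__ == "__main__":
    candidates = list(candidate_refinements(RULES, TESTS))
    print(f"{len(candidates)} candidate refinements")   # 8 for this toy case
    # A refinement learner (or a human expert) chooses among these few
    # alternatives rather than among unconstrained knowledge-base edits.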

In this workshop, we will investigate the relationship between
architectures and constraints on the learning process by considering
how a number of architectures support the identification of what needs
to be learned. In particular, we invite researchers who have
implemented knowledge acquisition and machine learning systems based
upon such architectures to present their results, so that in a
comparative, cooperative session the entire community can profit from
their successes and learn from their limitations. The workshop will
consist of a series of presentations and panel discussions by
researchers that describe the impact of the knowledge base's
architecture on their learning experiments. The discussion will focus
on the types of architectures and their role in defining the learning
space. The goal of the workshop is to try to identify characteristics
of an architecture that provide power for learning. By providing an
open forum where examples of how each architecture affected the
learning process can be discussed, the workshop will offer a useful
way to investigate this issue.

In this workshop, we invite submissions that discuss how the
architecture the authors used defined what is to be learned. Points a
submission might address include:

- a description of the architecture of the knowledge base (e.g. rule base,
predicates, hierarchy of tasks, problem spaces, etc.) using
a vocabulary of domain independent terms that are used by
the learning system

- a description of the kinds of knowledge the learning system gathered in
knowledge base synthesis or the kinds of errors that are
identified in knowledge base refinement, more specifically,

-- in knowledge base synthesis:

--- how did the architecture focus the knowledge gathering
process (what is the space of knowledge to be learned
and how does the architecture limit this space),
--- what vocabulary does the architecture define for the
questions that are asked of the human expert,
--- how does the architecture affect the user interaction
with the learning system
--- how directly does the architecture define what is to
be learned (how much interpretation does the human
expert or system need to decide what is to be
learned)
--- what kinds of examples are used and how are they described

-- in knowledge base refinement:
--- what are the kinds of errors that are defined by the
architecture
--- how many possible errors are typically considered in a case
--- what kinds of knowledge are needed to determine which
error(s) occurred
--- what kinds of knowledge are required to modify the
knowledge base

- a discussion of the results of the learning system
-- what leverage did the architecture provide on the
learning problem
-- what limitations or costs did the architecture cause in learning


Submissions should be 5-7 pages in length in the format for the ML
conference, and they should be accompanied by a 1-2 page description
of your current research. People wishing to attend the workshop but
not present a paper *need only* send a research description. Please
send 5 copies of your submission and/or research description to:

Michael Weintraub
GTE Laboratories
40 Sylvan Road
Waltham, MA 02254
USA

Important Dates:

March 15 - Papers and research descriptions due
May 1 - Acceptance notification
June 1 - Final version of papers due

Program Committee:

Dean Allemang, Istituto Dalle Molle di Studi sull'Intelligenza
Artificiale, Lugano (dean@idsia.ch)
Ray Bareiss, Northwestern University (bareiss@ils.nwu.edu)
Susan Craw, Robert Gordon Institute of Technology, Aberdeen
(smc@csd.abdn.ac.uk)
Henrik Eriksson, Linkoping University (eriksson@sumex-aim.stanford.edu)
Ashok Goel, Georgia Institute of Technology (goel@cc.gatech.edu)
Tom Gruber, Stanford University (gruber@sumex-aim.stanford.edu)
Mark Musen, Stanford University (musen@sumex-aim.stanford.edu)
Maarten van Someren, University of Amsterdam, (maarten@swi.psy.uva.nl)
Michael Weintraub (Chair), GTE Laboratories (maw2@gte.com)


*********************************************************************


CALL FOR PAPERS
Informal Workshop on ``Integrated Learning in Real-world Domains''
To be held after ML-92
Saturday, July 4, 1992 Aberdeen, Scotland


Experience has shown that many learning paradigms fail to scale up to
real-world problems. One response to these failures has been to
construct systems which use multiple learning paradigms. Thus, if one
paradigm succeeds at the failure points of others, the effectiveness
of the overall system will be enhanced (i.e., coverage is gained).
Consequently, integrated techniques have become widespread over the
last few years. Such systems can be viewed on a spectrum ranging from
"tool-box" approaches to tightly-integrated systems.

In a tool-box approach, a number of different learning paradigms are
packaged together. The user, faced with a particular problem, decides
which paradigm to use in this instance, or how to combine paradigms to
solve the problem jointly. Examples of "tool-box" systems include
Michalski's MTL, EMERALD, and INLEN. In tightly-integrated systems,
several machine learning paradigms are combined to produce a new
technique. Examples of tightly-integrated systems include Danyluk's
Gemini, Pazzani's OCCAM, Cohen's A-EBL and GRENDEL, Pazzani, Brunk &
Silverstein's FOCL, Shavlik & Towell's ANN-EBL, Mooney & Ourston's
EITHER, and Bergadano, Giordana & Saitta's ML-SMART. Between these
two ends of the spectrum are loosely-coupled systems where the
different learning algorithms all attempt more or less independently
to solve either all or part of the problem, perhaps cooperating
somewhat. GTE's ILS is an example of this type of system.
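
Purely as an illustration of the tool-box end of that spectrum (the
learners and data below are hypothetical and do not describe MTL,
EMERALD, INLEN, or any other system named above), the sketch packages
two trivial learners and lets a simple hold-out test choose between
them for a given problem; a tightly-integrated system would instead
fuse the paradigms into a single new algorithm.

# Hypothetical tool-box sketch: several independent learners are packaged
# together and a hold-out evaluation picks one per problem.

from collections import Counter

def majority_learner(train):
    """Learn the majority class; ignore the attributes entirely."""
    label = Counter(y for _, y in train).most_common(1)[0][0]
    return lambda x: label

def one_nn_learner(train):
    """1-nearest-neighbour over feature tuples (Hamming distance)."""
    def classify(x):
        _, label = min(train,
                       key=lambda ex: sum(a != b for a, b in zip(ex[0], x)))
        return label
    return classify

def toolbox_select(learners, train, holdout):
    """Return the trained classifier that scores best on the held-out set."""
    def accuracy(clf):
        return sum(clf(x) == y for x, y in holdout) / len(holdout)
    return max((learner(train) for learner in learners), key=accuracy)

if __name__ == "__main__":
    train   = [((0, 0), "neg"), ((0, 1), "neg"), ((1, 0), "pos"), ((1, 1), "pos")]
    holdout = [((1, 1), "pos"), ((0, 0), "neg")]
    best = toolbox_select([majority_learner, one_nn_learner], train, holdout)
    print(best((1, 0)))   # the selected learner classifies a new case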

This workshop is intended to bring together researchers combining
empirical, knowledge-intensive, neural net (and other connectionist
approaches), genetic, case-based, statistical (classical, Bayesian,
and non-parametric approaches) or other learning techniques to solve
real-world problems. Special consideration will be given to systems
which combine techniques from a wide variety of fields.

This workshop will encourage papers on the following topics:

- a case study detailing useful information learned from the
application of an integrated technique to a real-world domain,

- results on novel combinations of paradigms, or

- comparative analysis (either empirical or theoretical) of relative
performance of individual learning paradigms and integrated techniques


Additionally, each paper should answer the following questions:

- What is "real" about your domain?

- Do you have large quantities of data? Is it reliable?

- Do you have a source of expert domain knowledge? What is it?
Is it reliable?

- Is it important that anyone ``understand'' the results of your
learning system?

- Is it sufficient to show improvement of a performance system
empirically?

- Need the learning system justify itself?

- Need it work cooperatively with an expert?

- Why is a single learning paradigm inadequate?

- When working in a real domain, does the combination of paradigms
applied become so tailored to the problem that it does not apply more
generally?


Please send 4 copies of a paper (max. 10 pages, ML-92 format) or (if
you do not wish to present a paper) a description of your current
research to the workshop chairperson:

Patricia Riddle
Boeing Computer Services
P.O. Box 24346, MS 7L-64
Seattle, Washington USA 98124-0346

Important Dates:
March 15 - Papers and research descriptions due
May 15 - Acceptance notification
May 29 - Final version of papers due

Program Committee:
Bernard Silver, GTE Laboratories
Andrea Danyluk, NYNEX Science and Technology
Michael Pazzani, UC Irvine
Steve Chien, Jet Propulsion Laboratory
Lorenza Saitta, Universita' di Torino
Yves Kodratoff



*************************************************************************

CALL FOR PAPERS
Workshop on ``Machine Discovery''
To be held after ML-92
Saturday, July 4, 1992 Aberdeen, Scotland


The number of researchers working on machine discovery (scientific
discovery, knowledge discovery in databases, automation of data
analysis, and other areas) is currently greater than one hundred and
growing. A substantial number of new projects are being developed and
plenty of interesting results can be shared. Discovery researchers
constitute an important group within machine learning, driven by
specific interests, applications, and evaluation mechanisms. The
Machine Discovery Workshop will be the place for them to gather and
discuss the specialized topics of discovery research.

Several overlapping communities will have a chance to meet, including,
among others, those who work on scientific discovery, those who focus
on knowledge discovery in data bases, and those dealing with data
analysis and discovery of data dependencies.

The program will consist of paper presentations, a panel discussion,
and demonstrations of machine discovery systems. All accepted papers
will be published in the proceedings; some will be presented during
the poster session.

Topics of interest include, but are not limited to:

- Scientific discovery: empirical discovery, data driven reasoning,
theory revision, discovery of quantitative laws, discovery of hidden
structure, experiment design and planning, theory driven reasoning,
domain applications and cognitive models;

- Discovery in databases: discovery of regularities and concepts,
discovery of data dependencies, discovery of causal relations, use
of domain knowledge;

- Automated data analysis: data and concept classification, combining
search with statistics, search for empirical equations (see the toy
sketch after this list);

- other: integrated and multiparadigm systems, exploration of
environment, evaluation mechanisms, domain-specific discovery
methods, mathematical discovery and discovery in abstract spaces.
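
As a toy illustration of what a search for empirical equations can
look like (a BACON-flavored invention, not a description of any system
by the organizers or committee), the sketch below tries a few simple
combinations of two measured variables and reports the one whose value
stays nearly constant across the data.

# Toy, BACON-flavored sketch: search for a simple combination of two measured
# quantities that is (nearly) constant over the data.  Invented example.

CANDIDATE_TERMS = {
    "y/x":   lambda x, y: y / x,
    "y/x^2": lambda x, y: y / (x * x),
    "x*y":   lambda x, y: x * y,
}

def discover_law(data, tolerance=0.01):
    """Return (term, constant) for the first near-constant term, else None."""
    for name, term in CANDIDATE_TERMS.items():
        values = [term(x, y) for x, y in data]
        mean = sum(values) / len(values)
        if all(abs(v - mean) <= tolerance * abs(mean) for v in values):
            return name, mean
    return None

if __name__ == "__main__":
    # Synthetic observations obeying y = 3 * x^2, so y/x^2 should be constant.
    data = [(x, 3.0 * x * x) for x in (1.0, 2.0, 3.0, 4.0)]
    print(discover_law(data))    # ('y/x^2', 3.0)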


REQUIREMENTS FOR SUBMISSION:

Papers must not exceed 10 single-spaced pages, not counting the
bibliography, but including an abstract of 180-220 words.

Submissions in the demo category: a 3-page description of the system,
plus a commented sample run of the system (up to 3 pages), plus
answers to a questionnaire that will be mailed on request.

IMPORTANT DATES:

Submissions (on paper; in 4 copies) must arrive by March 31 to
Jan Zytkow
Computer Science Department
Wichita State University
Wichita, KS 67208

email: zytkow@wsuiar.wsu.ukans.edu
phone: 316-689-3178

Notifications of acceptance will be sent on April 29 (provide your
e-mail address, if possible).

Camera-ready copies must arrive by June 1.

PROGRAM COMMITTEE (and ORGANIZING COMMITTEE)

Peter Edwards University of Aberdeen, United Kingdom
Ken Haase MIT, USA
Jiawei Han Simon Fraser University, Canada
Peter Karp SRI International, USA
Willi Klosgen German National Research Center for CS
Yves Kodratoff Universite Paris-Sud, France
Deon Oosthuizen University of Pretoria, South Africa
Paul O'Rorke Univ.of California, Irvine, USA
Gregory Piatetsky-Shapiro GTE Laboratories, USA
Armand Prieditis Univ.of California, Davis, USA
Cullen Schaffer CUNY/Hunter College, USA
Derek Sleeman University of Aberdeen, United Kingdom
Raul Valdes-Perez Carnegie-Mellon, USA
Robert Zembowicz Wichita State Univ., USA
Wojciech Ziarko Univ. of Regina, Canada
Jan Zytkow Wichita State Univ. USA (workshop chair)

------------------------------

END of ML-LIST 3.23
