[Humanist] 31.91 events: crowdsourcing; words & structures; computational history

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Thu Jun 8 07:10:03 CEST 2017

                  Humanist Discussion Group, Vol. 31, No. 91.
            Department of Digital Humanities, King's College London
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    Marten DURING <marten.during at uni.lu>                      (59)
        Subject: CfP: 4th Workshop on Computational History
                (HistoInformatics2017) - November 6, 2017, Singapore

  [2]   From:    Carmen Brando <carmen.brando at gmail.com>                  (110)
        Subject: SECOND CFP: Workshop on Language, Ontology, Terminology and
                Knowledge Structures (LOTKS - 2017)

  [3]   From:    Gabriel BODARD <gabriel.bodard at SAS.AC.UK>                 (30)
        Subject: [DIGITALCLASSICIST] Seminar: Crowdsourcing a digital library
                of pre-modern Chinese

        Date: Wed, 7 Jun 2017 11:59:08 +0000
        From: Marten DURING <marten.during at uni.lu>
        Subject: CfP: 4th Workshop on Computational History (HistoInformatics2017) - November 6, 2017, Singapore

4th Workshop on Computational History (HistoInformatics2017) - 
November 6, 2017, Singapore

Held in conjunction with the 26th ACM International Conference on Information and Knowledge Management (CIKM 2017), 6-10 November, Singapore.


The HistoInformatics workshop series brings together researchers in the historical disciplines, computer science and associated disciplines, as well as the cultural heritage sector. Historians, like other humanists, show keen interest in computational approaches to the study and processing of digitized sources (usually text, images, audio). In computer science, experimental tools and methods face the challenge of being validated against real-world questions and applications. The HistoInformatics workshop series is designed to bring researchers in both fields together to discuss best practices as well as possible future collaborations.

Traditionally, historical research is based on the hermeneutic investigation of preserved records and artefacts to provide a reliable account of the past and to discuss different hypotheses. Alongside this hermeneutic approach, historians have always been interested in translating primary sources into data and have used methods, often borrowed from the social sciences, to analyze them. The new wealth of digitized historical documents has, however, opened up completely new challenges for the computer-assisted analysis of, e.g., large text or image corpora. Historians can greatly benefit from advances in the computer and information sciences dedicated to the processing, organization and analysis of such data. New computational techniques can be applied to help verify and validate historical assumptions. We call this approach HistoInformatics, analogous to Bioinformatics and ChemoInformatics, which have proposed new research trends in biology and chemistry respectively. The main topics of the workshop are: (1) support for historical research and analysis in general through the application of computer science theories or technologies, (2) analysis and re-use of historical texts, (3) analysis of collective memories, (4) visualizations of historical data, and (5) access to the large wealth of accumulated historical knowledge.

HistoInformatics workshops have taken place three times before. The first (http://www.dl.kuis.kyoto-u.ac.jp/histoinformatics2013/) was held in conjunction with the 5th International Conference on Social Informatics in Kyoto, Japan in 2013. The second workshop (http://www.dl.kuis.kyoto-u.ac.jp/histoinformatics2014/) took place at the same conference the following year in Barcelona. The third workshop (http://www.dl.kuis.kyoto-u.ac.jp/histoinformatics2016/) was held in July 2016 in Krakow, Poland in conjunction with ADHO’s 2016 Digital Humanities conference.

For HistoInformatics2017, we are interested in a wide range of topics which are of relevance for history, the cultural heritage sector and the humanities in general. Topics of interest include (but are not limited to):

-Natural language processing and text analytics applied to historical documents
-Analysis of longitudinal document collections
-Search and retrieval in document archives and historical collections, associative search
-Causal relationship discovery based on historical resources
-Named entity recognition and disambiguation
-Entity relationship extraction, detecting and resolving historical references in text
-Finding analogical entities over time
-Computational linguistics for old texts
-Analysis of language change over time
-Digitizing and archiving
-Modeling evolution of entities and relationships over time
-Automatic multimedia document dating
-Applications of Artificial Intelligence techniques to History
-Simulating and recreating the past course of actions, social relations, motivations, figurations
-Handling uncertain and fragmentary text and image data
-Automatic biography generation
-Mining Wikipedia for historical data
-OCR and transcription of old texts
-Effective interfaces for searching, browsing or visualizing historical data collections
-Studies on collective memory
-Studying and modeling forgetting and remembering processes
-Estimating credibility of historical findings
-Probing the limits of HistoInformatics
-Epistemologies in the Humanities and Computer Science

Practical matters

Paper submission deadline: July 15, 2017 (23:59 Hawaii Standard Time)
Notification of acceptance: August 12, 2017
Camera ready copy deadline: August 19, 2017
Workshop date: November 6, 2017

Submissions need to be:

- formatted according to ACM camera-ready template (http://www.acm.org/publications/proceedings-template).
- submitted in English in PDF format at the workshop's EasyChair page (https://easychair.org/conferences/?conf=histoinformatics2017)

Full paper submissions must describe substantial, original, completed and unpublished work, not accepted for publication elsewhere and not currently under review elsewhere. Long papers may consist of up to eight (8) pages of content, including references and figures. Short paper submissions must describe a small and focused contribution. Short papers may consist of up to four (4) pages (including references and figures). Accepted papers will be published in the CEUR Workshop Proceedings (http://ceur-ws.org/).


For any inquiries, please contact the organizing committee at mohammed.hasanuzzaman at adaptcentre.ie / histoinformatics2017 at easychair.org


Dr Marten Düring

Maison des Sciences Humaines
11, Porte des Sciences
Room 4.146
L-4366 Esch-sur-Alzette
T +352 46 66 44 9029


        Date: Wed, 7 Jun 2017 14:12:09 +0200
        From: Carmen Brando <carmen.brando at gmail.com>
        Subject: SECOND CFP: Workshop on Language, Ontology, Terminology and Knowledge Structures (LOTKS - 2017)

Workshop on Language, Ontology, Terminology and Knowledge Structures
(LOTKS - 2017) 

In conjunction with the 12th International Conference on Computational
Semantics (IWCS), 19th September, 2017 Montpellier (France)

Website: https://langandonto.github.io/LangOnto-TermiKS-2017/

Paper submissions due: 10th July 2017

Workshop Description

This workshop, the second in a joint series, will bring together two
closely related strands of research: on the one hand, the overlap between
ontologies and computational linguistics; on the other, the relationship
between knowledge modelling and terminologies -- as well as the many
points of intersection between these two topics.

Languages and Ontologies:

Formal ontologies are taking on an increasingly important role in
computational linguistics and automated language processing. Knowledge
models and ontologies are of interest to several areas of NLP including,
but not limited to, Machine Translation, Question Answering, and Word Sense
Disambiguation. At a more abstract level ontologies can help us to model
and reason about natural language semantics. They can also be used for the
organisation and formalisation of linguistically relevant categories such
as those used in tagsets for corpus annotation. At the same time, the fact
that formal ontologies are being increasingly accessed by users with a
limited or with no background in formal logic has led to a growing interest
in the development of front ends that allow for the easy editing, querying
and summarisation of such resources; it has also led to work in developing
natural language interfaces for authoring and for evaluating ontologies.
Another area that is now beginning to receive more attention is the
application of ontologies and taxonomies to the annotation and study of
literary texts, as well as of texts more generally in the humanities. This
is closely related to the ontology-enhanced modelling of lexicographic
resources, another topic which is gaining in popularity.

This brings us to terminology as a linguistic field, where in
recent years there has been a shift from merely compiling specialized
lexicographic resources to exploring terminology as a tool for structuring
knowledge in a given domain. As such, this has led to more intelligent ways
of accessing, extracting, representing, modelling, visualising and
transferring knowledge. Numerous tools for the automatic extraction of
terms, term variants, knowledge-rich contexts, definitions, semantic
relations, and taxonomies from specialized corpora have been developed for
a number of languages and new theoretical approaches have emerged as
potential frameworks for the study of specialized communication. However,
the building of adequate knowledge models for practitioners (e.g. experts,
researchers, translators, teachers etc.), on the one hand, and for use by
NLP applications (including cross-language, cross-domain, cross-device,
multimodal, multi-platform applications) on the other, remains a
challenge. LOTKS will provide a forum for discussion on how to best bridge
these two sets of requirements.

Motivation and Topics of Interest

This workshop welcomes contributions from researchers in fields such as
linguistics, terminology, and knowledge engineering whose work fits our
topics of interest, as well as from interested industry professionals.
Building on the success of the 1st LangandOnto workshop (co-located
with IWCS 2015) and of last year’s joint LangandOnto/TermiKS workshop
(co-located with LREC 2016), this workshop aims to create a forum for open
discussion that highlights the common areas of interest in the different
fields concerned and fosters dialogue between the approaches taken by
each discipline. We therefore particularly welcome approaches with a
cross-language, cross-domain and/or interdisciplinary scope.

Topics of interest include but are not limited to:

-- NLP-driven ontology modelling
-- The use of ontologies to structure linguistic tagsets
-- Natural language interfaces to ontologies
-- Ontologies for NLP tasks (e.g. textual entailment, summarisation, word
   sense disambiguation) and Information Retrieval
-- Lexical Ontologies
-- The use of ontologies in analysing/studying literary texts
-- Ontology-driven natural language generation
-- Linguistic, cognitive, psycholinguistic, sociolinguistic, computational
   and hybrid approaches to knowledge modelling
-- Construction of terminological knowledge bases
-- Terminology modelling for MT
-- Knowledge extraction from user-generated content
-- Frame-based approaches to knowledge extraction and representation
-- Building knowledge resources for less-resourced domains and languages
-- Visual components of specialized knowledge bases
-- Visualisation techniques for knowledge representations
-- Term variation and knowledge representations
-- NLP applications for terminology management
-- Terminologies in the Digital Humanities


We invite proposals in the form of abstracts of up to 6 pages (up to 4
pages of text + 2 pages for references) for short papers, or up to 8 pages
(up to 6 pages of text + 2 pages for references) for long papers. Accepted
workshop papers will be published together with the general program papers.

Follow the formatting guidelines for the IWCS general program, which can be
found at: https://www.lirmm.fr/iwcs2017/iwcs_instructions.php

Submission via Easychair at https://easychair.org/

Camera ready - Requirements

Final paper format: up to 10 pages (8 pages of text + 2 of references).


Important dates

Paper submissions due: 10th July 2017
Paper notification of acceptance: 31st July 2017
Camera-ready papers due: 4th September 2017
Workshop: 19th September 2017

For all enquiries please contact: langandonto at gmail.com

The Organising Committee

Francesca Frontini, Université Paul-Valéry Montpellier 3 - Praxiling (
francesca.frontini at univ-montp3.fr)
Larisa Grčić Simeunović, University of Zadar (lgrcic at unizd.hr)
Fahad Khan, Istituto di Linguistica Computazionale "A. Zampolli" - CNR,
Italy (fahad.khan at ilc.cnr.it)
Artemis Parvizi, Oxford University Press, UK (Artemis.Parvizi at oup.com)
Špela Vintar, University of Ljubljana, Slovenia (spela.vintar at ff.uni-lj.si)

Carmen Brando, PhD
Ecole des hautes études en sciences sociales
54 boulevard Raspail, Paris

        Date: Wed, 7 Jun 2017 11:37:43 +0100
        From: Gabriel BODARD <gabriel.bodard at SAS.AC.UK>
        Subject: [DIGITALCLASSICIST] Seminar: Crowdsourcing a digital library of pre-modern Chinese

Digital Classicist London seminar 2017

Donald Sturgeon (Harvard University)

*Crowdsourcing a digital library of pre-modern Chinese*

Friday June 9th at 16:30, in room 234, Senate House, Malet Street, 
London WC1E 7HU

The seminar will be livecast on the Digital Classicist London YouTube channel.

Rapid digitization of historical primary sources presents challenges to 
traditional models of digital library design along with opportunities 
for new approaches. This talk introduces the Chinese Text Project 
(ctext.org), a crowdsourced digital library of pre-modern Chinese 
designed to leverage a large, distributed user community to curate 
material in a scalable and decentralized way. This platform is used 
daily by over 25,000 users around the world, many of whom actively 
contribute to the development of its contents. Through use of open APIs, 
the platform also facilitates digital humanities research and teaching, 
as well as integration with externally developed projects and tools.


Dr Gabriel BODARD
Reader in Digital Classics

Institute of Classical Studies
University of London
Senate House
Malet Street
London WC1E 7HU

E: gabriel.bodard at sas.ac.uk
T: +44 (0)20 78628752

