Technical Description Use Case VFU-ELF

Semantic CorA was developed in close connection to the use case of analyzing educational lexica in research on the history of education. In this context, two research projects are being realised in the VRE.

[Image: Entities for Lexica]

=Research Data=

[[File:Extension_OfflineImportLexicon.PNG|right|thumb|250px|OfflineImportLexicon]]

In both research projects the examined data are educational lexica. A lexicon can be subdivided into four entities: the lexicon itself, a volume, an article (lemma) and the image file of the digitisation. Each entity has its own properties (see image "Entities for Lexica"). On this basis, we created a data model which made it possible to describe the given data.
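Read schematically, this data model forms a simple hierarchy in which neighbouring levels reference each other; the concrete linking parameters (such as Part of Lexicon or Has Volume) are listed with the data templates below:

<pre>
Lexicon     (bibliographic description of the whole work)
└─ Volume      (one physical volume of the lexicon)
   └─ Lemma       (a single article, the unit of analysis)
      └─ Image file  (digitised page on which the lemma appears)
</pre>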

==Import and Handling==

We were able to import the bibliographic metadata (describing the data down to the article level) and the digitized objects from Scripta Paedagogica Online (SPO), which is accessible via an OAI interface. In an automated process, nearly 22,000 articles from different lexica were imported, described with the data model and made ready for analysis. To add further articles manually, we developed the extension [[Semantic_CorA_Extensions#OfflineImportLexicon|OfflineImportLexicon]], which provides a standardized import routine for the researchers.

==Data Templates of Lexica==

The usage of templates offers centralised data management. Using this feature, a baseline for data import and display is provided. Following the described entities of a lexicon, each entity level holds its own set of data fields.
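The template definitions themselves are not reproduced on this page. As a rough sketch only, assuming the common Semantic MediaWiki pattern in which a data template stores each parameter as a property of the same name, the body of such a template could look roughly like this (property and category names are assumptions derived from the parameter names below; the actual Cora templates may be implemented differently):

<pre>
<includeonly>
<!-- store the template parameters as semantic properties of the same name -->
* Title: [[Title::{{{Title|}}}]]
* Year of Publication: [[Year of Publication::{{{Year of Publication|}}}]]
* Publisher: [[Publisher::{{{Publisher|}}}]]
* Volumes: [[Has Volume::{{{Has Volume|}}}]]
[[Category:Lexicon]]
</includeonly>
</pre>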

The [[Template:Cora_Lexicon_Data|Cora Lexicon Data]] template is called through the following format:

<pre>
{{Cora Lexicon Data
| Title =
| Subtitle =
| Place of Publication =
| Editor =
| Coordinator =
| Author =
| Publisher =
| Language =
| Edition =
| URN =
| Year of Publication =
| IDBBF =
| Has Volume =
}}
</pre>
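For illustration, a filled-in call could look like the following; the values are merely example data and do not describe an actual record in the VRE. Fields that do not apply can be left empty, as in the blank format above.

<pre>
{{Cora Lexicon Data
| Title = Enzyklopädisches Handbuch der Pädagogik
| Place of Publication = Langensalza
| Editor = Wilhelm Rein
| Language = German
| Edition = 2
| Year of Publication = 1903
| Has Volume = Enzyklopädisches Handbuch der Pädagogik, Volume 1
}}
</pre>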

The [[Template:Cora_Volume_Data|Cora Volume Data]] template is called through the following format:

<pre>
{{Cora Volume Data
| Title =
| Subtitle =
| Volume Number =
| Place of Publication =
| Publisher =
| Editor =
| Language =
| URN =
| Year of Publication =
| IDBBF =
| Physical Description =
| Part of Lexicon =
| Numbering Offset =
| Type of Numbering =
}}
</pre>

The [[Template:Cora_Lemmata_Data|Cora Lemmata Data]] template is called through the following format:

<pre>
{{Cora Lemmata Data
| Title =
| Subtitle =
| Part of Lexicon =
| Part of Volume =
| Language =
| URN =
| IDBBF =
| Original Person =
| First Page =
| Last Page =
| Has Digital Image =
| Category =
}}
</pre>


The [[Template:Cora_Image_Data|Cora Image Data]] template is called through the following format:

<pre>
{{Cora Image Data
| Original URI =
| Rights Holder =
| URN =
| Page Number =
| Page Numbering =
| Part of Article =
| Part of TitlePage =
| Part of TOC =
| Part of Preface =
}}
</pre>
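Taken together, the Part of … and Has … parameters tie the four entity levels to each other: the lexicon lists its volumes (Has Volume), a volume and a lemma point back to the lexicon (Part of Lexicon), a lemma points to its volume (Part of Volume) and to its scan (Has Digital Image), and an image page points to the article it belongs to (Part of Article). A reduced sketch of the resulting calls, with all page and file names invented for illustration and all other fields omitted:

<pre>
{{Cora Volume Data
| Title = Example Lexicon, Volume 1
| Part of Lexicon = Example Lexicon
}}

{{Cora Lemmata Data
| Title = Example Lemma
| Part of Lexicon = Example Lexicon
| Part of Volume = Example Lexicon, Volume 1
| Has Digital Image = File:Example_scan_0017.png
}}

{{Cora Image Data
| Part of Article = Example Lemma
}}
</pre>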

== Mapping with Semantic Web Vocabularies ==

SMW already offers the [https://semantic-mediawiki.org/wiki/Help:Import_vocabulary import of vocabularies] by default. To describe the four entities of the lexicon (lexicon, volume, lemma, image) and, additionally, further main entities such as persons, corporate bodies and concepts, we used the following range of vocabularies:

* [[MediaWiki:Smw_import_bibo]]
* [[MediaWiki:Smw_import_gnd]]
* [[MediaWiki:Smw_import_rel]]
* [[MediaWiki:Smw_import_skos]]
* [[MediaWiki:Smw_import_foaf]]

Clicking one of these links shows in detail which properties were imported and used.
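As a rough sketch of what such an import page contains — following the general format documented on the SMW help page linked above, not the actual content of the pages listed here — [[MediaWiki:Smw_import_foaf]] could look like this:

<pre>
http://xmlns.com/foaf/0.1/|[http://www.foaf-project.org/ Friend of a Friend]
 name|Type:Text
 homepage|Type:URL
 knows|Type:Page
 Person|Category
</pre>

A wiki property is then mapped to a vocabulary element by adding an annotation such as [[Imported from::foaf:name]] to its property page.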

== Mainpage ==

The main page offers central information on the current projects and on the different research activities (recent changes, changed lexica, projects, help needed, timeline of lexica, etc.). There is a default main page, [[Main_Page_Default]], which offers a two-column layout. Please also have a look at the design templates [[Template:Box1]] and [[Template:Box2]] and at the adjusted skin.
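Boxes such as the list of recently changed lexica can be filled with Semantic MediaWiki inline queries. A minimal sketch, assuming that lexicon pages are collected in a category named Lexicon and carry the properties from the data templates above (the category name and the exact queries used on the main page are assumptions):

<pre>
{{#ask: [[Category:Lexicon]]
 | ?Editor
 | ?Year of Publication
 | sort=Modification date
 | order=descending
 | limit=5
}}
</pre>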

[Image: Mainpage of VRE ELF for Educational Lexica Research]