Technical Description Use Case VFU-ELF

Version of 5 March 2014, 20:43

Semantic CorA was developed in close connection to the use case of analyzing educational lexica in research on the history of education. In this context, two researchers are working on their doctoral theses in the VRE (virtual research environment).

=Research Data=

In both research projects the examined data were educational lexica. A lexicon can be subdivided into three further entities: a volume, an article (also known as a lemma) and the image file of the digitization. Each entity has its own properties (see image "Entities for Lexica"). On this basis, we created a data model that made it possible to describe the given data. [[Datei:entities_CorA.png|right|thumb|400px|Entities for Lexica]]

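The entity hierarchy described above can be sketched as plain data classes. This is an illustrative model only: the actual properties of each entity are those shown in the "Entities for Lexica" figure, and the field names used here (`lemma`, `filename`, etc.) are assumptions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ImageFile:
    """Digitization backing an article; the property name is assumed."""
    filename: str


@dataclass
class Article:
    """An article, also known as a lemma."""
    lemma: str
    images: List[ImageFile] = field(default_factory=list)


@dataclass
class Volume:
    number: int
    articles: List[Article] = field(default_factory=list)


@dataclass
class Lexicon:
    """A lexicon is subdivided into volumes, articles and image files."""
    title: str
    volumes: List[Volume] = field(default_factory=list)


lexicon = Lexicon("Example Lexicon")
lexicon.volumes.append(Volume(1, [Article("Didaktik", [ImageFile("p_001.jpg")])]))
print(lexicon.volumes[0].articles[0].lemma)  # -> Didaktik
```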

==Import and Handling==

We were able to import bibliographic metadata (describing the data down to the article level) and the digitized objects from [http://bbf.dipf.de/digitale-bbf/scripta-paedagogica-online/digitalisierte-nachschlagewerke Scripta Paedagogica Online (SPO)], which was accessible via an OAI interface. In an automated process, nearly 22,000 articles from different lexica were imported, described with the data model and ready for analysis. To add further articles manually, we developed the extension [http://wiki.bildungsserver.de/SMW-CorA/index.php/Semantic_CorA_Extensions#OfflineImportLexicon OfflineImportLexicon], which provides a standardized import routine for the researchers.
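As a minimal sketch of what such an OAI import involves, the snippet below parses titles out of an OAI-PMH ListRecords response, assuming Dublin Core metadata. The sample record is invented for illustration, not actual SPO data; a real harvest would additionally fetch the XML from the SPO endpoint page by page, following `resumptionToken` elements.

```python
import xml.etree.ElementTree as ET

# Abridged OAI-PMH ListRecords response with one Dublin Core record.
# The record content is a placeholder, not real SPO data.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Example Article (Lemma)</dc:title>
          <dc:identifier>oai:example:record-1</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NAMESPACES = {"dc": "http://purl.org/dc/elements/1.1/"}


def extract_titles(xml_text: str) -> list:
    """Collect all dc:title values from a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(".//dc:title", NAMESPACES)]


print(extract_titles(SAMPLE))  # -> ['Example Article (Lemma)']
```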

==Templates & Forms==

Templates and semantic forms are central to the data management. They provide a baseline for data import and display that supports completeness and usability. Example of a template for data storage:

{| class="wikitable"
 ! Titel
 | [[Titel::{{{Titel|}}}]]
 |-
 ! Verfasser
 | [[Verfasser::{{{Verfasser|}}}]]
 |-
 ! Type
 | [[Type::{{{Type|}}}]]
 |}
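A page would then call this template with named parameters, which the `[[Titel::…]]` annotations store as semantic properties. The template name and the values below are placeholders, not actual pages from the VRE:

```wikitext
{{ArticleData
|Titel=Beispielartikel
|Verfasser=Unknown Author
|Type=Lemma
}}
```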