Technical Description Use Case VFU-ELF

Revision as of 17:05, 6 March 2014
Semantic CorA was developed in close connection to the use case of analyzing educational lexica in research on the history of education. In this context, two researchers are working on their doctoral theses in the virtual research environment (VRE).

=Research Data=

In both research projects the examined data were educational lexica. A lexicon can be subdivided into three further entities: a volume, an article (lemma) and the image file of the digitization. Each entity has its own properties (see image "Entities for Lexica"). On this basis, we created a data model that made it possible to describe the given data. [[File:entities_CorA.png|right|thumb|400px|Entities for Lexica]]
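In Semantic MediaWiki terms, each entity is described through property annotations on its wiki page. A minimal sketch of how an article (lemma) page could carry such annotations is shown below; the properties ''Titel'', ''Verfasser'' and ''Type'' appear in the storage template later in this document, while the values and the property ''Gehört zu Band'' (linking the article to its volume) are purely illustrative:

<pre>
<!-- Illustrative article (lemma) page; all values are placeholders -->
[[Titel::Example lemma]]
[[Verfasser::Example author]]
[[Type::Article]]
<!-- Hypothetical property connecting the article entity to its volume entity -->
[[Gehört zu Band::Example volume 1]]
</pre>

In practice such annotations are not written by hand but generated by the storage template, so that every article page is described uniformly by the data model.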

==Import and Handling==

We were able to import bibliographic metadata (describing the data down to the article level) and the digitized objects from [http://bbf.dipf.de/digitale-bbf/scripta-paedagogica-online/digitalisierte-nachschlagewerke Scripta Paedagogica Online (SPO)], which was accessible via an OAI interface. In an automated process, nearly 22,000 articles from different lexica were imported, described with the data model, and made ready for analysis. To add further articles manually, we developed the extension [[Semantic_CorA_Extensions#OfflineImportLexicon|OfflineImportLexicon]], which provides a standardized import routine for the researchers.

==Templates & Forms==

Templates and semantic forms are essential for data management. They provide a consistent baseline for data import and display, which supports both completeness and usability. Example of a template for data storage:

{| class="wikitable"
! Titel
| [[Titel::{{{Titel|}}}]]
|-
! Verfasser
| [[Verfasser::{{{Verfasser|}}}]]
|-
! Type
| [[Type::{{{Type|}}}]]
|}
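A template like the one above is filled in either through a semantic form or through a direct template call on the article page. A sketch of such a call is shown below; the template name ''ArticleData'' and all values are illustrative, only the parameter names ''Titel'', ''Verfasser'' and ''Type'' are taken from the storage template above:

<pre>
{{ArticleData
|Titel=Example lemma
|Verfasser=Example author
|Type=Article
}}
</pre>

Each parameter value is passed into the corresponding <code>[[Property::{{{Parameter|}}}]]</code> annotation in the template, so the table display and the semantic storage of the data stay in sync.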