RonPub



RonPub -- Research Online Publishing

RonPub (Research Online Publishing) is an academic publisher of online, open access, peer-reviewed journals. RonPub aims to provide a platform for researchers, developers, educators, and technical managers to share and exchange their research results worldwide.

RonPub Is Open Access:

RonPub publishes all of its journals under the open access model, as defined by the Budapest, Berlin, and Bethesda open access declarations:

  • All articles published by RonPub are fully open access and available online to readers free of charge.
  • All open access articles are distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.
  • Authors retain all copyright to their work.
  • Authors may also publish the publisher's version of their paper on any repository or website. 

RonPub Is Cost-Effective:

To be able to provide open access journals, RonPub defrays its publishing costs by charging a one-time publication fee for each accepted article. One of RonPub's objectives is to provide a fast and high-quality but lower-cost publishing service. To ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors who do not have funds to cover publication fees. We also offer a partial fee waiver to editors and reviewers of RonPub as a reward for their work. See the respective journal webpage for the concrete publication fee.

RonPub Publication Criteria:

What we are most concerned about is the quality, not the quantity, of publications. We only publish high-quality scholarly papers. The Publication Criteria page describes the criteria that a contribution must meet to be acceptable for publication in RonPub journals.

RonPub Publication Ethics Statement:

In order to ensure publishing quality and the reputation of the publisher, it is important that all parties involved in the act of publishing adhere to the standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools such as Anti-Plagiarism, PaperRater and Viper to check the content of manuscripts submitted to our journals against existing publications.

RonPub follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Long-Term Preservation in the German National Library

Our publications are archived and permanently preserved in the German National Library. These publications are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems.

Where is RonPub?

RonPub is a registered corporation in Lübeck, Germany. Lübeck is a beautiful coastal city in northern Germany, about 60 kilometers from Hamburg, with wonderful seaside resorts and sandy beaches as well as good restaurants.

Open Journal of Databases (OJDB)
OJDB, an open access and peer-reviewed online journal, publishes original and creative research results on database technologies. OJDB distributes its articles under the open access model. All articles of OJDB are fully open access and available online to readers free of charge. There is no restriction on the length of the papers. Accepted manuscripts are published online immediately.
Publisher: RonPub UG (haftungsbeschränkt), Lübeck, Germany
Contact: OJDB Editorial Office
ISSN: 2199-3459
Call for Papers: txt (UTF-8) | txt (ASCII) | pdf

Aims & Scope

Open Journal of Databases (OJDB) provides a platform for researchers and practitioners of databases to share their ideas, experiences and research results. OJDB publishes the following four types of scientific articles:

  • Short communications: reporting novel research ideas. The work presented should be technically sound and significantly advance the state of the art. Short communications also include exploratory studies and methodological articles.
  • Regular research papers: presenting full original findings backed by adequate experimental research. They make substantial theoretical and empirical contributions to the research field. Research papers should be written as concisely as possible.
  • Research reviews: providing an insightful and accessible overview of a certain field of research. They conceptualize research issues, synthesize existing findings and advance the understanding of the field. They may also suggest new research issues and directions.
  • Visionary papers: identifying new research issues and future research directions, and describing new research visions in the field. These visions will potentially have great impact on future society and daily life.

OJDB welcomes scientific papers in all the traditional and emerging areas of database research. There is no restriction on the length of the papers.

Topics relevant to this journal include, but are not limited to:

  • Core Data Management Technologies for Large-Scale Data Processing
    • New database architectures
    • Storage 
    • Transactions
    • Replication and consistency
    • Recovery
    • Physical representations/indexing
    • Query processing and optimization
    • Availability
    • Adaptivity and self-tuning
    • Power management
    • Virtualization
    • Data privacy and security
    • Parallel databases
    • Distributed databases
  • Data Models and Languages
    • XML and semi-structured data and queries
    • Semantic Web
    • Multimedia, temporal and spatial data and queries
    • Declarative languages
    • Simple-to-use language interfaces for data access
  • Domain-Specific Data Management
    • Mobile databases
    • Ubiquitous computing
    • Sensor databases
    • Internet of Things databases
    • Social Networks
  • Web and Heterogeneous Data
    • Information and knowledge extraction
    • Integration of data and services
    • Meta-data management
    • Data quality
    • Service-oriented architectures
  • Data Visualizations and Analysis
  • Emerging Database Technologies and Non-Standard Databases
    • NoSQL and NewSQL
    • Web databases
    • Semantic Web
    • Cloud databases
    • Deductive databases
    • Object-oriented databases
  • Performance and Evaluation
    • Benchmarks
    • Experimental methodology
    • Experimental analysis of existing complex systems and approaches

Author Guidelines

Publication Criteria

The Publication Criteria page provides important information for authors on preparing manuscripts with a high chance of acceptance.

Manuscript Preparation

Please prepare your manuscripts using the manuscript template of the journal, which is available for download as a Word version (doc, docx) and a LaTeX version (zip). The template describes the format and structure of manuscripts and provides other information necessary for preparing manuscripts. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts following the information on the submit page. Authors first submit their manuscripts in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file together with a Word file or a LaTeX folder (with all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. Authors who wish to do so should provide the names, affiliations and e-mail addresses of all suggested reviewers.

Manuscript Status

After submitting a manuscript, authors will receive an email confirming its receipt. Subsequent enquiries concerning the progress of a paper should be sent to the email address of the journal.

Review Procedure

OJDB is committed to a rigorous peer-review process. All manuscripts submitted for publication in OJDB are strictly and thoroughly peer-reviewed. When a manuscript is submitted, the editor-in-chief assigns it to an appropriate editor, who is in charge of the review process for that manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports are collected. The editor and the editor-in-chief evaluate the manuscript together with the review reports and make an accept/revise/reject decision. Authors are informed of the decision and the review results on average within 6-8 weeks after manuscript submission. In the case of a revision, authors are required to revise the manuscript adequately to address the concerns raised in the review reports. A second round of peer review is performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications have been archived and are permanently preserved in the German National Library. These publications are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems. Further measures will be taken if necessary. Furthermore, we also encourage our authors to self-archive the versions of their articles published on the RonPub website.

Publication Ethics Statement

In order to ensure publishing quality and the reputation of the journal, it is important that all parties involved in the act of publishing adhere to the standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools such as Anti-Plagiarism, PaperRater and Viper to check the content of manuscripts submitted to our journals against existing publications.

Our journal follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Editor-in-Chief

Fabio Grandi, University of Bologna, Italy

Editors

Mírian Halfeld Ferrari Alves, Université d'Orléans, France

Mithun Balakrishna, Lymba Corporation, USA

Jorge Bernardino, CISUC-Polytechnic Institute of Coimbra, Portugal

Nikos Bikakis, University of Ioannina, Greece

Carlos Pampulim Caldeira, University of Evora, Portugal

Barbara Catania, University of Genova, Italy

En Cheng, University of Akron, USA

Eliseo Clementini, University of L'Aquila, Italy

Jérôme Darmont, Université de Lyon, France

Bipin C. Desai, Concordia University, Canada

Sven Groppe, University of Lübeck, Germany

Le Gruenwald, University of Oklahoma, USA

Giovanna Guerrini, Università di Genova, Italy

Abdessamad Imine, Lorraine University, France

Peiquan Jin, University of Science and Technology of China, China

Verena Kantere, University of Geneva, Switzerland

Carsten Kleiner, University of Applied Sciences and Arts Hanover, Germany

Josep L. Larriba-Pey, DAMA-UPC, Barcelona, Spain

Daniel Lemire, Université du Québec, Canada

Jan Lindström, SkySQL, Finland

Chuan-Ming Liu, National Taipei University of Technology, Taiwan

Pericles Loucopoulos, University of Manchester, United Kingdom

Riccardo Martoglia, University of Modena and Reggio Emilia, Italy

Cédric du Mouza, Conservatoire National des Arts et Métiers, France

Eric Pardede, La Trobe University, Australia

Elaheh Pourabbas, National Research Council, Italy

Ismael Sanz, Universitat Jaume I, Castelló de la Plana, Spain

Klaus-Dieter Schewe, Software Competence Center Hagenberg (SCCH), Austria

Theodoros Tzouramanis, University of the Aegean, Greece

Marco Vieira, University of Coimbra, Portugal

Yingwei Wang, University of Prince Edward Island, Canada

John (Junhu) Wang, Griffith University, Australia

Articles of OJDB


 Open Access 

Quasi-Convex Scoring Functions in Branch-and-Bound Ranked Search

Peter Poensgen, Ralf Möller

Open Journal of Databases (OJDB), 7(1), Pages 1-11, 2020, Downloads: 3232, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-2019092919333113374958 | GNL-LP: 1195986181 | Meta-Data: tex xml rdf rss

Abstract: For answering top-k queries in which attributes are aggregated to a scalar value for defining a ranking, usually the well-known branch-and-bound principle can be used for efficient query answering. Standard algorithms (e.g., Branch-and-Bound Ranked Search, BRS for short) require scoring functions to be monotone, such that a top-k ranking can be computed in sublinear time in the average case. If monotonicity cannot be guaranteed, efficient query answering algorithms are not known. To make branch-and-bound effective with descending or ascending rankings (maximum top-k or minimum top-k queries, respectively), BRS must be able to identify bounds for exploring search partitions, and only for monotonic ranking functions this is trivial. In this paper, we investigate the class of quasi-convex functions used for scoring objects, and we examine how bounds for exploring data partitions can correctly and efficiently be computed for quasi-convex functions in BRS for maximum top-k queries. Given that quasi-convex scoring functions can usefully be employed for ranking objects in a variety of applications, the mathematical findings presented in this paper are indeed significant for practical top-k query answering.
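
For readers unfamiliar with the term, quasi-convexity is a standard notion (this is the textbook definition, not a formulation specific to the paper): a scoring function f is quasi-convex if for all points x, y and all lambda in [0,1]

    f(\lambda x + (1 - \lambda) y) \le \max\{ f(x),\, f(y) \}

The property that makes this class amenable to branch-and-bound is that the maximum of a quasi-convex function over a convex region, such as an R-tree minimum bounding rectangle, is attained at one of the region's vertices; an upper bound for a search partition can therefore be obtained by evaluating f at the corners of its bounding box.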

BibTex:

    @Article{OJDB_2020v7i1n01_Poensgen,
        title     = {Quasi-Convex Scoring Functions in Branch-and-Bound Ranked Search},
        author    = {Peter Poensgen and
                     Ralf M{\"o}ller},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2020},
        volume    = {7},
        number    = {1},
        pages     = {1--11},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2019092919333113374958},
        urn       = {urn:nbn:de:101:1-2019092919333113374958},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {For answering top-k queries in which attributes are aggregated to a scalar value for defining a ranking, usually the well-known branch-and-bound principle can be used for efficient query answering. Standard algorithms (e.g., Branch-and-Bound Ranked Search, BRS for short) require scoring functions to be monotone, such that a top-k ranking can be computed in sublinear time in the average case. If monotonicity cannot be guaranteed, efficient query answering algorithms are not known. To make branch-and-bound effective with descending or ascending rankings (maximum top-k or minimum top-k queries, respectively), BRS must be able to identify bounds for exploring search partitions, and only for monotonic ranking functions this is trivial. In this paper, we investigate the class of quasi-convex functions used for scoring objects, and we examine how bounds for exploring data partitions can correctly and efficiently be computed for quasi-convex functions in BRS for maximum top-k queries. Given that quasi-convex scoring functions can usefully be employed for ranking objects in a variety of applications, the mathematical findings presented in this paper are indeed significant for practical top-k query answering.}
    }

 Open Access 

Branch-and-Bound Ranked Search by Minimizing Parabolic Polynomials

Peter Poensgen, Ralf Möller

Open Journal of Databases (OJDB), 7(1), Pages 12-20, 2020, Downloads: 2582

Full-Text: pdf | URN: urn:nbn:de:101:1-2020080219330783060411 | GNL-LP: 1215016786 | Meta-Data: tex xml rdf rss

Abstract: The Branch-and-Bound Ranked Search algorithm (BRS) is an efficient method for answering top-k queries based on R-trees using multivariate scoring functions. To make BRS effective with ascending rankings, the algorithm must be able to identify lower bounds of the scoring functions for exploring search partitions. This paper presents BRS supporting parabolic polynomials. These functions are common to minimize combined scores over different attributes and cover a variety of applications. To the best of our knowledge the problem to develop an algorithm for computing lower bounds for the BRS method has not been well addressed yet.
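
To sketch why lower bounds are computable for this class (standard calculus, assuming a separable parabolic polynomial with positive leading coefficients; the paper's setting may be more general), consider a box-shaped partition with bounds [l_i, u_i] in dimension i:

    f(x) = \sum_i \left( a_i x_i^2 + b_i x_i \right) + c, \quad a_i > 0, \qquad
    \hat{x}_i = \min\!\left( \max\!\left( -\frac{b_i}{2 a_i},\ l_i \right),\ u_i \right)

Clamping each coordinate of the unconstrained minimizer -b_i/(2a_i) to the partition bounds yields the exact minimum f(x^) of f over the box, which can serve as the lower bound for exploring the partition.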

BibTex:

    @Article{OJDB_2020v7i1n02_Poensgen,
        title     = {Branch-and-Bound Ranked Search by Minimizing Parabolic Polynomials},
        author    = {Peter Poensgen and
                     Ralf M{\"o}ller},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2020},
        volume    = {7},
        number    = {1},
        pages     = {12--20},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2020080219330783060411},
        urn       = {urn:nbn:de:101:1-2020080219330783060411},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The Branch-and-Bound Ranked Search algorithm (BRS) is an efficient method for answering top-k queries based on R-trees using multivariate scoring functions. To make BRS effective with ascending rankings, the algorithm must be able to identify lower bounds of the scoring functions for exploring search partitions. This paper presents BRS supporting parabolic polynomials. These functions are common to minimize combined scores over different attributes and cover a variety of applications. To the best of our knowledge the problem to develop an algorithm for computing lower bounds for the BRS method has not been well addressed yet.}
    }

 Open Access 

Special Issue on High-Level Declarative Stream Processing

Patrick Koopmann, Theofilos Mailis, Danh Le Phuoc

Open Journal of Databases (OJDB), 6(1), Pages 1-4, 2019, Downloads: 4555

Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318332165752519 | GNL-LP: 1174122706 | Meta-Data: tex xml rdf rss

Abstract: Stream processing as an information processing paradigm has been investigated by various research communities within computer science and appears in various applications: realtime analytics, online machine learning, continuous computation, ETL operations, and more. The special issue on "High-Level Declarative Stream Processing" investigates the declarative aspects of stream processing, a topic undergoing intense study. It is published in the Open Journal of Databases (OJDB) (www.ronpub.com/ojdb). This editorial provides an overview of the aims and the scope of the special issue and the accepted papers.

BibTex:

    @Article{OJDB_2019v6i1n01e_HiDeSt2018,
        title     = {Special Issue on High-Level Declarative Stream Processing},
        author    = {Patrick Koopmann and
                     Theofilos Mailis and
                     Danh Le Phuoc},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2019},
        volume    = {6},
        number    = {1},
        pages     = {1--4},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318332165752519},
        urn       = {urn:nbn:de:101:1-2018122318332165752519},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Stream processing as an information processing paradigm has been investigated by various research communities within computer science and appears in various applications: realtime analytics, online machine learning, continuous computation, ETL operations, and more. The special issue on "High-Level Declarative Stream Processing" investigates the declarative aspects of stream processing, a topic undergoing intense study. It is published in the Open Journal of Databases (OJDB) (www.ronpub.com/ojdb). This editorial provides an overview of the aims and the scope of the special issue and the accepted papers.}
    }

 Open Access 

Provenance Management over Linked Data Streams

Qian Liu, Marcin Wylot, Danh Le Phuoc, Manfred Hauswirth

Open Journal of Databases (OJDB), 6(1), Pages 5-20, 2019, Downloads: 5052, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318333313711079 | GNL-LP: 1174122722 | Meta-Data: tex xml rdf rss

Abstract: Provenance describes how results are produced starting from data sources, curation, recovery, intermediate processing, to the final results. Provenance has been applied to solve many problems and in particular to understand how errors are propagated in large-scale environments such as Internet of Things, Smart Cities. In fact, in such environments operations on data are often performed by multiple uncoordinated parties, each potentially introducing or propagating errors. These errors cause uncertainty of the overall data analytics process that is further amplified when many data sources are combined and errors get propagated across multiple parties. The ability to properly identify how such errors influence the results is crucial to assess the quality of the results. This problem becomes even more challenging in the case of Linked Data Streams, where data is dynamic and often incomplete. In this paper, we introduce methods to compute provenance over Linked Data Streams. More specifically, we propose provenance management techniques to compute provenance of continuous queries executed over complete Linked Data streams. Unlike traditional provenance management techniques, which are applied on static data, we focus strictly on the dynamicity and heterogeneity of Linked Data streams. Specifically, in this paper we describe: i) means to deliver a dynamic provenance trace of the results to the user, ii) a system capable to execute queries over dynamic Linked Data and compute provenance of these queries, and iii) an empirical evaluation of our approach using real-world datasets.

BibTex:

    @Article{OJDB_2019v6i1n02_QianLiu,
        title     = {Provenance Management over Linked Data Streams},
        author    = {Qian Liu and
                     Marcin Wylot and
                     Danh Le Phuoc and
                     Manfred Hauswirth},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2019},
        volume    = {6},
        number    = {1},
        pages     = {5--20},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318333313711079},
        urn       = {urn:nbn:de:101:1-2018122318333313711079},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Provenance describes how results are produced starting from data sources, curation, recovery, intermediate processing, to the final results. Provenance has been applied to solve many problems and in particular to understand how errors are propagated in large-scale environments such as Internet of Things, Smart Cities. In fact, in such environments operations on data are often performed by multiple uncoordinated parties, each potentially introducing or propagating errors. These errors cause uncertainty of the overall data analytics process that is further amplified when many data sources are combined and errors get propagated across multiple parties. The ability to properly identify how such errors influence the results is crucial to assess the quality of the results. This problem becomes even more challenging in the case of Linked Data Streams, where data is dynamic and often incomplete. In this paper, we introduce methods to compute provenance over Linked Data Streams. More specifically, we propose provenance management techniques to compute provenance of continuous queries executed over complete Linked Data streams. Unlike traditional provenance management techniques, which are applied on static data, we focus strictly on the dynamicity and heterogeneity of Linked Data streams. Specifically, in this paper we describe: i) means to deliver a dynamic provenance trace of the results to the user, ii) a system capable to execute queries over dynamic Linked Data and compute provenance of these queries, and iii) an empirical evaluation of our approach using real-world datasets.}
    }

 Open Access 

Ontology-Based Data Access to Big Data

Simon Schiff, Ralf Möller, Özgür L. Özcep

Open Journal of Databases (OJDB), 6(1), Pages 21-32, 2019, Downloads: 11114, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318334350985847 | GNL-LP: 1174122730 | Meta-Data: tex xml rdf rss

Abstract: Recent approaches to ontology-based data access (OBDA) have extended the focus from relational database systems to other types of backends such as cluster frameworks in order to cope with the four Vs associated with big data: volume, veracity, variety and velocity (stream processing). The abstraction that an ontology provides is a benefit from the enduser point of view, but it represents a challenge for developers because high-level queries must be transformed into queries executable on the backend level. In this paper, we discuss and evaluate an OBDA system that uses STARQL (Streaming and Temporal ontology Access with a Reasoning-based Query Language) as a high-level query language to access data stored in a SPARK cluster framework. The development of the STARQL-SPARK engine shows that there is a need to provide a homogeneous interface to access both static and temporal as well as streaming data because cluster frameworks usually lack such an interface. The experimental evaluation shows that building a scalable OBDA system that runs with SPARK is more than plug-and-play as one needs to know quite well the data formats and the data organisation in the cluster framework.

BibTex:

    @Article{OJDB_2019v6i1n03_Schiff,
        title     = {Ontology-Based Data Access to Big Data},
        author    = {Simon Schiff and
                     Ralf M{\"o}ller and
                     {\"O}zg{\"u}r L. {\"O}zcep},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2019},
        volume    = {6},
        number    = {1},
        pages     = {21--32},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318334350985847},
        urn       = {urn:nbn:de:101:1-2018122318334350985847},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Recent approaches to ontology-based data access (OBDA) have extended the focus from relational database systems to other types of backends such as cluster frameworks in order to cope with the four Vs associated with big data: volume, veracity, variety and velocity (stream processing). The abstraction that an ontology provides is a benefit from the enduser point of view, but it represents a challenge for developers because high-level queries must be transformed into queries executable on the backend level. In this paper, we discuss and evaluate an OBDA system that uses STARQL (Streaming and Temporal ontology Access with a Reasoning-based Query Language) as a high-level query language to access data stored in a SPARK cluster framework. The development of the STARQL-SPARK engine shows that there is a need to provide a homogeneous interface to access both static and temporal as well as streaming data because cluster frameworks usually lack such an interface. The experimental evaluation shows that building a scalable OBDA system that runs with SPARK is more than plug-and-play as one needs to know quite well the data formats and the data organisation in the cluster framework.}
    }

 Open Access 

Multi-Shot Stream Reasoning in Answer Set Programming: A Preliminary Report

Philipp Obermeier, Javier Romero, Torsten Schaub

Open Journal of Databases (OJDB), 6(1), Pages 33-38, 2019, Downloads: 5127, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318335923776377 | GNL-LP: 1174122757 | Meta-Data: tex xml rdf rss

Abstract: In the past, we presented a first approach for stream reasoning using Answer Set Programming (ASP). At the time, we implemented an exhaustive wrapper for our underlying ASP system, clingo, to enable reasoning over continuous data streams. Nowadays, clingo natively supports multi-shot solving: a technique for processing continuously changing logic programs. In the context of stream reasoning, this allows us to directly implement seamless sliding-window-based reasoning over emerging data. In this paper, we hence present an exhaustive update to our stream reasoning approach that leverages multi-shot solving. We describe the implementation of the stream reasoner's architecture, and illustrate its workflow via job shop scheduling as a running example.
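
To make the multi-shot idea concrete, here is a minimal sliding-window sketch against clingo's Python API. It is not the authors' implementation; the predicate names (high/1, alert/1), the window size and the input stream are illustrative assumptions:

    import clingo  # Python API of the clingo ASP solver

    WINDOW = 3                                # time points kept in the window
    stream = [True, True, False, True, True]  # illustrative boolean readings

    ctl = clingo.Control()
    # Program part grounded once per time point t: the external atom high(t)
    # represents streamed input; alert(t) fires on two consecutive high values.
    ctl.add("step", ["t"], """
        #external high(t).
        alert(t) :- high(t), high(t-1).
    """)

    for t, value in enumerate(stream):
        ctl.ground([("step", [clingo.Number(t)])])  # multi-shot: extend program
        ctl.assign_external(clingo.Function("high", [clingo.Number(t)]), value)
        if t >= WINDOW:  # slide the window: permanently forget old input
            ctl.release_external(clingo.Function("high", [clingo.Number(t - WINDOW)]))
        ctl.solve(on_model=lambda m, t=t: print("t =", t, "->", m))

Grounding each step part extends the running logic program instead of restarting the solver on every update, which is the advantage multi-shot solving brings over the earlier wrapper-based approach described in the abstract.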

BibTex:

    @Article{OJDB_2019v6i1n04_Obermeier,
        title     = {Multi-Shot Stream Reasoning in Answer Set Programming: A Preliminary Report},
        author    = {Philipp Obermeier and
                     Javier Romero and
                     Torsten Schaub},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2019},
        volume    = {6},
        number    = {1},
        pages     = {33--38},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318335923776377},
        urn       = {urn:nbn:de:101:1-2018122318335923776377},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In the past, we presented a first approach for stream reasoning using Answer Set Programming (ASP). At the time, we implemented an exhaustive wrapper for our underlying ASP system, clingo, to enable reasoning over continuous data streams. Nowadays, clingo natively supports multi-shot solving: a technique for processing continuously changing logic programs. In the context of stream reasoning, this allows us to directly implement seamless sliding-window-based reasoning over emerging data. In this paper, we hence present an exhaustive update to our stream reasoning approach that leverages multi-shot solving. We describe the implementation of the stream reasoner's architecture, and illustrate its workflow via job shop scheduling as a running example.}
    }

 Open Access 

Effectiveness of NoSQL and NewSQL Databases in Mobile Network Event Data: Cassandra and ParStream/Kinetic

Petri Kotiranta, Marko Junkkari

Open Journal of Databases (OJDB), 5(1), Pages 1-13, 2018, Downloads: 4683

Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318330989940385 | GNL-LP: 1174122692 | Meta-Data: tex xml rdf rss

Abstract: Continuously growing amount of data has inspired seeking more and more efficient database solutions for storing and manipulating data. In big data sets, NoSQL databases have been established as alternatives for traditional SQL databases. The effectiveness of these databases has been widely tested, but the tests focused only on key-value data that is structurally very simple. Many application domains, such as telecommunication, involve more complex data structures. A huge amount of Mobile Network Event (MNE) data is produced by an increasing number of mobile and ubiquitous applications. MNE data is structurally predetermined and typically contains a large number of columns. Applications that handle MNE data are usually insert intensive, as a huge amount of data are generated during rush hours. NoSQL provides high scalability and its column family stores suit MNE data well, but NoSQL does not support the ACID features of the traditional relational databases. NewSQL is a new kind of database, which provides the high scalability of NoSQL while still maintaining the ACID guarantees of the traditional DBMS. In this paper, we evaluate the MNE data storing and aggregating efficiency of the Cassandra and ParStream/Kinetic databases and aim to find out whether the new kind of database technology can clearly bring performance advantages over legacy database technology and offer an alternative to existing solutions. Among the column family stores of NoSQL, Cassandra is especially a good choice for insert intensive applications due to its way of handling data insertions. ParStream is a novel and advanced NewSQL-like database and has recently been integrated into Cisco Kinetic. The results of the evaluation show that ParStream is much faster than Cassandra when storing and aggregating MNE data and that NewSQL is a very strong alternative to existing database solutions for insert intensive applications.

BibTex:

    @Article{OJDB_2018v5i1n01_Kotiranta,
        title     = {Effectiveness of NoSQL and NewSQL Databases in Mobile Network Event Data: Cassandra and ParStream/Kinetic},
        author    = {Petri Kotiranta and
                     Marko Junkkari},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2018},
        volume    = {5},
        number    = {1},
        pages     = {1--13},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318330989940385},
        urn       = {urn:nbn:de:101:1-2018122318330989940385},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Continuously growing amount of data has inspired seeking more and more efficient database solutions for storing and manipulating data. In big data sets, NoSQL databases have been established as alternatives for traditional SQL databases. The effectiveness of these databases has been widely tested, but the tests focused only on key-value data that is structurally very simple. Many application domains, such as telecommunication, involve more complex data structures. A huge amount of Mobile Network Event (MNE) data is produced by an increasing number of mobile and ubiquitous applications. MNE data is structurally predetermined and typically contains a large number of columns. Applications that handle MNE data are usually insert intensive, as a huge amount of data are generated during rush hours. NoSQL provides high scalability and its column family stores suit MNE data well, but NoSQL does not support the ACID features of the traditional relational databases. NewSQL is a new kind of database, which provides the high scalability of NoSQL while still maintaining the ACID guarantees of the traditional DBMS. In this paper, we evaluate the MNE data storing and aggregating efficiency of the Cassandra and ParStream/Kinetic databases and aim to find out whether the new kind of database technology can clearly bring performance advantages over legacy database technology and offer an alternative to existing solutions. Among the column family stores of NoSQL, Cassandra is especially a good choice for insert intensive applications due to its way of handling data insertions. ParStream is a novel and advanced NewSQL-like database and has recently been integrated into Cisco Kinetic. The results of the evaluation show that ParStream is much faster than Cassandra when storing and aggregating MNE data and that NewSQL is a very strong alternative to existing database solutions for insert intensive applications.}
    }

 Open Access 

An NVM Aware MariaDB Database System and Associated IO Workload on File Systems

Jan Lindström, Dhananjoy Das, Nick Piggin, Santhosh Konundinya, Torben Mathiasen, Nisha Talagala, Dulcardo Arteaga

Open Journal of Databases (OJDB), 4(1), Pages 1-21, 2017, Downloads: 6812

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194662 | GNL-LP: 1132360927 | Meta-Data: tex xml rdf rss

Abstract: MariaDB is a community-developed fork of the MySQL relational database management system and originally designed and implemented in order to use the traditional spinning disk architecture. With Non-Volatile memory (NVM) technology now in the forefront and main stream for server storage (Data centers), MariaDB addresses the need by adding support for NVM devices and introduces NVM Compression method. NVM Compression is a novel hybrid technique that combines application level compression with flash awareness for optimal performance and storage efficiency. Utilizing new interface primitives exported by Flash Translation Layers (FTLs), we leverage the garbage collection available in flash devices to optimize the capacity management required by compression systems. We implement NVM Compression in the popular MariaDB database and use variants of commonly available POSIX file system interfaces to provide the extended FTL capabilities to the user space application. The experimental results show that the hybrid approach of NVM Compression can improve compression performance by 2-7x, deliver compression performance for flash devices that is within 5% of uncompressed performance, improve storage efficiency by 19% over legacy Row-Compression, reduce data writes by up to 4x when combined with other flash aware techniques such as Atomic Writes, and deliver further advantages in power efficiency and CPU utilization. Various micro benchmark measurement and findings on sparse files call for required improvement in file systems for handling of punch hole operations on files.

BibTex:

    @Article{OJDB_2017v4i1n01_Lindstroem,
        title     = {An NVM Aware MariaDB Database System and Associated IO Workload on File Systems},
        author    = {Jan Lindstr{\"o}m and
                     Dhananjoy Das and
                     Nick Piggin and
                     Santhosh Konundinya and
                     Torben Mathiasen and
                     Nisha Talagala and
                     Dulcardo Arteaga},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {1--21},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194662},
        urn       = {urn:nbn:de:101:1-201705194662},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {MariaDB is a community-developed fork of the MySQL relational database management system and originally designed and implemented in order to use the traditional spinning disk architecture. With Non-Volatile memory (NVM) technology now in the forefront and main stream for server storage (Data centers), MariaDB addresses the need by adding support for NVM devices and introduces NVM Compression method. NVM Compression is a novel hybrid technique that combines application level compression with flash awareness for optimal performance and storage efficiency. Utilizing new interface primitives exported by Flash Translation Layers (FTLs), we leverage the garbage collection available in flash devices to optimize the capacity management required by compression systems. We implement NVM Compression in the popular MariaDB database and use variants of commonly available POSIX file system interfaces to provide the extended FTL capabilities to the user space application. The experimental results show that the hybrid approach of NVM Compression can improve compression performance by 2-7x, deliver compression performance for flash devices that is within 5\% of uncompressed performance, improve storage efficiency by 19\% over legacy Row-Compression, reduce data writes by up to 4x when combined with other flash aware techniques such as Atomic Writes, and deliver further advantages in power efficiency and CPU utilization. Various micro benchmark measurement and findings on sparse files call for required improvement in file systems for handling of punch hole operations on files.}
    }

 Open Access 

Machine Learning on Large Databases: Transforming Hidden Markov Models to SQL Statements

Dennis Marten, Andreas Heuer

Open Journal of Databases (OJDB), 4(1), Pages 22-42, 2017, Downloads: 5691, Citations: 16

Full-Text: pdf | URN: urn:nbn:de:101:1-2017100112181 | GNL-LP: 1140718215 | Meta-Data: tex xml rdf rss

Abstract: Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real life applications has even increased within the last years. Nowadays, it is possible to gather massive amounts of data at any time with comparatively little costs. While this availability of data could be used to develop complex models, its implementation is often narrowed because of limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or use parallelization techniques like the MapReduce framework. Anyhow, these options might be too cost intensive, not suitable, or even too time expensive to learn and realize. Following the premise that developers usually are not SQL experts we would like to discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for a (inter-operator and intra-operator) parallel execution on parallel DBMS as a second step of our research, not being part of this paper.
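
To see what such a transformation has to express, consider the forward recursion at the core of HMM inference (the standard recursion; the paper's concrete translation scheme may differ in detail):

    \alpha_1(j) = \pi_j \, b_j(o_1), \qquad
    \alpha_t(j) = b_j(o_t) \sum_i \alpha_{t-1}(i) \, a_{ij}

where a_{ij} are the transition probabilities, b_j(o_t) the emission probabilities and \pi the initial state distribution. Each step combines the previous alpha values with the transition matrix and sums over i, which is precisely the join-plus-grouped-aggregation pattern that a SQL SELECT with JOIN, GROUP BY and SUM expresses, and which a parallel DBMS can evaluate with its usual operators.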

BibTex:

    @Article{OJDB_2017v4i1n02_Marten,
        title     = {Machine Learning on Large Databases: Transforming Hidden Markov Models to SQL Statements},
        author    = {Dennis Marten and
                     Andreas Heuer},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {22--42},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017100112181},
        urn       = {urn:nbn:de:101:1-2017100112181},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real life applications has even increased within the last years. Nowadays, it is possible to gather massive amounts of data at any time with comparatively little costs. While this availability of data could be used to develop complex models, its implementation is often narrowed because of limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or use parallelization techniques like the MapReduce framework. Anyhow, these options might be too cost intensive, not suitable, or even too time expensive to learn and realize. Following the premise that developers usually are not SQL experts we would like to discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for a (inter-operator and intra-operator) parallel execution on parallel DBMS as a second step of our research, not being part of this paper.}
    }

 Open Access 

High-Dimensional Spatio-Temporal Indexing

Mathias Menninghaus, Martin Breunig, Elke Pulvermüller

Open Journal of Databases (OJDB), 3(1), Pages 1-20, 2016, Downloads: 10050

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194635 | GNL-LP: 1132360897 | Meta-Data: tex xml rdf rss

Abstract: There exist numerous indexing methods which handle either spatio-temporal or high-dimensional data well. However, those indexing methods which handle spatio-temporal data well have certain drawbacks when confronted with high-dimensional data. As the most efficient spatio-temporal indexing methods are based on the R-tree and its variants, they face the well known problems in high-dimensional space. Furthermore, most high-dimensional indexing methods try to reduce the number of dimensions in the data being indexed and compress the information given by all dimensions into few dimensions but are not able to store now-relative data. One of the most efficient high-dimensional indexing methods, the Pyramid Technique, is able to handle high-dimensional point-data only. Nonetheless, we take this technique and extend it such that it is able to handle spatio-temporal data as well. We introduce a technique for querying in this structure with spatio-temporal queries. We compare our technique, the Spatio-Temporal Pyramid Adapter (STPA), to the RST-tree for in-memory and on-disk applications. We show that for high dimensions, the extra query-cost for reducing the dimensionality in the Pyramid Technique is clearly exceeded by the rising query-cost in the RST-tree. Concluding, we address the main drawbacks and advantages of our technique.
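
As background, the classical Pyramid Technique maps a d-dimensional point to a one-dimensional pyramid value that can be indexed by a B+-tree. The following is a minimal sketch of that original mapping (Berchtold et al.); the spatio-temporal adapter proposed in the article builds on it and is not shown here:

    def pyramid_value(v):
        """Map a point v in [0,1]^d to its one-dimensional pyramid value:
        the number i of the pyramid containing v plus the height of v
        within that pyramid (classical Pyramid Technique)."""
        d = len(v)
        # Dimension in which v deviates most from the center point 0.5.
        j_max = max(range(d), key=lambda j: abs(0.5 - v[j]))
        height = abs(0.5 - v[j_max])
        # Pyramids 0..d-1 lie below the center, pyramids d..2d-1 above it.
        i = j_max if v[j_max] < 0.5 else j_max + d
        return i + height

    print(pyramid_value([0.2, 0.9]))  # 3.4: pyramid 3, height 0.4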

BibTex:

    @Article{OJDB_2016v3i1n01_Menninghaus,
        title     = {High-Dimensional Spatio-Temporal Indexing},
        author    = {Mathias Menninghaus and
                     Martin Breunig and
                     Elke Pulverm{\"u}ller},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--20},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194635},
        urn       = {urn:nbn:de:101:1-201705194635},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There exist numerous indexing methods which handle either spatio-temporal or high-dimensional data well. However, those indexing methods which handle spatio-temporal data well have certain drawbacks when confronted with high-dimensional data. As the most efficient spatio-temporal indexing methods are based on the R-tree and its variants, they face the well known problems in high-dimensional space. Furthermore, most high-dimensional indexing methods try to reduce the number of dimensions in the data being indexed and compress the information given by all dimensions into few dimensions but are not able to store now-relative data. One of the most efficient high-dimensional indexing methods, the Pyramid Technique, is able to handle high-dimensional point-data only. Nonetheless, we take this technique and extend it such that it is able to handle spatio-temporal data as well. We introduce a technique for querying in this structure with spatio-temporal queries. We compare our technique, the Spatio-Temporal Pyramid Adapter (STPA), to the RST-tree for in-memory and on-disk applications. We show that for high dimensions, the extra query-cost for reducing the dimensionality in the Pyramid Technique is clearly exceeded by the rising query-cost in the RST-tree. Concluding, we address the main drawbacks and advantages of our technique.}
    }

 Open Access 

Runtime Adaptive Hybrid Query Engine based on FPGAs

Stefan Werner, Dennis Heinrich, Sven Groppe, Christopher Blochwitz, Thilo Pionteck

Open Journal of Databases (OJDB), 3(1), Pages 21-41, 2016, Downloads: 13108, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194645 | GNL-LP: 1132360900 | Meta-Data: tex xml rdf rss

Abstract: This paper presents the fully integrated hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible to cover a wide range of queries at system runtime. Therefore, we introduce a runtime reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which transparently incorporates with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution the host supplies triple data to the FPGA and retrieves the results from the FPGA via PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as the widely known Billion Triples Challenge.

BibTex:

    @Article{OJDB_2016v3i1n02_Werner,
        title     = {Runtime Adaptive Hybrid Query Engine based on FPGAs},
        author    = {Stefan Werner and
                     Dennis Heinrich and
                     Sven Groppe and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {21--41},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194645},
        urn       = {urn:nbn:de:101:1-201705194645},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper presents the fully integrated hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible to cover a wide range of queries at system runtime. Therefore, we introduce a runtime reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which transparently incorporates with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution the host supplies triple data to the FPGA and retrieves the results from the FPGA via PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as the widely known Billion Triples Challenge.	}
    }

 Open Access 

XML-based Execution Plan Format (XEP)

Christoph Koch

Open Journal of Databases (OJDB), 3(1), Pages 42-52, 2016, Downloads: 5728

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194654 | GNL-LP: 1132360919 | Meta-Data: tex xml rdf rss

Abstract: Execution plan analysis is one of the most common SQL tuning tasks performed by relational database administrators and developers. Currently each database management system (DBMS) provides its own execution plan format, which supports system-specific details for execution plans and contains inherent plan operators. This makes SQL tuning a challenging issue. Firstly, administrators and developers often work with more than one DBMS and thus have to rethink among different plan formats. In addition, the analysis tools of execution plans only support single DBMSs, or they have to implement separate logic to handle each specific plan format of different DBMSs. To address these problems, this paper proposes an XML-based Execution Plan format (XEP), aiming to standardize the representation of execution plans of relational DBMSs. Two approaches are developed for transforming DBMS-specific execution plans into XEP format. They have been successfully evaluated for IBM DB2, Oracle Database and Microsoft SQL.

BibTex:

    @Article{OJDB_2016v3i1n03_Koch,
        title     = {XML-based Execution Plan Format (XEP)},
        author    = {Christoph Koch},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {42--52},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194654},
        urn       = {urn:nbn:de:101:1-201705194654},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Execution plan analysis is one of the most common SQL tuning tasks performed by relational database administrators and developers. Currently each database management system (DBMS) provides its own execution plan format, which supports system-specific details for execution plans and contains inherent plan operators. This makes SQL tuning a challenging issue. Firstly, administrators and developers often work with more than one DBMS and thus have to rethink among different plan formats. In addition, the analysis tools of execution plans only support single DBMSs, or they have to implement separate logic to handle each specific plan format of different DBMSs. To address these problems, this paper proposes an XML-based Execution Plan format (XEP), aiming to standardize the representation of execution plans of relational DBMSs. Two approaches are developed for transforming DBMS-specific execution plans into XEP format. They have been successfully evaluated for IBM DB2, Oracle Database and Microsoft SQL.}
    }

 Open Access 

Deriving Bounds on the Size of Spatial Areas

Erik Buchmann, Patrick Erik Bradley, Klemens Böhm

Open Journal of Databases (OJDB), 2(1), Pages 1-16, 2015, Downloads: 11855

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194566 | GNL-LP: 113236082X | Meta-Data: tex xml rdf rss

Abstract: Many application domains such as surveillance, environmental monitoring or sensor-data processing need upper and lower bounds on areas that are covered by a certain feature. For example, a smart-city infrastructure might need bounds on the size of an area polluted with fine-dust, to re-route combustion-engine traffic. Obtaining such bounds is challenging, because in almost any real-world application, information about the region of interest is incomplete, e.g., the database of sensor data contains only a limited number of samples. Existing approaches cannot provide upper and lower bounds or depend on restrictive assumptions, e.g., the area must be convex. Our approach in turn is based on the natural assumption that it is possible to specify a minimal diameter for the feature in question. Given this assumption, we formally derive bounds on the area size, and we provide algorithms that compute these bounds from a database of sensor data, based on geometrical considerations. We evaluate our algorithms both with a real-world case study and with synthetic data.

BibTex:

    @Article{OJDB-2015v2i1n01_Buchmann,
        title     = {Deriving Bounds on the Size of Spatial Areas},
        author    = {Erik Buchmann and
                     Patrick Erik Bradley and
                     Klemens B{\"o}hm},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {1--16},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194566},
        urn       = {urn:nbn:de:101:1-201705194566},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Many application domains such as surveillance, environmental monitoring or sensor-data processing need upper and lower bounds on areas that are covered by a certain feature. For example, a smart-city infrastructure might need bounds on the size of an area polluted with fine-dust, to re-route combustion-engine traffic. Obtaining such bounds is challenging, because in almost any real-world application, information about the region of interest is incomplete, e.g., the database of sensor data contains only a limited number of samples. Existing approaches cannot provide upper and lower bounds or depend on restrictive assumptions, e.g., the area must be convex. Our approach in turn is based on the natural assumption that it is possible to specify a minimal diameter for the feature in question. Given this assumption, we formally derive bounds on the area size, and we provide algorithms that compute these bounds from a database of sensor data, based on geometrical considerations. We evaluate our algorithms both with a real-world case study and with synthetic data.}
    }
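
The authors derive their bounds geometrically from the minimal-diameter assumption; the toy sketch below is not their algorithm, but it illustrates the general idea of bracketing an unknown area from incomplete samples. It assumes, purely for illustration, sensor readings on a regular grid: cells with a positive sample yield a lower bound, and only cells with a negative sample can be excluded from the upper bound.

    # Toy interval bound on an area from incomplete grid samples.
    # Illustrates the bounding idea only; NOT the geometric algorithm
    # from the paper, which does not need a grid assumption.
    CELL_AREA = 1.0  # assumed area of one grid cell

    # True: feature observed in the cell; False: absence observed;
    # unsampled cells are unknown.
    samples = {(0, 0): True, (0, 1): True, (1, 1): False}
    grid = [(x, y) for x in range(3) for y in range(3)]

    lower = CELL_AREA * sum(1 for c in grid if samples.get(c) is True)
    upper = CELL_AREA * sum(1 for c in grid if samples.get(c) is not False)
    print(f"area is between {lower} and {upper}")  # between 2.0 and 8.0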

 Open Access 

Causal Consistent Databases

Mawahib Musa Elbushra, Jan Lindström

Open Journal of Databases (OJDB), 2(1), Pages 17-35, 2015, Downloads: 15204, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194619 | GNL-LP: 1132360870 | Meta-Data: tex xml rdf rss

Abstract: Many consistency criteria have been considered in databases, and causal consistency is one of them. The causal consistency model has gained much attention in recent years because it provides an ordering of related operations. Causal consistency requires that all writes which are potentially causally related must be seen in the same order by all processes. Causal consistency is a weaker criterion than sequential consistency, because there exists an execution which is causally consistent but not sequentially consistent; however, all executions satisfying sequential consistency are also causally consistent. Furthermore, causal consistency supports non-blocking operations, i.e. processes may complete read or write operations without waiting for global computation. Therefore, causal consistency overcomes the primary limit of stronger criteria: communication latency. Additionally, several application semantics, e.g. collaborative tools, are precisely captured by causal consistency. In this paper, we review the state of the art of causally consistent databases, discuss the features, functionalities and applications of the causal consistency model, and systematically compare it with other consistency models. We also discuss the implementation of causally consistent databases and identify limitations of the causal consistency model.

BibTex:

    @Article{OJDB_2015v2i1n02_Elbushra,
        title     = {Causal Consistent Databases},
        author    = {Mawahib Musa Elbushra and
                     Jan Lindstr{\"o}m},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {17--35},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194619},
        urn       = {urn:nbn:de:101:1-201705194619},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Many consistency criteria have been considered in databases, and causal consistency is one of them. The causal consistency model has gained much attention in recent years because it provides an ordering of related operations. Causal consistency requires that all writes which are potentially causally related must be seen in the same order by all processes. Causal consistency is a weaker criterion than sequential consistency, because there exists an execution which is causally consistent but not sequentially consistent; however, all executions satisfying sequential consistency are also causally consistent. Furthermore, causal consistency supports non-blocking operations, i.e. processes may complete read or write operations without waiting for global computation. Therefore, causal consistency overcomes the primary limit of stronger criteria: communication latency. Additionally, several application semantics, e.g. collaborative tools, are precisely captured by causal consistency. In this paper, we review the state of the art of causally consistent databases, discuss the features, functionalities and applications of the causal consistency model, and systematically compare it with other consistency models. We also discuss the implementation of causally consistent databases and identify limitations of the causal consistency model.}
    }
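
Causal ordering of writes is commonly tracked with vector clocks: a write w1 must be applied before w2 on every replica exactly when w1's clock is componentwise less than or equal to w2's and strictly smaller in at least one component. The sketch below is a standard textbook illustration of this test, not an implementation taken from the paper.

    # Standard vector-clock test for causal precedence; a textbook
    # illustration, not a system described in the paper.
    def happened_before(a, b):
        """True if the event with clock a causally precedes the event with clock b."""
        keys = set(a) | set(b)
        return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
                and any(a.get(k, 0) < b.get(k, 0) for k in keys))

    w1 = {"p1": 1}           # write issued on process p1
    w2 = {"p1": 1, "p2": 1}  # write on p2 after it has seen w1
    w3 = {"p3": 1}           # independent write on p3

    print(happened_before(w1, w2))  # True: replicas must apply w1 before w2
    print(happened_before(w1, w3), happened_before(w3, w1))  # False False: concurrent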

 Open Access 

PatTrieSort - External String Sorting based on Patricia Tries

Sven Groppe, Dennis Heinrich, Stefan Werner, Christopher Blochwitz, Thilo Pionteck

Open Journal of Databases (OJDB), 2(1), Pages 36-50, 2015, Downloads: 13072, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194627 | GNL-LP: 1132360889 | Meta-Data: tex xml rdf rss

Presentation: Video

Abstract: External merge sort belongs to the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted there and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase in order to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as patricia tries instead of lists of sorted strings. As we will show in this paper, patricia tries can be efficiently merged, with superior performance in comparison to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, in which our new approach outperforms traditional external merge sort by a factor of 4 for sorting over 4 billion strings of real-world data.

BibTex:

    @Article{OJDB_2015v2i1n03_Groppe,
        title     = {PatTrieSort - External String Sorting based on Patricia Tries},
        author    = {Sven Groppe and
                     Dennis Heinrich and
                     Stefan Werner and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {36--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194627},
        urn       = {urn:nbn:de:101:1-201705194627},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {External merge sort belongs to the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted there and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase in order to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as patricia tries instead of lists of sorted strings. As we will show in this paper, patricia tries can be efficiently merged, with superior performance in comparison to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, in which our new approach outperforms traditional external merge sort by a factor of 4 for sorting over 4 billion strings of real-world data.}
    }
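
The core observation is that an in-memory trie stores shared prefixes only once and emits its strings in sorted order through a depth-first traversal. The sketch below uses a plain (uncompressed) trie for brevity; the paper's patricia tries additionally collapse single-child paths, but the sorted-traversal idea is the same.

    # Plain-trie illustration of sorted run generation; the paper uses
    # space-optimized patricia tries, but the traversal idea is the same.
    def insert(trie, word):
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-string marker

    def sorted_strings(node, prefix=""):
        if "$" in node:
            yield prefix
        for ch in sorted(k for k in node if k != "$"):
            yield from sorted_strings(node[ch], prefix + ch)

    trie = {}
    for s in ["peach", "pea", "apple", "pear"]:
        insert(trie, s)
    print(list(sorted_strings(trie)))  # ['apple', 'pea', 'peach', 'pear']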

 Open Access 

Using Business Intelligence to Improve DBA Productivity

Eric A. Mortensen, En Cheng

Open Journal of Databases (OJDB), 1(2), Pages 1-16, 2014, Downloads: 11426

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194595 | GNL-LP: 1132360854 | Meta-Data: tex xml rdf rss

Abstract: The amount of data collected and used by companies has grown rapidly over the last decade. Business leaders are now using Business Intelligence (BI) systems to make effective business decisions against large amounts of data. The growth in the size of data has been a major challenge for Database Administrators (DBAs). The increase in the number and size of databases, at the speed at which they have grown, has made it difficult for DBA teams to provide the same level of service that the business requires. The methods that DBAs have used in the last several decades can no longer be performed with the efficiency needed over all of the databases they administer. This paper presents the first BI system to improve DBA productivity and to provide important data metrics for Information Technology (IT) managers. The BI system has been well received by Sherwin Williams Database Administrators. It has i) enabled the DBA team to quickly determine which databases need work by a DBA without manually logging into the system; ii) helped the DBA team and its management to easily answer other business users' questions without using DBAs' time to research the issue; and iii) helped the DBA team to provide the business data for unanticipated audit requests.

BibTex:

    @Article{OJDB-v1i2n01_Mortensen,
        title     = {Using Business Intelligence to Improve DBA Productivity},
        author    = {Eric A. Mortensen and
                     En Cheng},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {1--16},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194595},
        urn       = {urn:nbn:de:101:1-201705194595},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The amount of data collected and used by companies has grown rapidly over the last decade. Business leaders are now using Business Intelligence (BI) systems to make effective business decisions against large amounts of data. The growth in the size of data has been a major challenge for Database Administrators (DBAs). The increase in the number and size of databases, at the speed at which they have grown, has made it difficult for DBA teams to provide the same level of service that the business requires. The methods that DBAs have used in the last several decades can no longer be performed with the efficiency needed over all of the databases they administer. This paper presents the first BI system to improve DBA productivity and to provide important data metrics for Information Technology (IT) managers. The BI system has been well received by Sherwin Williams Database Administrators. It has i) enabled the DBA team to quickly determine which databases need work by a DBA without manually logging into the system; ii) helped the DBA team and its management to easily answer other business users' questions without using DBAs' time to research the issue; and iii) helped the DBA team to provide the business data for unanticipated audit requests.}
    }

 Open Access 

Which NoSQL Database? A Performance Overview

Veronika Abramova, Jorge Bernardino, Pedro Furtado

Open Journal of Databases (OJDB), 1(2), Pages 17-24, 2014, Downloads: 28608, Citations: 89

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194607 | GNL-LP: 1132360862 | Meta-Data: tex xml rdf rss

Abstract: NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performance characteristics, and thus it is important to compare them in terms of performance and to verify how the performance is related to the database type. In this paper, we evaluate five of the most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration typical workloads as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.

BibTex:

    @Article{OJDB-v1i2n02_Abramova,
        title     = {Which NoSQL Database? A Performance Overview},
        author    = {Veronika Abramova and
                     Jorge Bernardino and
                     Pedro Furtado},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {17--24},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194607},
        urn       = {urn:nbn:de:101:1-201705194607},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performance characteristics, and thus it is important to compare them in terms of performance and to verify how the performance is related to the database type. In this paper, we evaluate five of the most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration typical workloads as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.}
    }
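
YCSB characterizes workloads as configurable mixes of operations, e.g. 50% reads and 50% updates in its workload A. The sketch below mimics that notion against an in-memory dictionary, just to make the workload idea concrete; it is not YCSB itself, and real measurements would of course target an actual data store.

    # YCSB-flavored read/update mix against an in-memory stand-in;
    # NOT the real YCSB tool, only an illustration of the workload idea.
    import random, time

    store = {f"user{i}": {"field0": "x" * 100} for i in range(10_000)}
    READ_RATIO = 0.5  # 50% reads / 50% updates, as in YCSB workload A

    random.seed(0)
    start = time.perf_counter()
    for _ in range(100_000):
        key = f"user{random.randrange(10_000)}"
        if random.random() < READ_RATIO:
            _ = store[key]                    # read
        else:
            store[key]["field0"] = "y" * 100  # update
    print(f"{100_000 / (time.perf_counter() - start):,.0f} ops/s")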

 Open Access 

Introductory Editorial

Fabio Grandi

Open Journal of Databases (OJDB), 1(1), Pages 1-2, 2014, Downloads: 5098

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194557 | GNL-LP: 113236079X | Meta-Data: tex xml rdf rss

Abstract: The Open Journal of Databases (OJDB) is a new open access journal covering all aspects of database research and technology. In this editorial, the first issue of the journal is presented.

BibTex:

    @Article{OJDB-v1i1n01_Grandi,
        title     = {Introductory Editorial},
        author    = {Fabio Grandi},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {1--2},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194557},
        urn       = {urn:nbn:de:101:1-201705194557},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The Open Journal of Databases (OJDB) is a new open access journal covering all aspects of database research and technology. In this editorial, the first issue of the journal is presented.}
    }

 Open Access 

Designing a Benchmark for the Assessment of Schema Matching Tools

Fabien Duchateau, Zohra Bellahsene

Open Journal of Databases (OJDB), 1(1), Pages 3-25, 2014, Downloads: 10187, Citations: 13

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194573 | GNL-LP: 1132360838 | Meta-Data: tex xml rdf rss

Abstract: Over the years, many schema matching approaches have been developed to discover correspondences between schemas. Although this task is crucial in data integration, its evaluation, both in terms of matching quality and time performance, is still manually performed. Indeed, there is no common platform which gathers a collection of schema matching datasets to fulfil this goal. Another problem deals with the measuring of the post-match effort, a human cost that schema matching approaches aim at reducing. Consequently, we propose XBenchMatch, a schema matching benchmark with available datasets and new measures to evaluate this manual post-match effort and the quality of integrated schemas. We finally report the results obtained by different approaches, namely COMA++, Similarity Flooding and YAM. We show that such a benchmark is required to understand the advantages and failures of schema matching approaches. Therefore, it could help an end-user to select a schema matching tool which covers his/her needs.

BibTex:

    @Article{OJDB-v1i1n02_Duchateau,
        title     = {Designing a Benchmark for the Assessment of Schema Matching Tools},
        author    = {Fabien Duchateau and
                     Zohra Bellahsene},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {3--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194573},
        urn       = {urn:nbn:de:101:1-201705194573},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Over the years, many schema matching approaches have been developed to discover correspondences between schemas. Although this task is crucial in data integration, its evaluation, both in terms of matching quality and time performance, is still manually performed. Indeed, there is no common platform which gathers a collection of schema matching datasets to fulfil this goal. Another problem deals with the measuring of the post-match effort, a human cost that schema matching approaches aim at reducing. Consequently, we propose XBenchMatch, a schema matching benchmark with available datasets and new measures to evaluate this manual post-match effort and the quality of integrated schemas. We finally report the results obtained by different approaches, namely COMA++, Similarity Flooding and YAM. We show that such a benchmark is required to understand the advantages and failures of schema matching approaches. Therefore, it could help an end-user to select a schema matching tool which covers his/her needs.}
    }
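
Matching quality is conventionally scored against a gold standard of correspondences using precision, recall and F-measure; the benchmark's post-match-effort measures go beyond these and are not reproduced in the following small sketch of the conventional scores.

    # Conventional match-quality scores against a gold standard of
    # correspondences; the benchmark's own post-match-effort measures
    # are not reproduced here.
    discovered = {("author", "writer"), ("title", "name"), ("year", "price")}
    gold       = {("author", "writer"), ("title", "name"), ("isbn", "id")}

    tp = len(discovered & gold)               # true positives
    precision = tp / len(discovered)
    recall    = tp / len(gold)
    f_measure = 2 * precision * recall / (precision + recall)
    print(f"P={precision:.2f} R={recall:.2f} F={f_measure:.2f}")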

 Open Access 

Eventual Consistent Databases: State of the Art

Mawahib Musa Elbushra, Jan Lindström

Open Journal of Databases (OJDB), 1(1), Pages 26-41, 2014, Downloads: 19289, Citations: 15

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194582 | GNL-LP: 1132360846 | Meta-Data: tex xml rdf rss

Abstract: One of the challenges of cloud programming is to achieve the right balance between availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly increasing in importance, acceptance and usage in major applications, which need partition tolerance and availability for scalability purposes but sacrifice consistency (CAP theorem). In these environments, the data accessed by users is stored in highly available storage systems, and thus the use of paradigms such as eventual consistency has become more widespread. In this paper, we review the state-of-the-art database systems using eventual consistency, from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency, and identify future research challenges for databases using eventual consistency.

BibTex:

    @Article{OJDB-v1i1n03_Elbushra,
        title     = {Eventual Consistent Databases: State of the Art},
        author    = {Mawahib Musa Elbushra and
                     Jan Lindstr{\"o}m},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {26--41},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194582},
        urn       = {urn:nbn:de:101:1-201705194582},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {One of the challenges of cloud programming is to achieve the right balance between availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly increasing in importance, acceptance and usage in major applications, which need partition tolerance and availability for scalability purposes but sacrifice consistency (CAP theorem). In these environments, the data accessed by users is stored in highly available storage systems, and thus the use of paradigms such as eventual consistency has become more widespread. In this paper, we review the state-of-the-art database systems using eventual consistency, from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency, and identify future research challenges for databases using eventual consistency.}
    }
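
One standard way to obtain convergence without coordination is a state-based CRDT such as a grow-only counter: each replica increments its own slot, merging takes element-wise maxima, and all replicas converge once states have been exchanged. This is a textbook illustration of eventual consistency, not a design taken from the survey.

    # Textbook grow-only counter (G-Counter CRDT): replicas converge
    # after merging states; an illustration, not a surveyed system.
    def merge(a, b):
        """Element-wise maximum of two replica states."""
        return {r: max(a.get(r, 0), b.get(r, 0)) for r in set(a) | set(b)}

    r1 = {"r1": 3}  # replica r1 incremented 3 times
    r2 = {"r2": 2}  # replica r2 incremented 2 times, concurrently

    # After an anti-entropy exchange, both replicas hold the same state:
    assert merge(r1, r2) == merge(r2, r1) == {"r1": 3, "r2": 2}
    print(sum(merge(r1, r2).values()))  # counter value 5 on every replica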

OJDB Publication Fees

All articles published by RonPub are fully open access and available online to readers free of charge. To be able to provide open access journals, RonPub defrays the costs (induced by processing and editing of manuscripts, provision and maintenance of infrastructure, and routine operation and management of journals) by charging a one-time publication fee for each accepted article. In order to ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors from low-income countries. Authors who do not have funds to cover publication fees should submit an application during the submission process. Applications for waivers are examined on a case-by-case basis. The scientific committee members of RonPub are entitled to a partial waiver of the standard publication fee as a reward for their work.

  • Standard publication fee: 338 Euro (excluding tax).
  • Authors from low-income countries: 71% waiver of the standard publication fee (note: the list below is subject to change based on data from the World Bank Group):
    Afghanistan, Bangladesh, Benin, Bhutan, Bolivia (Plurinational State of), Burkina Faso, Burundi, Cambodia, Cameroon, Central African Republic, Chad, Comoros, Congo (Democratic Republic), Côte d'Ivoire, Djibouti, Eritrea, Ethiopia, Gambia, Ghana, Guinea, Guinea-Bissau, Haiti, Honduras, Kenya, Kiribati, Korea (Democratic People’s Republic), Kosovo, Kyrgyz Republic, Lao (People’s Democratic Republic), Lesotho, Liberia, Madagascar, Malawi, Mali, Mauritania, Micronesia (Federated States of), Moldova, Morocco, Mozambique, Myanmar, Nepal, Nicaragua, Niger, Nigeria, Papua New Guinea, Rwanda, Senegal, Sierra Leone, Solomon Islands, Somalia, South Sudan, Sudan, Swaziland, Syrian Arab Republic, São Tomé and Principe, Tajikistan, Tanzania, Timor-Leste, Togo, Uganda, Uzbekistan, Vietnam, West Bank and Gaza Strip, Yemen (Republic), Zambia, Zimbabwe
  • Scientific committee members: 25% waiver of the standard publication fee.
  • Guest editors and reviewers: 25% waiver of the standard publication fee for one year.

Payments are subject to tax. German VAT (value-added tax) at 19% will be charged if applicable. US and Canadian customers need to provide their sales tax number and their certificate of incorporation to be exempt from the VAT charge; European Union customers outside Germany need to provide their VAT number to be exempt. Customers from Germany and from all other countries will be charged VAT. Individuals are not eligible for tax-exempt status.
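
As a worked example of the schedule above (assuming, for illustration only, that a waiver is applied to the net fee and that VAT, where applicable, is charged on the discounted amount; actual invoicing details may differ):

    # Worked example of the fee schedule. Assumes the waiver applies to
    # the net fee and VAT to the discounted amount; invoicing details
    # may differ in practice.
    STANDARD_FEE = 338.00  # Euro, excluding tax
    VAT_RATE = 0.19        # German VAT, charged if applicable

    def fee(waiver=0.0, vat_applies=True):
        net = STANDARD_FEE * (1 - waiver)
        return round(net * (1 + VAT_RATE) if vat_applies else net, 2)

    print(fee(waiver=0.71, vat_applies=False))  # 71% waiver, no VAT: 98.02
    print(fee(waiver=0.25))                     # 25% waiver plus 19% VAT
    print(fee())                                # standard fee plus 19% VAT: 402.22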

Editors and reviewers have no access to payment information. The inability to pay will not influence the decision to publish a paper; decisions to publish are only based on the quality of work and the editorial criteria.

OJDB Indexing

In order to get our publications widely abstracted, indexed and cited, we employ the following methods:

  • Various meta tags are embedded in each publication webpage, including Google Scholar Tags, Dublin Core, EPrints, BE Press and Prism. This enables crawlers, e.g. of Google Scholar, to discover and index our publications.
  • Different metadata export formats are provided for each article, including BibTeX, XML, RSS and RDF. This makes it easy for readers to cite our papers.
  • An OAI-PMH interface is implemented, which facilitates the harvesting of our article metadata by indexing services and databases (see the sketch below).
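
A harvester accesses such an interface with plain HTTP requests using standard OAI-PMH verbs. The sketch below shows the shape of such a request; the endpoint URL is a placeholder, not RonPub's actual interface address.

    # Minimal OAI-PMH harvesting request. Verb and parameter names are
    # standard OAI-PMH; the endpoint URL is a placeholder and must be
    # replaced by a real interface address.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    ENDPOINT = "https://example.org/oai"  # placeholder endpoint
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

    with urlopen(f"{ENDPOINT}?{urlencode(params)}") as resp:
        print(resp.read(200).decode("utf-8"))  # first bytes of the XML reply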

The paper Getting Indexed by Bibliographic Databases in the Area of Computer Science provides a comprehensive survey of indexing formats, techniques and databases. We will also continue our efforts on the dissemination and indexing of our publications.

OJDB has been indexed by the following libraries and bibliographic databases:

Submission to Open Journal of Databases (OJDB)

Please submit your manuscript by carefully filling in the information in the following web form. If there are technical problems, you may also submit your manuscript by sending the information and the manuscript to the editorial office by e-mail.

Submission to Regular or Special Issue

Please specify if the paper is submitted to a regular issue or one of the special issues:

Type of Paper

Please specify the type of your paper here. Please check Aims & Scope if you are not sure which type your paper is.

Title

Please specify the title of your paper here:

Abstract

Please copy & paste the abstract of your paper here:

Authors

Please provide the necessary information about the authors of your submission here. Please mark the contact authors, who will be contacted for the main correspondence.

Conflicts of Interest

Please specify any conflicts of interest here. Conflicts of interest occur, e.g., if the author and the editor are colleagues, work or have worked closely together, or are relatives.

Suggestion of Editors (Optional)

You can suggest editors (with a scientific background in the topics addressed in your submission) to handle your submission. The Editor-in-Chief may consider your suggestion, but may also choose another editor.

Suggestion of Reviewers (Optional)

You can suggest reviewers (with a scientific background in the topics addressed in your submission) to handle your submission. The editor of your submission may consider your suggestion, but may also choose other or additional reviewers in order to guarantee an independent review process.

Paper upload

Please choose your manuscript file for uploading. It should be a PDF file. Please take care that your manuscript is formatted according to the templates provided by RonPub, which are available on our Author Guidelines page. Manuscripts not formatted according to our RonPub templates will be rejected without review!

If you wish the reviewers not to be aware of your names, please submit a blinded manuscript that leaves out identifying information such as the authors' names and affiliations.

For Authors

Manuscript Preparation

Authors should first read the author guidelines of the corresponding journal. Manuscripts must be prepared using the manuscript template of the respective journal, which is available for download as a Word and a LaTeX version on the Author Guidelines page of the corresponding journal. The template describes the format and structure of manuscripts and other information necessary for preparing manuscripts. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts via the submit page of the corresponding journal. Authors first submit their manuscripts in PDF format. Once a manuscript is accepted, the authors then submit the revised manuscript as a PDF file together with a Word file or a LaTeX folder (with all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. If you want to do so, please provide the names, affiliations and e-mail addresses of all suggested reviewers.

Manuscript Status

After submission of a manuscript, authors will receive an e-mail confirming receipt within a few days. Subsequent enquiries concerning the progress of the paper should be made to the corresponding editorial office (see the individual journal webpage for concrete contact information).

Review Procedure

RonPub is committed to enforcing a rigorous peer-review process. All manuscripts submitted for publication in RonPub journals are strictly and thoroughly peer-reviewed. When a manuscript is submitted to a RonPub journal, the editor-in-chief of the journal assigns it to an appropriate editor, who will be in charge of the review process of the manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports are collected. The editor and the editor-in-chief evaluate the manuscript itself as well as the review reports and make an accept/revision/reject decision. Authors will be informed of the decision and the reviewing results within 6-8 weeks on average after the manuscript submission. In the case of a revision, authors are required to perform an adequate revision to address the concerns raised in the evaluation reports. A new round of peer review will be performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications have been archived and are permanently preserved in the German National Library. The publications archived in the German National Library are not only preserved long-term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original systems, which are long obsolete. Further measures will be taken if necessary. Furthermore, we also encourage our authors to self-archive their articles published on the RonPub website.

For Editors

About RonPub

RonPub is an academic publisher of online, open access, peer-reviewed journals. All articles published by RonPub are fully open access and available online to readers free of charge.

RonPub is located in Lübeck, Germany. Lübeck is a beautiful harbour city, 60 kilometers from Hamburg.

Editor-in-Chief Responsibilities

The Editor-in-Chief of each journal is mainly responsible for the scientific quality of the journal and for assisting in the management of the journal. The Editor-in-Chief suggests topics for the journal, invites distinguished scientists to join the editorial board, oversees the editorial process, and makes the final decision whether a paper can be published after peer-review and revisions.

As a reward for the work of an Editor-in-Chief, the Editor-in-Chief will obtain a 25% discount on the standard publication fee for her/his papers (where the Editor-in-Chief is one of the authors) published in any of the RonPub journals.

Editors’ Responsibilities

Editors assist the Editor-in-Chief in maintaining the scientific quality of the journal and in deciding about its topics. Editors are also encouraged to help promote the journal among their peers and at conferences. An editor invites at least three reviewers to review a manuscript, but may also review the manuscript him-/herself. After carefully evaluating the review reports and the manuscript itself, the editor makes a recommendation about the status of the manuscript. The editor's evaluation as well as the review reports are then sent to the Editor-in-Chief, who makes the final decision whether a paper can be published after peer review and revisions.

Communication with Editorial Board members is done primarily by e-mail, and editors are expected to respond within a few working days to any question sent by the editorial office, so that manuscripts can be processed in a timely fashion. If an editor does not respond or cannot process the work in time, or under special circumstances, the editorial office may forward the requests to the publisher or the Editor-in-Chief, who will take the decision directly.

As a reward for the work of editors, an editor will obtain a 25% discount on the standard publication fee for her/his papers (where the editor is one of the authors) published in any of the RonPub journals.

Guest Editors’ Responsibilities

Guest Editors are responsible for the scientific quality of their special issues. Guest Editors will be in charge of inviting papers, of supervising the refereeing process (each paper should be reviewed by at least three reviewers), and of making decisions on the acceptance of manuscripts submitted to their special issue. As with regular issues, all papers accepted by (guest) editors will be sent to the Editor-in-Chief of the journal, who will check the quality of the papers and make the final decision whether a paper can be published.

Our editorial office reserves the right to ask authors directly to revise their paper if there are quality issues, e.g. weak quality of writing or missing information. Authors are required to revise their paper several times if necessary. A paper accepted by its guest editor may still be rejected by the Editor-in-Chief of the journal due to low quality. However, this occurs only when authors do not make a real effort to revise their paper. A high-quality publication needs the common efforts of the journal, reviewers, editors, editor-in-chief and authors.

The Guest Editors are also expected to write an editorial paper for the special issue. As a reward for their work, all guest editors and reviewers working on a special issue will obtain a 25% discount on the standard publication fee for any of their papers published in any of the RonPub journals for one year.

Reviewers’ Responsibility

A reviewer is mainly responsible for reviewing manuscripts, writing review reports and recommending the acceptance or rejection of manuscripts. Reviewers are encouraged to provide input about the quality and management of the journal, and to help promote the journal among their peers and at conferences.

Depending on the quality of their reviewing work, a reviewer has the potential to be promoted to a full editorial board member.

As a reward for the reviewing work, a reviewer will obtain a 25% discount on the standard publication fee for her/his papers (where the reviewer is one of the authors) published in any of the RonPub journals.

Launching New Journals

RonPub always welcomes suggestions for new open access journals in any research area. We are also open to publishing collaborations with research societies. Please send your proposals for new journals or for publishing collaborations to our contact e-mail address.

Publication Criteria

This part provides important information for both the scientific committees and authors.

Ethics Requirement:

For scientific committees: Each editor and reviewer should conduct the evaluation of manuscripts objectively and fairly.
For authors: Authors should present their work honestly without fabrication, falsification, plagiarism or inappropriate data manipulation.

Pre-Check:

In order to filter out fabricated submissions, the editorial office will check the authenticity of the authors and their affiliations before a peer review begins. It is important that the authors communicate with us using the e-mail addresses of their affiliations and provide us with the URLs of their affiliations. To verify the originality of submissions, we use various plagiarism detection tools to check the content of manuscripts submitted to our journal against existing publications. The overall quality of the paper will also be checked, including format, figures, tables, integrity and adequacy. Authors may be required to improve the quality of their paper before it is sent out for review. If a paper is obviously of low quality, it will be rejected directly.

Acceptance Criteria:

The criterion for the acceptance of manuscripts is the quality of the work. This is concretely reflected in the following aspects:

  • Novelty and Practical Impact
  • Technical Soundness
  • Appropriateness and Adequacy of 
    • Literature Review
    • Background Discussion
    • Analysis of Issues
  • Presentation, including 
    • Overall Organization 
    • English 
    • Readability

For a contribution to be acceptable for publication, all of these points should reach at least a medium level.

Guidelines for Rejection:

  • If the work described in the manuscript has been published before, or is under consideration for publication anywhere else, it will not be evaluated.
  • If the work is plagiarized, or contains data falsification or fabrication, it will be rejected.
  • Manuscripts with serious technical flaws will not be accepted.

Call for Journals

Research Online Publishing (RonPub, www.ronpub.com) is a publisher of online, open access and peer-reviewed scientific journals. For more information about RonPub, please visit www.ronpub.com.

RonPub always welcomes suggestions for new journals in any research area. Please send your proposals for journals, along with your curriculum vitae, to our contact e-mail address.

We are also open to publishing collaborations with research societies. Please send your publishing collaboration proposals to our contact e-mail address as well.

Be an Editor / Be a Reviewer

RonPub always welcomes qualified academics and practitioners to join as editors and reviewers. Being an editor or a reviewer is a matter of prestige and personal achievement. Depending on the quality of their reviewing work, a reviewer has the potential to be promoted to a full editorial board member.

If you would like to participate as a scientific committee member of any of the RonPub journals, please send an e-mail with your curriculum vitae to our contact address. We will get back to you as soon as possible. For more information about editors and reviewers, please see the For Editors section above.

Contact RonPub

Location

RonPub UG (haftungsbeschränkt)
Hiddenseering 30
23560 Lübeck
Germany

Comments and Questions

For general inquiries, please contact us by e-mail.

For specific questions on a certain journal, please visit the corresponding journal page to see the email address.