
RonPub -- Research Online Publishing

RonPub (Research Online Publishing) is an academic publisher of online, open access, peer-reviewed journals. RonPub aims to provide a platform for researchers, developers, educators, and technical managers to share and exchange their research results worldwide.

RonPub Is Open Access:

RonPub publishes all of its journals under the open access model, as defined by the Budapest, Berlin, and Bethesda open access declarations:

  • All articles published by RonPub are fully open access and available online to readers free of charge.
  • All open access articles are distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, free of charge, provided that the original work is properly cited.
  • Authors retain all copyright to their work.
  • Authors may also publish the publisher's version of their paper on any repository or website.

RonPub Is Cost-Effective:

To be able to provide open access journals, RonPub defrays its publishing costs by charging a one-time publication fee for each accepted article. One of RonPub's objectives is to provide a fast, high-quality, yet lower-cost publishing service. To ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors who do not have funds to cover publication fees. We also offer a partial fee waiver to editors and reviewers of RonPub as a reward for their work. See the respective journal's webpage for the concrete publication fee.

RonPub Publication Criteria

We are most concerned about the quality, not the quantity, of publications. We only publish high-quality scholarly papers. The Publication Criteria describe the requirements that a contribution must meet to be acceptable for publication in RonPub journals.

RonPub Publication Ethics Statement:

To ensure the publishing quality and the reputation of the publisher, it is important that all parties involved in the act of publishing adhere to the standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools, such as Anti-Plagiarism, PaperRater, and Viper, to check the content of manuscripts submitted to our journals against existing publications.

RonPub follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Long-Term Preservation in the German National Library

Our publications are archived and permanently preserved in the German National Library. Publications archived there are not only preserved long-term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems.

Where is RonPub?

RonPub is a registered corporation in Lübeck, Germany. Lübeck is a beautiful coastal city in northern Germany, about 60 kilometres from Hamburg, with wonderful seaside resorts and sandy beaches as well as good restaurants.

For Authors

Manuscript Preparation

Authors should first read the author guidelines of the corresponding journal. Manuscripts must be prepared using the manuscript template of the respective journal, which is available for download in Word and LaTeX versions on the Author Guidelines page of the corresponding journal. The template describes the format and structure of manuscripts and provides other information necessary for preparing them. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts via the submit page of the corresponding journal, initially in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file together with a Word file or a LaTeX folder (containing all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. Authors who wish to do so should provide the names, affiliations, and e-mail addresses of all suggested reviewers.

Manuscript Status

After submitting a manuscript, authors will receive an e-mail confirming its receipt within a few days. Subsequent enquiries concerning the progress of a paper should be directed to the corresponding editorial office (see the individual journal webpage for contact information).

Review Procedure

RonPub is committed to a rigorous peer-review process. All manuscripts submitted for publication in RonPub journals are strictly and thoroughly peer-reviewed. When a manuscript is submitted to a RonPub journal, the editor-in-chief of the journal assigns it to an appropriate editor, who is in charge of the review process for that manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports are collected. The editor and the editor-in-chief evaluate the manuscript itself together with the review reports and make an accept/revise/reject decision. Authors are informed of the decision and the reviewing results on average within 6-8 weeks of submission. In the case of a revision, authors are required to revise the manuscript adequately to address the concerns raised in the review reports. A new round of peer review is performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications are archived and permanently preserved in the German National Library. Publications archived there are not only preserved long-term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems. Further measures will be taken if necessary. We also encourage our authors to self-archive the articles they have published with RonPub.

For Editors

About RonPub

RonPub is an academic publisher of online, open access, peer-reviewed journals. All articles published by RonPub are fully open access and available online to readers free of charge.

RonPub is located in Lübeck, Germany. Lübeck is a beautiful harbour city, about 60 kilometres from Hamburg.

Editor-in-Chief Responsibilities

The Editor-in-Chief of each journal is mainly responsible for the scientific quality of the journal and for assisting in the management of the journal. The Editor-in-Chief suggests topics for the journal, invites distinguished scientists to join the editorial board, oversees the editorial process, and makes the final decision whether a paper can be published after peer-review and revisions.

As a reward for this work, the Editor-in-Chief will receive a 25% discount on the standard publication fee for her/his papers (on which the Editor-in-Chief is one of the authors) published in any of RonPub's journals.

Editors’ Responsibilities

Editors assist the Editor-in-Chief in ensuring the scientific quality of the journal and in deciding on its topics. Editors are also encouraged to help promote the journal among their peers and at conferences. An editor invites at least three reviewers to review a manuscript, but may also review the manuscript himself/herself. After carefully evaluating the review reports and the manuscript itself, the editor makes a recommendation about the status of the manuscript. The editor's evaluation as well as the review reports are then sent to the Editor-in-Chief, who makes the final decision on whether a paper can be published after peer review and revisions.

Communication with Editorial Board members is done primarily by e-mail, and editors are expected to respond within a few working days to any question sent by the editorial office, so that manuscripts can be processed in a timely fashion. If an editor does not respond or cannot process the work in time, or in other special situations, the editorial office may forward the request to the publisher or the Editor-in-Chief, who will take the decision directly.

As a reward for their work, an editor will receive a 25% discount on the standard publication fee for her/his papers (on which the editor is one of the authors) published in any of RonPub's journals.

Guest Editors’ Responsibilities

Guest Editors are responsible for the scientific quality of their special issues. Guest Editors are in charge of inviting papers, supervising the refereeing process (each paper should be reviewed by at least three reviewers), and making decisions on the acceptance of manuscripts submitted to their special issue. As with regular issues, all papers accepted by the (guest) editors are sent to the Editor-in-Chief of the journal, who will check the quality of the papers and make the final decision on whether a paper can be published.

Our editorial office has the right to ask authors directly to revise their paper if there are quality issues, e.g. weak writing or missing information. Authors may be required to revise their paper several times if necessary. A paper accepted by its guest editor may still be rejected by the Editor-in-Chief of the journal due to low quality. However, this occurs only when authors make no real effort to revise their paper. A high-quality publication requires the joint efforts of the journal, reviewers, editors, the editor-in-chief, and the authors.

Guest Editors are also expected to write an editorial paper for the special issue. As a reward for their work, all guest editors and reviewers working on a special issue will receive, for one year, a 25% discount on the standard publication fee for any of their papers published in any of RonPub's journals.

Reviewers’ Responsibilities

A reviewer is mainly responsible for reviewing manuscripts, writing review reports, and recommending acceptance or rejection of manuscripts. Reviewers are also encouraged to provide input about the quality and management of the journal, and to help promote the journal among their peers and at conferences.

Depending on the quality of their reviewing work, a reviewer may be promoted to full editorial board membership.

As a reward for their reviewing work, a reviewer will receive a 25% discount on the standard publication fee for her/his papers (on which the reviewer is one of the authors) published in any of RonPub's journals.

Launching New Journals

RonPub always welcomes suggestions for new open access journals in any research area. We are also open to publishing collaborations with research societies. Please send your proposals for new journals or for publishing collaborations to our contact e-mail address (displayed on the website when JavaScript is enabled).

Publication Criteria

This section provides important information for both the scientific committees and authors.

Ethics Requirements:

For scientific committees: Each editor and reviewer should conduct the evaluation of manuscripts objectively and fairly.
For authors: Authors should present their work honestly without fabrication, falsification, plagiarism or inappropriate data manipulation.

Pre-Check:

To filter out fabricated submissions, the editorial office checks the authenticity of the authors and their affiliations before peer review begins. It is important that authors communicate with us using the e-mail addresses of their affiliations and provide us with the URLs of their affiliations. To verify the originality of submissions, we use various plagiarism detection tools to check the content of manuscripts submitted to our journals against existing publications. The overall quality of the paper is also checked, including format, figures, tables, integrity, and adequacy. Authors may be required to improve the quality of their paper before it is sent out for review. If a paper is obviously of low quality, it will be rejected directly.

Acceptance Criteria:

The criterion for acceptance of a manuscript is the quality of the work. This is concretely reflected in the following aspects:

  • Novelty and Practical Impact
  • Technical Soundness
  • Appropriateness and Adequacy of 
    • Literature Review
    • Background Discussion
    • Analysis of Issues
  • Presentation, including 
    • Overall Organization 
    • English 
    • Readability

For a contribution to be acceptable for publication, each of these aspects should reach at least an average level.

Guidelines for Rejection:

  • If the work described in the manuscript has been published before, or is under consideration for publication anywhere else, it will not be evaluated.
  • If the work is plagiarized, or contains data falsification or fabrication, it will be rejected.
  • Manuscripts with serious technical flaws will not be accepted.

Call for Journals

Research Online Publishing (RonPub, www.ronpub.com) is a publisher of online, open access, peer-reviewed scientific journals. For more information about RonPub, please visit our website.

RonPub always welcomes suggestions for new journals in any research area. Please send your proposals for journals, along with your curriculum vitae, to our contact e-mail address (displayed on the website when JavaScript is enabled).

We are also open to publishing collaborations with research societies. Please also send your collaboration proposals to our contact e-mail address.

Be an Editor / Be a Reviewer

RonPub always welcomes qualified academics and practitioners to join as editors and reviewers. Being an editor or reviewer is a matter of prestige and personal achievement. Depending on the quality of their reviewing work, a reviewer may be promoted to full editorial board membership.

If you would like to participate as a scientific committee member of any RonPub journal, please send an e-mail with your curriculum vitae to our contact address (displayed on the website when JavaScript is enabled). We will get back to you as soon as possible. More information for editors and reviewers is available on our website.

Contact RonPub

Location

RonPub UG (haftungsbeschränkt)
Hiddenseering 30
23560 Lübeck
Germany

Comments and Questions

For general inquiries, please e-mail our contact address (displayed on the website when JavaScript is enabled).

For specific questions on a certain journal, please visit the corresponding journal page to see the email address.

  1.  Open Access 

    Which NoSQL Database? A Performance Overview

    Veronika Abramova, Jorge Bernardino, Pedro Furtado

    Open Journal of Databases (OJDB), 1(2), Pages 17-24, 2014, Downloads: 28752, Citations: 89

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194607 | GNL-LP: 1132360862 | Meta-Data: tex xml rdf rss

    Abstract: NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performances, and thus it is important to compare them in terms of performance and verify how the performance is related to the database type. In this paper, we evaluate five most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration the typical workloads, as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.

    BibTex:

        @Article{OJDB-v1i2n02_Abramova,
            title     = {Which NoSQL Database? A Performance Overview},
            author    = {Veronika Abramova and
                         Jorge Bernardino and
                         Pedro Furtado},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {17--24},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194607},
            urn       = {urn:nbn:de:101:1-201705194607},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performances, and thus it is important to compare them in terms of performance and verify how the performance is related to the database type. In this paper, we evaluate five most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration the typical workloads, as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.}
        }
    
  2.  Open Access 

    A Comparative Evaluation of Current HTML5 Web Video Implementations

    Martin Hoernig, Andreas Bigontina, Bernd Radig

    Open Journal of Web Technologies (OJWT), 1(2), Pages 1-9, 2014, Downloads: 27347, Citations: 3

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705291328 | GNL-LP: 1133021514 | Meta-Data: tex xml rdf rss

    Abstract: HTML5 video is the upcoming standard for playing videos on the World Wide Web. Although its specification has not been fully adopted yet, all major browsers provide the HTML5 video element and web developers already rely on its functionality. But there are differences between implementations and inaccuracies that trouble the web developer community. To help to improve the current situation we draw a comparison between the most important web browsers. We focus on the event mechanism, since it is essential for interacting with the video element. Furthermore, we compare the seeking accuracy, which is relevant for more specialized applications. Our tests reveal varieties of differences between browser interfaces and show that even simple software solutions may still need third-party plugins in today's browsers.

    BibTex:

        @Article{OJWT-v1i2n01_Hoernig,
            title     = {A Comparative Evaluation of Current HTML5 Web Video Implementations},
            author    = {Martin Hoernig and
                         Andreas Bigontina and
                         Bernd Radig},
            journal   = {Open Journal of Web Technologies (OJWT)},
            issn      = {2199-188X},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {1--9},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291328},
            urn       = {urn:nbn:de:101:1-201705291328},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {HTML5 video is the upcoming standard for playing videos on the World Wide Web. Although its specification has not been fully adopted yet, all major browsers provide the HTML5 video element and web developers already rely on its functionality. But there are differences between implementations and inaccuracies that trouble the web developer community. To help to improve the current situation we draw a comparison between the most important web browsers. We focus on the event mechanism, since it is essential for interacting with the video element. Furthermore, we compare the seeking accuracy, which is relevant for more specialized applications. Our tests reveal varieties of differences between browser interfaces and show that even simple software solutions may still need third-party plugins in today's browsers.}
        }
    
  3.  Open Access 

    Accurate Distance Estimation between Things: A Self-correcting Approach

    Ho-sik Cho, Jianxun Ji, Zili Chen, Hyuncheol Park, Wonsuk Lee

    Open Journal of Internet Of Things (OJIOT), 1(2), Pages 19-27, 2015, Downloads: 26099, Citations: 15

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704244959 | GNL-LP: 1130622525 | Meta-Data: tex xml rdf rss

    Abstract: This paper suggests a method to measure the physical distance between an IoT device (a Thing) and a mobile device (also a Thing) using BLE (Bluetooth Low-Energy profile) interfaces with smaller distance errors. BLE is a well-known technology for the low-power connectivity and suitable for IoT devices as well as for the proximity with the range of several meters. Apple has already adopted the technique and enhanced it to provide subdivided proximity range levels. However, as it is also a variation of RSS-based distance estimation, Apple's iBeacon could only provide immediate, near or far status but not a real and accurate distance. To provide more accurate distance using BLE, this paper introduces additional self-correcting beacon to calibrate the reference distance and mitigate errors from environmental factors. By adopting self-correcting beacon for measuring the distance, the average distance error shows less than 10% within the range of 1.5 meters. Some considerations are presented to extend the range to be able to get more accurate distances.

    BibTex:

        @Article{OJIOT_2015v1i2n03_Cho,
            title     = {Accurate Distance Estimation between Things: A Self-correcting Approach},
            author    = {Ho-sik Cho and
                         Jianxun Ji and
                         Zili Chen and
                         Hyuncheol Park and
                         Wonsuk Lee},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {19--27},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244959},
            urn       = {urn:nbn:de:101:1-201704244959},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper suggests a method to measure the physical distance between an IoT device (a Thing) and a mobile device (also a Thing) using BLE (Bluetooth Low-Energy profile) interfaces with smaller distance errors. BLE is a well-known technology for the low-power connectivity and suitable for IoT devices as well as for the proximity with the range of several meters. Apple has already adopted the technique and enhanced it to provide subdivided proximity range levels. However, as it is also a variation of RSS-based distance estimation, Apple's iBeacon could only provide immediate, near or far status but not a real and accurate distance. To provide more accurate distance using BLE, this paper introduces additional self-correcting beacon to calibrate the reference distance and mitigate errors from environmental factors. By adopting self-correcting beacon for measuring the distance, the average distance error shows less than 10\% within the range of 1.5 meters. Some considerations are presented to extend the range to be able to get more accurate distances.}
        }
    
  4.  Open Access 

    IoT-PMA: Patient Health Monitoring in Medical IoT Ecosystems

    Ariane Ziehn, Christian Mandel, Kathrin Stich, Rolf Dembinski, Karin Hochbaum, Steffen Zeuch, Volker Markl

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 20-31, 2022, Downloads: 25885

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515501651013147 | GNL-LP: 1267368519 | Meta-Data: tex xml rdf rss

    Abstract: The emergence of the Internet of Things (IoT) and the increasing number of cheap medical devices enable geographically distributed healthcare ecosystems of various stakeholders. Such ecosystems contain different application scenarios, e.g., (mobile) patient monitoring using various vital parameters such as heart rate signals. The increasing number of data producers and the transfer of data between medical stakeholders introduce several challenges to the data processing environment, e.g., heterogeneity and distribution of computing and data, low-latency processing, as well as data security and privacy. Current approaches propose cloud-based solutions introducing latency bottlenecks and high risks for companies dealing with sensitive patient data. In this paper, we address the challenges of medical IoT applications by proposing an end-to-end patient monitoring application that includes NebulaStream as the data processing system, an easy-to-use UI that provides ad-hoc views on the available vital parameters, and the integration of ML models to enable predictions on the patients' health state. Using our end-to-end solution, we implement a real-world patient monitoring scenario for hemodynamic and pulmonary decompensations, which are dynamic and life-threatening deteriorations of lung and cardiovascular functions. Our application provides ad-hoc views of the vital parameters and derived decompensation severity scores with continuous updates on the latest data readings to support timely decision-making by physicians. Furthermore, we envision the infrastructure of an IoT ecosystem for a multi-hospital scenario that enables geo-distributed medical participants to contribute data to the application in a secure, private, and timely manner.

    BibTex:

        @Article{OJIOT_2022v8i1n03_Ziehn,
            title     = {IoT-PMA: Patient Health Monitoring in Medical IoT Ecosystems},
            author    = {Ariane Ziehn and
                         Christian Mandel and
                         Kathrin Stich and
                         Rolf Dembinski and
                         Karin Hochbaum and
                         Steffen Zeuch and
                         Volker Markl},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {20--31},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515501651013147},
            urn       = {urn:nbn:de:101:1-2022090515501651013147},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The emergence of the Internet of Things (IoT) and the increasing number of cheap medical devices enable geographically distributed healthcare ecosystems of various stakeholders. Such ecosystems contain different application scenarios, e.g., (mobile) patient monitoring using various vital parameters such as heart rate signals. The increasing number of data producers and the transfer of data between medical stakeholders introduce several challenges to the data processing environment, e.g., heterogeneity and distribution of computing and data, low-latency processing, as well as data security and privacy. Current approaches propose cloud-based solutions introducing latency bottlenecks and high risks for companies dealing with sensitive patient data. In this paper, we address the challenges of medical IoT applications by proposing an end-to-end patient monitoring application that includes NebulaStream as the data processing system, an easy-to-use UI that provides ad-hoc views on the available vital parameters, and the integration of ML models to enable predictions on the patients' health state. Using our end-to-end solution, we implement a real-world patient monitoring scenario for hemodynamic and pulmonary decompensations, which are dynamic and life-threatening deteriorations of lung and cardiovascular functions. Our application provides ad-hoc views of the vital parameters and derived decompensation severity scores with continuous updates on the latest data readings to support timely decision-making by physicians. Furthermore, we envision the infrastructure of an IoT ecosystem for a multi-hospital scenario that enables geo-distributed medical participants to contribute data to the application in a secure, private, and timely manner.}
        }
    
  5.  Open Access 

    A SIEM Architecture for Advanced Anomaly Detection

    Tim Laue, Timo Klecker, Carsten Kleiner, Kai-Oliver Detken

    Open Journal of Big Data (OJBD), 6(1), Pages 26-42, 2022, Downloads: 23764

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022070319330522943055 | GNL-LP: 1261725549 | Meta-Data: tex xml rdf rss

    Abstract: Dramatic increases in the number of cyber security attacks and breaches toward businesses and organizations have been experienced in recent years. The negative impacts of these breaches not only cause the stealing and compromising of sensitive information, malfunctioning of network devices, disruption of everyday operations, financial damage to the attacked business or organization itself, but also may navigate to peer businesses/organizations in the same industry. Therefore, prevention and early detection of these attacks play a significant role in the continuity of operations in IT-dependent organizations. At the same time detection of various types of attacks has become extremely difficult as attacks get more sophisticated, distributed and enabled by Artificial Intelligence (AI). Detection and handling of these attacks require sophisticated intrusion detection systems which run on powerful hardware and are administered by highly experienced security staff. Yet, these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture -within the GLACIER project- that can be realized as an in-house operated Security Information Event Management (SIEM) system for SMEs. It is affordable for SMEs as it is solely based on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require too much management effort. It requires short configuration and learning phases after which it can be self-contained as long as the monitored infrastructure is stable (apart from a reaction to the generated alerts which may be outsourced to a service provider in SMEs, if necessary). 
Another main benefit of this system is to supply data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage. It supports the application of novel methods to detect security-related anomalies. The most distinct feature of this system that differentiates it from similar solutions in the market is its user feedback capability. Detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff who are allowed to give feedback for anomalies. Subsequently, this feedback is utilized to fine-tune the anomaly detection algorithm. In addition, this GUI also provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, while the detection algorithm must be specifically trained for each of these environments individually.

    BibTex:

        @Article{OJBD_2022v6i1n02_Laue,
            title     = {A SIEM Architecture for Advanced Anomaly Detection},
            author    = {Tim Laue and
                         Timo Klecker and
                         Carsten Kleiner and
                         Kai-Oliver Detken},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2022},
            volume    = {6},
            number    = {1},
            pages     = {26--42},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022070319330522943055},
            urn       = {urn:nbn:de:101:1-2022070319330522943055},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Dramatic increases in the number of cyber security attacks and breaches toward businesses and organizations have been experienced in recent years. The negative impacts of these breaches not only cause the stealing and compromising of sensitive information, malfunctioning of network devices, disruption of everyday operations, financial damage to the attacked business or organization itself, but also may navigate to peer businesses/organizations in the same industry. Therefore, prevention and early detection of these attacks play a significant role in the continuity of operations in IT-dependent organizations. At the same time detection of various types of attacks has become extremely difficult as attacks get more sophisticated, distributed and enabled by Artificial Intelligence (AI). Detection and handling of these attacks require sophisticated intrusion detection systems which run on powerful hardware and are administered by highly experienced security staff. Yet, these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture -within the GLACIER project- that can be realized as an in-house operated Security Information Event Management (SIEM) system for SMEs. It is affordable for SMEs as it is solely based on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require too much management effort. It requires short configuration and learning phases after which it can be self-contained as long as the monitored infrastructure is stable (apart from a reaction to the generated alerts which may be outsourced to a service provider in SMEs, if necessary). 
Another main benefit of this system is to supply data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage. It supports the application of novel methods to detect security-related anomalies. The most distinct feature of this system that differentiates it from similar solutions in the market is its user feedback capability. Detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff who are allowed to give feedback for anomalies. Subsequently, this feedback is utilized to fine-tune the anomaly detection algorithm. In addition, this GUI also provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, while the detection algorithm must be specifically trained for each of these environments individually.}
        }
    
  6.  Open Access 

    WoTHive: Enabling Syntactic and Semantic Discovery in the Web of Things

    Andrea Cimmino, Raúl García-Castro

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 54-65, 2022, Downloads: 22698

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515503251402854 | GNL-LP: 126736856X | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: In the last decade the Internet of Things (IoT) has experienced significant growth, and its adoption has become ubiquitous in both business and private life. As a result, several initiatives have emerged to address specific challenges and to provide a standard or specification for them, such as CoRE, Web of Things (WoT), oneM2M, and OGC, among others. One of these challenges revolves around the discovery procedures for finding IoT devices within IoT infrastructures, and whether the discovery performed is semantic or syntactic. This article focuses on the WoT initiative and reports the benefits that Semantic Web technologies bring to discovery in WoT. In particular, one of the implementations of WoT discovery is presented, named WoTHive, which provides syntactic and semantic discovery capabilities. WoTHive is the only candidate implementation that addresses both the syntactic and the semantic functionalities specified for WoT discovery. Several experiments have been carried out to test WoTHive; these indicate that the implementation is technically sound for CRUD operations and that its semantic discovery outperforms its syntactic discovery. Furthermore, an experiment has been carried out to compare whether syntactic discovery is faster than semantic discovery, using the Link Smart implementation for syntactic discovery and WoTHive for semantic discovery.
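The syntactic/semantic distinction at the heart of this abstract can be illustrated with a toy example. This is not WoTHive's implementation; the type hierarchy, device records, and function names below are invented for illustration: syntactic discovery compares literal type strings, while semantic discovery also follows a subclass hierarchy.

```python
# Toy contrast between syntactic and semantic discovery: a query for
# "Sensor" finds nothing syntactically, but semantically it also matches
# things declared with more specific subtypes.
SUBCLASS_OF = {"TemperatureSensor": "Sensor", "HumiditySensor": "Sensor"}

things = [
    {"id": "urn:dev:1", "type": "TemperatureSensor"},
    {"id": "urn:dev:2", "type": "Lamp"},
]

def syntactic(query_type):
    """Exact string match on the declared type."""
    return [t["id"] for t in things if t["type"] == query_type]

def semantic(query_type):
    """Match the declared type or any of its ancestors in the hierarchy."""
    def matches(ty):
        while ty is not None:
            if ty == query_type:
                return True
            ty = SUBCLASS_OF.get(ty)
        return False
    return [t["id"] for t in things if matches(t["type"])]
```

In the actual WoT setting the hierarchy would come from an ontology queried via SPARQL rather than a hard-coded dictionary, which is also why semantic discovery carries extra cost worth benchmarking.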

    BibTex:

        @Article{OJIOT_2022v8i1n06_Cimmino,
            title     = {WoTHive: Enabling Syntactic and Semantic Discovery in the Web of Things},
            author    = {Andrea Cimmino and
                          Ra\'{u}l Garc\'{i}a-Castro},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {54--65},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515503251402854},
            urn       = {urn:nbn:de:101:1-2022090515503251402854},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {In the last decade the Internet of Things (IoT) has experienced a significant growth and its adoption has become ubiquitous in either business and private life. As a result, several initiatives have emerged for addressing specific challenges and provide a standard or a specification to address them; like CoRE, Web of Things (WoT), oneM2M, or OGC among others. One of these challenges revolves around the discovery procedures to find IoT devices within IoT infrastructures and whether the discovery performed is semantic or syntactic. This article focusses on the WoT initiative and reports the benefits that Semantic Web technologies bring to discovery in WoT. In particular, one of the implementations for the WoT discovery is presented, which is named WoTHive and provides syntactic and semantic discovery capabilities. WoTHive is the only candidate implementation that addresses at the same time the syntactic and semantic functionalities specified in the discovery described by WoT. Several experiments have been carried out to test WoTHive; these advocate that the implementation is technically sound for CRUD operations and that its semantic discovery outperforms the syntactic one implemented. Furthermore, an experiment has been carried out to compare whether syntactic discovery is faster than semantic discovery using the Link Smart implementation for syntactic discovery and WoTHive for semantic.}
        }
    
  7.  Open Access 

    Closing the Gap between Web Applications and Desktop Applications by Designing a Novel Desktop-as-a-Service (DaaS) with Seamless Support for Desktop Applications

    Christian Baun, Johannes Bouche

    Open Journal of Cloud Computing (OJCC), 8(1), Pages 1-19, 2023, Downloads: 22389

    Full-Text: pdf | URN: urn:nbn:de:101:1-2023111918330784257276 | GNL-LP: 1310400962 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: An increasing transformation from locally deployed applications to remote web applications has been underway for about two decades. Nevertheless, abandoning established and essential Windows or Linux desktop applications is impossible in many scenarios. This paper describes and evaluates existing Desktop-as-a-Service solutions and the components required for developing a novel DaaS. Based on the conclusions and findings of this analysis, the paper describes a novel approach to a Desktop-as-a-Service solution that enables, as a unique characteristic, the deployment of non-modified Linux and Windows applications. Interaction with these applications takes place entirely through a browser, which is unusual for remote interaction with Windows or Linux desktop applications but brings many benefits from the user's point of view, because installing additional client software or a local virtualization solution becomes unnecessary. A solution as described in this paper has many advantages and offers excellent potential for use in academia, research, industry, and administration.

    BibTex:

        @Article{OJCC_2023v8i1n01_Baun,
            title     = {Closing the Gap between Web Applications and Desktop Applications by Designing a Novel Desktop-as-a-Service (DaaS) with Seamless Support for Desktop Applications},
            author    = {Christian Baun and
                         Johannes Bouche},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2023},
            volume    = {8},
            number    = {1},
            pages     = {1--19},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2023111918330784257276},
            urn       = {urn:nbn:de:101:1-2023111918330784257276},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {An increasing transformation from locally deployed applications to remote web applications has occurred for about two decades. Nevertheless, abandoning established and essential Windows or Linux desktop applications is in many scenarios impossible. This paper describes and evaluates existing Desktop-as-a-Service solutions and the components required for developing a novel DaaS. Based on the conclusions and findings of this analysis, the paper describes a novel approach for a  Desktop-as-a-Service solution that enables, as a unique characteristic, the deployment of non-modified Linux and Windows applications. The interaction with these applications is done entirely through a browser which is unusual for remote interaction with Windows or Linux desktop applications but brings many benefits from the user's point of view because installing any additional client software or local virtualization solution becomes unnecessary. A solution, as described in this paper, has many advantages and offers excellent potential for use in academia, research, industry, and administration.}
        }
    
  8.  Open Access 

    Generating SPARQL-Constraints for Consistency Checking in Industry 4.0 Scenarios

    Simon Paasche, Sven Groppe

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 80-90, 2022, Downloads: 21995

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515504409042979 | GNL-LP: 1267368594 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: A smart manufacturing line consists of multiple connected machines. These machines communicate with each other over a network to solve a common task. Such a scenario can be located in the Internet of Things (IoT) area, where an individual machine can be perceived as an IoT device. Due to machine-to-machine communication, a huge amount of data is generated during manufacturing. This emerging data flow is an essential part of today's industry, as analyzing data helps improve processes and thus product quality. To adequately make use of the collected data, we require a high level of data quality. In our work, we address the issue of inconsistent data in smart manufacturing and present an approach to automatically generate SPARQL queries for validation.
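A minimal sketch of the idea of generating SPARQL queries for consistency checking might look as follows. The prefix, property name, and range constraint are illustrative assumptions, not taken from the paper: the generated query selects every record whose value violates an allowed numeric range.

```python
# Illustrative generator for a SPARQL consistency check: given a property
# and an allowed interval, build a query that returns all violating records.
def constraint_query(prop: str, lo: float, hi: float) -> str:
    """Build a SELECT query returning every measurement outside [lo, hi]."""
    return f"""
PREFIX ex: <http://example.org/manufacturing#>
SELECT ?record ?value WHERE {{
  ?record ex:{prop} ?value .
  FILTER (?value < {lo} || ?value > {hi})
}}
""".strip()

query = constraint_query("solderTemperature", 180.0, 260.0)
print(query)
```

Running such generated queries against the collected manufacturing data then surfaces inconsistent records automatically, without hand-writing one query per constraint.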

    BibTex:

        @Article{OJIOT_2022v8i1n08_Paasche,
            title     = {Generating SPARQL-Constraints for Consistency Checking in Industry 4.0 Scenarios},
            author    = {Simon Paasche and
                         Sven Groppe},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {80--90},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515504409042979},
            urn       = {urn:nbn:de:101:1-2022090515504409042979},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {A smart manufacturing line consists of multiple connected machines. These machines communicate with each other over a network, to solve a common task. Such a scenario can be located in the Internet of Things (IoT) area. An individual machine can be perceived as an IoT device. Due to machine to machine communication, a huge amount of data is generated during manufacturing. This emerging data flow is an essential part of today's industry, as analyzing data helps improving processes and thus, product quality. To adequately make use of the collected data, we require a high level of data quality. In our work, we address the issue of inconsistent data in smart manufacturing and present an approach to automatically generate SPARQL queries for validation.}
        }
    
  9.  Open Access 

    3D Histogram Based Anomaly Detection for Categorical Sensor Data in Internet of Things

    Peng Yuan, Lu-An Tang, Haifeng Chen, Moto Sato, Kevin Woodward

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 32-43, 2022, Downloads: 21679

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515502212539652 | GNL-LP: 1267368527 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Applications of the Internet of Things (IoT) deploy massive numbers of sensors to monitor systems and environments. Anomaly detection on streaming sensor data is an important task for IoT maintenance and operation. In real IoT applications, many sensors report categorical values rather than numerical readings. Unfortunately, most existing anomaly detection methods are designed only for numerical sensor data and cannot be used to monitor categorical sensor data. In this study, we design and develop a 3D Histogram based Categorical Anomaly Detection (HCAD) solution to monitor categorical sensor data in IoT. HCAD constructs the histogram model along three dimensions: categorical value, event duration, and frequency. The histogram models are used to profile the normal working states of IoT devices. HCAD automatically determines the range of normal data and the anomaly threshold. It requires only minimal parameter settings and can be applied to a wide variety of IoT devices. We implement HCAD and integrate it into an online monitoring system. We test the proposed solution on real IoT datasets such as telemetry data from satellite sensors, air quality data from chemical sensors, and transportation data from traffic sensors. The results of extensive experiments show that HCAD achieves higher detection accuracy and efficiency than state-of-the-art methods.
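The core histogram idea (profiling categorical value, event duration, and frequency) can be sketched in a much-simplified form. The bucket width, the fixed minimum count, and all names below are illustrative assumptions rather than HCAD's actual method, which learns its thresholds automatically.

```python
# Simplified sketch of histogram-based categorical anomaly detection:
# count (categorical value, duration bucket) pairs during normal operation,
# then flag events whose bucket was rarely or never seen in training.
from collections import Counter

def duration_bucket(seconds, width=10):
    return int(seconds // width)

def train(events):
    """events: iterable of (category, duration_seconds) from normal operation."""
    return Counter((cat, duration_bucket(dur)) for cat, dur in events)

def is_anomaly(model, event, min_count=2):
    cat, dur = event
    return model[(cat, duration_bucket(dur))] < min_count

normal = [("OPEN", 12), ("OPEN", 14), ("CLOSE", 3), ("CLOSE", 4), ("OPEN", 11)]
model = train(normal)
```

A valve that normally opens for 10-20 seconds would thus be flagged if it suddenly reported an OPEN event lasting 95 seconds, since that (value, duration) bucket never appeared in the normal profile.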

    BibTex:

        @Article{OJIOT_2022v8i1n04_Yuan,
            title     = {3D Histogram Based Anomaly Detection for Categorical Sensor Data in Internet of Things},
            author    = {Peng Yuan and
                         Lu-An Tang and
                         Haifeng Chen and
                         Moto Sato and
                         Kevin Woodward},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {32--43},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515502212539652},
            urn       = {urn:nbn:de:101:1-2022090515502212539652},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The applications of Internet-of-things (IoT) deploy massive number of sensors to monitor the system and environment. Anomaly detection on streaming sensor data is an important task for IoT maintenance and operation. In real IoT applications, many sensors report categorical values rather than numerical readings. Unfortunately, most existing anomaly detection methods are designed only for numerical sensor data. They cannot be used to monitor the categorical sensor data. In this study, we design and develop a 3D Histogram based Categorical Anomaly Detection (HCAD) solution to monitor categorical sensor data in IoT. HCAD constructs the histogram model by three dimensions: categorical value, event duration, and frequency. The histogram models are used to profile normal working states of IoT devices. HCAD automatically determines the range of normal data and anomaly threshold. It only requires very limit parameter setting and can be applied to a wide variety of different IoT devices. We implement HCAD and integrate it into an online monitoring system. We test the proposed solution on real IoT datasets such as telemetry data from satellite sensors, air quality data from chemical sensors, and transportation data from traffic sensors. The results of extensive experiments show that HCAD achieves higher detecting accuracy and efficiency than state-of-the-art methods.}
        }
    
  10.  Open Access 

    Space Cubes: Satellite On-Board Processing of Datacube Queries

    Dimitar Misev, Peter Baumann

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 44-53, 2022, Downloads: 21567

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515502699550987 | GNL-LP: 1267368543 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Datacubes form an accepted cornerstone for analysis- and visualization-ready spatio-temporal data offerings. The increase in user friendliness is achieved by abstracting away from the zillions of files in provider-specific organization. Datacube query languages additionally establish actionable datacubes, enabling users to ask "any query, any time" with zero coding. Typically, however, datacube deployments aim at large-scale data center environments, accommodating Big Data and massive parallel processing capabilities to achieve decent performance. In this contribution, we conversely report on a downscaling experiment. In the ORBiDANSE project, a datacube engine, rasdaman, has been ported to a cubesat, ESA OPS-SAT, and is operational in space. Effectively, the satellite thereby becomes a datacube service offering the standards-based query capabilities of the OGC Web Coverage Processing Service (WCPS) geo datacube analytics language. We believe this will pave the way for on-board ad-hoc processing and filtering of Big EO Data, thereby unleashing it to a larger audience and in substantially shorter time.

    BibTex:

        @Article{OJIOT_2022v8i1n05_Misev,
            title     = {Space Cubes: Satellite On-Board Processing of Datacube Queries},
            author    = {Dimitar Misev and
                         Peter Baumann},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {44--53},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515502699550987},
            urn       = {urn:nbn:de:101:1-2022090515502699550987},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Datacubes form an accepted cornerstone for analysis- and visualization-ready spatio-temporal data offerings. The increase in user friendliness is achieved by abstracting away from the zillions of files in provider-specific organization. Datacube query languages additionally establish actionable datacubes, enabling users to ask "any query, any time" with zero coding. However, typically datacube deployments are aiming at large scale, data center environments accommodating Big Data and massive parallel processing capabilities for achieving decent performance. In this contribution, we conversely report about a downscaling experiment. In the ORBiDANSE project a datacube engine, rasdaman, has been ported to a cubesat, ESA OPS-SAT, and is operational in space. Effectively, the satellite thereby becomes a datacube service offering the standards-based query capabilities of the OGC Web Coverage Processing (WCPS) geo datacube analytics language. We believe this will pave the way for on-board ad-hoc processing and filtering on Big EO Data, thereby unleashing them to a larger audience and in substantially shorter time.}
        }
    
  11.  Open Access 

    Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing

    Karolj Skala, Davor Davidovic, Enis Afgan, Ivan Sovic, Zorislav Sojat

    Open Journal of Cloud Computing (OJCC), 2(1), Pages 16-24, 2015, Downloads: 21375, Citations: 168

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194519 | GNL-LP: 1132360749 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: The paper considers a conceptual approach for organizing the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing, and Dew Computing. In this paper, Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy, positioned as the ground level for the Cloud and Fog Computing paradigms. This vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower costs and improve performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew Computing paradigm will require new programming models that efficiently reduce complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity Computing.

    BibTex:

        @Article{OJCC_2015v2i1n03_Skala,
            title     = {Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing},
            author    = {Karolj Skala and
                         Davor Davidovic and
                         Enis Afgan and
                         Ivan Sovic and
                         Zorislav Sojat},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {16--24},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194519},
            urn       = {urn:nbn:de:101:1-201705194519},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.}
        }
    
  12.  Open Access 

    Contributions to the 6th Workshop on Very Large Internet of Things (VLIoT 2022)

    Sven Groppe, Sanju Tiwari, Shui Yu

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 1-6, 2022, Downloads: 21346

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515500516853039 | GNL-LP: 1267368470 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: The concept of the Internet of Things, where small things become available on the Internet and get connected with each other for the purpose of advanced applications, raises many new open research challenges. These increase further when considering large-scale Internet of Things (IoT) configurations, which are the focus of our Very Large Internet of Things (VLIoT) workshop. The IoT research community is very active, and industry continuously develops novel IoT applications for daily life. Hence, we received many high-quality submissions, from which we accepted seven; they are introduced in this editorial.

    BibTex:

        @Article{OJIOT_2022v8i1n01e_VLIOT2022,
            title     = {Contributions to the 6th Workshop on Very Large Internet of Things (VLIoT 2022)},
            author    = {Sven Groppe and
                         Sanju Tiwari and
                         Shui Yu},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {1--6},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515500516853039},
            urn       = {urn:nbn:de:101:1-2022090515500516853039},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The concept of the Internet of Things, where small things become available in the Internet and get connected with each other for the purpose of advanced applications, raises many new open challenges to research. This even increases when considering large-scale Internet-of-Things (IoT) configurations, which is the focus of our Very Large Internet of Things (VLIoT) workshop. We recognize that the IoT research community is very active and the industry continuously develops novel IoT applications for daily life. Hence we received many high-quality submissions, from which we accepted 7 to be introduced in this editorial.}
        }
    
  13.  Open Access 

    IoT Hub as a Service (HaaS): Data-Oriented Environment for Interactive Smart Spaces

    Ahmed E. Khaled, Rousol Al Goboori

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 66-79, 2022, Downloads: 20566

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515503839588667 | GNL-LP: 1267368578 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Smart devices around us produce a considerable volume of data and interact in a wide range of scenarios that guide the evolution of the Internet of Things (IoT). IoT adds informative and interactive aspects to our living spaces, converting them into smart spaces. However, application development is challenged by fragmentation: the vast number of different IoT things, the formats of reported information, the communication standards, and the techniques used to design applications. This paper introduces IoT Hub as a Service (HaaS), a data-oriented framework that enables communication interoperability between the ecosystem's entities. The framework abstracts things' information, reported data items, and developers' applications into programmable objects referred to as Cards. Cards represent specific entities and interactions of focus, together with meta-data. The framework then indexes the cards' meta-data to enable interoperability, data management, and application development. The framework allows users to create virtual smart spaces (VSS) that define cards' accessibility and visibility. Within a VSS, users can identify accessible data items, things to communicate with, and authorized applications. The framework defines four types of Cards, representing participating IoT things, data items, VSS, and applications, and it enables the development of both synchronous and asynchronous applications. The framework dynamically creates, updates, and links the cards throughout the life-cycle of the different entities. We present the details of the proposed framework and show how it is advantageous and applicable.
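The Card abstraction described in this abstract can be sketched as data objects plus a meta-data index. Everything below (class names, fields, the example cards) is a hypothetical illustration of the idea, not the HaaS framework's actual API.

```python
# Hypothetical sketch of the Card idea: each entity (thing, data item,
# virtual smart space, application) becomes a Card carrying meta-data,
# and an index over that meta-data answers lookups.
from dataclasses import dataclass, field

@dataclass
class Card:
    kind: str                      # "thing" | "data" | "vss" | "app"
    name: str
    meta: dict = field(default_factory=dict)

class CardIndex:
    def __init__(self):
        self.cards = []

    def add(self, card):
        self.cards.append(card)

    def find(self, **criteria):
        """Return cards whose meta-data matches all given key/value pairs."""
        return [c for c in self.cards
                if all(c.meta.get(k) == v for k, v in criteria.items())]

hub = CardIndex()
hub.add(Card("thing", "thermostat", {"room": "lab", "protocol": "mqtt"}))
hub.add(Card("data", "temperature", {"room": "lab", "unit": "celsius"}))
hub.add(Card("thing", "camera", {"room": "lobby", "protocol": "http"}))
```

A VSS would then be one more Card whose meta-data scopes which of these cards a user or application may see, which is what makes accessibility and visibility a query over the same index.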

    BibTex:

        @Article{OJIOT_2022v8i1n07_Khaled,
            title     = {IoT Hub as a Service (HaaS): Data-Oriented Environment for Interactive Smart Spaces},
            author    = {Ahmed E. Khaled and
                         Rousol Al Goboori},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {66--79},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515503839588667},
            urn       = {urn:nbn:de:101:1-2022090515503839588667},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Smart devices around us produce a considerable volume of data and interact in a wide range of scenarios that guide the evolution of the Internet of Things (IoT). IoT adds informative and interactive aspects to our living spaces, converting them into smart spaces. However, the development of applications is challenged by the fragmented nature due to the vast number of different IoT things, the format of reported information, communication standards, and the techniques used to design applications. This paper introduces IoT Hub as a Service (HaaS), a data-oriented framework to enable communication interoperability between the ecosystem's entities. The framework abstracts things' information, reported data items, and developers' applications into programmable objects referred to as Cards. Cards represent specific entities and interactions of focus with meta-data. The framework then indexes cards' meta-data to enable interoperability, data management, and application development. The framework allows users to create virtual smart spaces (VSS) to define cards' accessibility and visibility. Within VSS, users can identify accessible data items, things to communicate, and authorized applications. The framework, in this paper, defines four types of Cards to represent: participating IoT things, data items, VSS, and applications. The proposed framework enables the development of synchronous and asynchronous applications. The framework dynamically creates, updates, and links the cards throughout the life-cycle of the different entities. We present the details of the proposed framework and show how our framework is advantageous and applicable.}
        }
    
  14.  Open Access 

    Development and Evaluation of a Publish/Subscribe IoT Data Sharing Model with LoRaWAN

    Juan Leon, Yacoub Hanna, Kemal Akkaya

    Open Journal of Internet Of Things (OJIOT), 8(1), Pages 7-19, 2022, Downloads: 20425

    Full-Text: pdf | URN: urn:nbn:de:101:1-2022090515501014226277 | GNL-LP: 1267368497 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Publish/subscribe architectures are becoming very common in many IoT environments such as the power grid, manufacturing, and factory automation. In these architectures, many different communication standards and middleware can be supported to ensure interoperability. One widely used publish/subscribe protocol is MQTT, in which a broker acts between publishers and subscribers to relay data on certain topics. While MQTT can easily be set up in cloud environments for research experiments, its large-scale and rapid deployment in IoT environments with a widely used wireless MAC-layer protocol such as LoRaWAN has not been thoroughly tested. Therefore, in this paper we develop and present a simulation framework in NS-3 that offers an MQTT-based publish/subscribe architecture with support for the LoRaWAN communication standard. To this end, we utilize NS-3's LoRaWAN library and integrate it with a broker that connects to other types of publishers and subscribers. We enable unicast capability from the broker to LoRaWAN end-devices while supporting multiple topics at the broker. We tested several scenarios under this IoT architecture to demonstrate its feasibility while assessing the performance at scale.
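The broker-based relay pattern the abstract builds on can be shown with a minimal in-process sketch. This stands in for an MQTT broker purely for illustration; it is not the authors' NS-3 framework and performs no networking. The topic names are invented.

```python
# Minimal publish/subscribe relay: a broker maps topics to subscriber
# callbacks and forwards each published message to matching subscribers.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Relay the message to every subscriber of that topic.
        for cb in self.subscribers[topic]:
            cb(topic, payload)

received = []
broker = Broker()
broker.subscribe("sensors/lora/temp", lambda t, p: received.append((t, p)))
broker.publish("sensors/lora/temp", 21.5)
broker.publish("sensors/lora/humidity", 60)   # no subscriber: message dropped
```

The paper's contribution is precisely the part this sketch omits: carrying such topic-based relaying over LoRaWAN's constrained, wireless MAC layer, including unicast from the broker back to end-devices.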

    BibTex:

        @Article{OJIOT_2022v8i1n02_Leon,
            title     = {Development and Evaluation of a Publish/Subscribe IoT Data Sharing Model with LoRaWAN},
            author    = {Juan Leon and
                         Yacoub Hanna and
                         Kemal Akkaya},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2022},
            volume    = {8},
            number    = {1},
            pages     = {7--19},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022090515501014226277},
            urn       = {urn:nbn:de:101:1-2022090515501014226277},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Publish/subscribe architectures are becoming very common in many IoT environments such as the power grid, manufacturing, and factory automation. In these architectures, many different communication standards and middleware can be supported to ensure interoperability. One of the most widely used publish/subscribe protocols is MQTT, in which a broker mediates between publishers and subscribers to relay data on certain topics. While MQTT can easily be set up in cloud environments to perform research experiments, its large-scale and rapid deployment in IoT environments with a widely used wireless MAC-layer protocol such as LoRaWAN has not been thoroughly tested. Therefore, in this paper we develop and present a simulation framework in NS-3 that offers an MQTT-based publish/subscribe architecture and also supports the LoRaWAN communication standard. To this end, we utilize NS-3's LoRaWAN library and integrate it with a broker that connects to other types of publishers/subscribers. We enable unicast capability from the broker to LoRaWAN end-devices while supporting multiple topics at the broker. We test several scenarios under this IoT architecture to demonstrate its feasibility while assessing its performance at scale.}
        }
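The broker-mediated topic relay that MQTT provides, as described in the abstract above, can be illustrated with a minimal in-memory sketch. This is a toy illustration of the publish/subscribe pattern, not the paper's NS-3 framework or the MQTT wire protocol:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish/subscribe broker relaying messages on topics."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Relay the payload to every subscriber registered for this topic.
        for callback in self.subscribers[topic]:
            callback(topic, payload)

received = []
broker = Broker()
broker.subscribe("sensors/temp", lambda t, p: received.append((t, p)))
broker.publish("sensors/temp", 21.5)   # delivered to the subscriber
broker.publish("sensors/humid", 0.4)   # no subscriber on this topic, dropped
```

A real MQTT deployment adds quality-of-service levels, retained messages, and wildcard topic filters on top of this basic relay.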
    
  15.  Open Access 

    Eventual Consistent Databases: State of the Art

    Mawahib Musa Elbushra, Jan Lindström

    Open Journal of Databases (OJDB), 1(1), Pages 26-41, 2014, Downloads: 19433, Citations: 15

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194582 | GNL-LP: 1132360846 | Meta-Data: tex xml rdf rss

    Abstract: One of the challenges of cloud programming is to achieve the right balance between availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly growing in importance, acceptance, and usage in major applications, which need partition tolerance and availability for scalability but sacrifice consistency (CAP theorem). In these environments, the data accessed by users is stored in a highly available storage system, and thus the use of paradigms such as eventual consistency has become more widespread. In this paper, we review state-of-the-art database systems using eventual consistency from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency and identify future research challenges for databases using eventual consistency.

    BibTex:

        @Article{OJDB-v1i1n03_Elbushra,
            title     = {Eventual Consistent Databases: State of the Art},
            author    = {Mawahib Musa Elbushra and
                         Jan Lindstr{\"o}m},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {26--41},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194582},
            urn       = {urn:nbn:de:101:1-201705194582},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {One of the challenges of cloud programming is to achieve the right balance between availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly growing in importance, acceptance, and usage in major applications, which need partition tolerance and availability for scalability but sacrifice consistency (CAP theorem). In these environments, the data accessed by users is stored in a highly available storage system, and thus the use of paradigms such as eventual consistency has become more widespread. In this paper, we review state-of-the-art database systems using eventual consistency from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency and identify future research challenges for databases using eventual consistency.}
        }
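Eventual consistency as reviewed above can be illustrated with a toy last-writer-wins register: replicas accept writes independently and converge once they exchange state. This is a minimal sketch of one common realization, not a description of any specific system from the survey:

```python
class LWWRegister:
    """Toy last-writer-wins register, one simple way to realize eventual consistency."""
    def __init__(self):
        self.value = None
        self.timestamp = 0

    def write(self, value, timestamp):
        # Keep the write with the highest timestamp ("last writer wins").
        if timestamp > self.timestamp:
            self.value, self.timestamp = value, timestamp

    def merge(self, other):
        # Anti-entropy step: adopt the other replica's state if it is newer.
        self.write(other.value, other.timestamp)

a, b = LWWRegister(), LWWRegister()
a.write("x", timestamp=1)   # client writes to replica a
b.write("y", timestamp=2)   # concurrent client writes to replica b
a.merge(b); b.merge(a)      # replicas exchange state in the background
assert a.value == b.value == "y"   # replicas have converged
```

Between the two writes and the merge, readers of `a` and `b` observe different values; availability is preserved at the cost of temporary inconsistency, exactly the trade-off the CAP theorem describes.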
    
  16.  Open Access 

    Criteria of Successful IT Projects from Management's Perspective

    Mark Harwardt

    Open Journal of Information Systems (OJIS), 3(1), Pages 29-54, 2016, Downloads: 18768, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194797 | GNL-LP: 1132361133 | Meta-Data: tex xml rdf rss

    Abstract: The aim of this paper is to compile a model of IT project success from management's perspective. To this end, a qualitative research approach is taken, interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey yields fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria management considers especially important and which ones are missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategic criteria, such as the value of the project, the customer perspective, or the impact on the organization, have meanwhile caught up or pulled even.

    BibTex:

        @Article{OJIS_2016v3i1n02_Harwardt,
            title     = {Criteria of Successful IT Projects from Management's Perspective},
            author    = {Mark Harwardt},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {29--54},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194797},
            urn       = {urn:nbn:de:101:1-201705194797},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The aim of this paper is to compile a model of IT project success from management's perspective. To this end, a qualitative research approach is taken, interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey yields fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria management considers especially important and which ones are missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategic criteria, such as the value of the project, the customer perspective, or the impact on the organization, have meanwhile caught up or pulled even.}
        }
    
  17.  Open Access 

    Causal Consistent Databases

    Mawahib Musa Elbushra, Jan Lindström

    Open Journal of Databases (OJDB), 2(1), Pages 17-35, 2015, Downloads: 15305, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194619 | GNL-LP: 1132360870 | Meta-Data: tex xml rdf rss

    Abstract: Many consistency criteria have been considered in databases, and causal consistency is one of them. The causal consistency model has gained much attention in recent years because it provides an ordering of related operations. Causal consistency requires that all writes which are potentially causally related must be seen in the same order by all processes. Causal consistency is a weaker criterion than sequential consistency: there exists an execution which is causally consistent but not sequentially consistent, whereas all executions satisfying sequential consistency are also causally consistent. Furthermore, causal consistency supports non-blocking operations, i.e., processes may complete read or write operations without waiting for a global computation. Therefore, causal consistency overcomes the primary limit of stronger criteria: communication latency. Additionally, several application semantics, e.g. collaborative tools, are precisely captured by causal consistency. In this paper, we review the state of the art of causally consistent databases, discuss the features, functionalities, and applications of the causal consistency model, and systematically compare it with other consistency models. We also discuss the implementation of causally consistent databases and identify limitations of the causal consistency model.

    BibTex:

        @Article{OJDB_2015v2i1n02_Elbushra,
            title     = {Causal Consistent Databases},
            author    = {Mawahib Musa Elbushra and
                         Jan Lindstr{\"o}m},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {17--35},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194619},
            urn       = {urn:nbn:de:101:1-201705194619},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Many consistency criteria have been considered in databases, and causal consistency is one of them. The causal consistency model has gained much attention in recent years because it provides an ordering of related operations. Causal consistency requires that all writes which are potentially causally related must be seen in the same order by all processes. Causal consistency is a weaker criterion than sequential consistency: there exists an execution which is causally consistent but not sequentially consistent, whereas all executions satisfying sequential consistency are also causally consistent. Furthermore, causal consistency supports non-blocking operations, i.e., processes may complete read or write operations without waiting for a global computation. Therefore, causal consistency overcomes the primary limit of stronger criteria: communication latency. Additionally, several application semantics, e.g. collaborative tools, are precisely captured by causal consistency. In this paper, we review the state of the art of causally consistent databases, discuss the features, functionalities, and applications of the causal consistency model, and systematically compare it with other consistency models. We also discuss the implementation of causally consistent databases and identify limitations of the causal consistency model.}
        }
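The requirement above, that potentially causally related writes be seen in the same order by all processes, is commonly tracked with vector clocks: a write `a` causally precedes `b` iff `a`'s clock is componentwise less than or equal to `b`'s and they differ; otherwise the writes are concurrent and may be applied in any order. A minimal sketch (vector clocks are a standard mechanism, not necessarily the one used by every system the paper surveys):

```python
def happened_before(a, b):
    # a causally precedes b iff a <= b componentwise and a != b.
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    # Neither write causally precedes the other.
    return not happened_before(a, b) and not happened_before(b, a)

w1 = (1, 0)  # write at process 0
w2 = (1, 1)  # write at process 1 after receiving w1: must follow w1 everywhere
w3 = (2, 0)  # second write at process 0 without having seen w2
```

Here every replica must apply `w1` before `w2`, while `w2` and `w3` are concurrent, so replicas are free to order them differently without violating causal consistency.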
    
  18.  Open Access 

    Developing Knowledge Models of Social Media: A Case Study on LinkedIn

    Jinwu Li, Vincent Wade, Melike Sah

    Open Journal of Semantic Web (OJSW), 1(2), Pages 1-24, 2014, Downloads: 14366

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194841 | GNL-LP: 1132361206 | Meta-Data: tex xml rdf rss

    Abstract: User Generated Content (UGC) exchanged via large social networks is considered a very important knowledge source about all aspects of social engagement (e.g. interests, events, personal information, personal preferences, social experience, skills, etc.). However, this data is inherently unstructured or semi-structured. In this paper, we describe the results of a case study on LinkedIn Ireland public profiles. The study investigated how the available knowledge could be harvested from LinkedIn in a novel way by developing and applying a reusable knowledge model using linked open data vocabularies and the Semantic Web. In addition, the paper discusses the crawling and data normalisation strategies that we developed so that high-quality metadata could be extracted from the LinkedIn public profiles. Apart from the search engine in LinkedIn.com itself, there are no well-known publicly available endpoints that allow users to query knowledge concerning the interests of individuals on LinkedIn. In particular, we present a system that extracts and converts information from raw web pages of LinkedIn public profiles into a machine-readable, interoperable format using data mining and Semantic Web technologies. The outcomes of our research can be summarized as follows: (1) a reusable knowledge model which can represent LinkedIn public user and company profiles using linked data vocabularies and structured data, (2) a public SPARQL endpoint to access structured data about Irish industry and public profiles, and (3) a scalable data crawling strategy and a mashup-based data normalisation approach. The data mining and knowledge representation approaches proposed in this paper are evaluated in four ways: (1) We evaluate metadata quality using automated techniques, such as data completeness and data linkage. (2) Data accuracy is evaluated via user studies; in particular, accuracy is evaluated by comparing manually entered metadata fields with the metadata that was automatically extracted. (3) User-perceived metadata quality is measured by asking users to rate the automatically extracted metadata in user studies. (4) Finally, the paper discusses how well the extracted metadata suits a user interface design. Overall, the evaluations show that the extracted metadata is of high quality and meets the requirements of a data visualisation user interface.

    BibTex:

        @Article{OJSW-v1i2n01_Li,
            title     = {Developing Knowledge Models of Social Media: A Case Study on LinkedIn},
            author    = {Jinwu Li and
                         Vincent Wade and
                         Melike Sah},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {1--24},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194841},
            urn       = {urn:nbn:de:101:1-201705194841},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {User Generated Content (UGC) exchanged via large social networks is considered a very important knowledge source about all aspects of social engagement (e.g. interests, events, personal information, personal preferences, social experience, skills, etc.). However, this data is inherently unstructured or semi-structured. In this paper, we describe the results of a case study on LinkedIn Ireland public profiles. The study investigated how the available knowledge could be harvested from LinkedIn in a novel way by developing and applying a reusable knowledge model using linked open data vocabularies and the Semantic Web. In addition, the paper discusses the crawling and data normalisation strategies that we developed so that high-quality metadata could be extracted from the LinkedIn public profiles. Apart from the search engine in LinkedIn.com itself, there are no well-known publicly available endpoints that allow users to query knowledge concerning the interests of individuals on LinkedIn. In particular, we present a system that extracts and converts information from raw web pages of LinkedIn public profiles into a machine-readable, interoperable format using data mining and Semantic Web technologies. The outcomes of our research can be summarized as follows: (1) a reusable knowledge model which can represent LinkedIn public user and company profiles using linked data vocabularies and structured data, (2) a public SPARQL endpoint to access structured data about Irish industry and public profiles, and (3) a scalable data crawling strategy and a mashup-based data normalisation approach. The data mining and knowledge representation approaches proposed in this paper are evaluated in four ways: (1) We evaluate metadata quality using automated techniques, such as data completeness and data linkage. (2) Data accuracy is evaluated via user studies; in particular, accuracy is evaluated by comparing manually entered metadata fields with the metadata that was automatically extracted. (3) User-perceived metadata quality is measured by asking users to rate the automatically extracted metadata in user studies. (4) Finally, the paper discusses how well the extracted metadata suits a user interface design. Overall, the evaluations show that the extracted metadata is of high quality and meets the requirements of a data visualisation user interface.}
        }
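Converting extracted profile fields into a machine-readable, interoperable format, as the system above does, can be sketched by emitting RDF N-Triples with FOAF-style predicates. This is a simplified illustration; the profile URI and field names are hypothetical, not the paper's actual vocabulary or pipeline:

```python
def profile_to_ntriples(subject_uri, profile):
    """Serialize a flat profile dict as RDF N-Triples with FOAF-style predicates."""
    lines = []
    for predicate, value in profile.items():
        # One triple per field: <subject> <predicate> "literal" .
        lines.append(f'<{subject_uri}> <http://xmlns.com/foaf/0.1/{predicate}> "{value}" .')
    return "\n".join(lines)

nt = profile_to_ntriples(
    "http://example.org/profile/jdoe",          # hypothetical profile URI
    {"name": "Jane Doe", "title": "Engineer"},  # fields mined from a public page
)
print(nt)
```

Triples in this form can be loaded into any RDF store and exposed through a SPARQL endpoint, which is what makes the extracted metadata interoperable.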
    
  19.  Open Access 

    The Potential of Printed Electronics and Personal Fabrication in Driving the Internet of Things

    Paulo Rosa, António Câmara, Cristina Gouveia

    Open Journal of Internet Of Things (OJIOT), 1(1), Pages 16-36, 2015, Downloads: 13921, Citations: 43

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704244933 | GNL-LP: 1130621448 | Meta-Data: tex xml rdf rss

    Abstract: In the early nineties, Mark Weiser, a chief scientist at the Xerox Palo Alto Research Center (PARC), wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. Within this vision, computers and other digital technologies are integrated seamlessly into everyday objects and activities, hidden from our senses whenever not used or needed. An important facet of this vision is the interconnectivity of the various physical devices, which creates an Internet of Things. With the advent of Printed Electronics, new ways to link the physical and digital worlds became available. Common printing technologies, such as screen, flexography, and inkjet printing, are now starting to be used not only to mass-produce extremely thin, flexible and cost-effective electronic circuits, but also to introduce electronic functionality into objects where it was previously unavailable. In turn, the growing accessibility of Personal Fabrication tools is leading to the democratization of the creation of technology by enabling end-users to design and produce their own material goods according to their needs. This paper presents a survey of commonly used technologies and foreseen applications in the field of Printed Electronics and Personal Fabrication, with emphasis on the potential to drive the Internet of Things.

    BibTex:

        @Article{OJIOT_2015v1i1n03_Rosa,
            title     = {The Potential of Printed Electronics and Personal Fabrication in Driving the Internet of Things},
            author    = {Paulo Rosa and
                         Ant\'{o}nio C\^{a}mara and
                         Cristina Gouveia},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2015},
            volume    = {1},
            number    = {1},
            pages     = {16--36},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244933},
            urn       = {urn:nbn:de:101:1-201704244933},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {In the early nineties, Mark Weiser, a chief scientist at the Xerox Palo Alto Research Center (PARC), wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. Within this vision, computers and other digital technologies are integrated seamlessly into everyday objects and activities, hidden from our senses whenever not used or needed. An important facet of this vision is the interconnectivity of the various physical devices, which creates an Internet of Things. With the advent of Printed Electronics, new ways to link the physical and digital worlds became available. Common printing technologies, such as screen, flexography, and inkjet printing, are now starting to be used not only to mass-produce extremely thin, flexible and cost-effective electronic circuits, but also to introduce electronic functionality into objects where it was previously unavailable. In turn, the growing accessibility of Personal Fabrication tools is leading to the democratization of the creation of technology by enabling end-users to design and produce their own material goods according to their needs. This paper presents a survey of commonly used technologies and foreseen applications in the field of Printed Electronics and Personal Fabrication, with emphasis on the potential to drive the Internet of Things.}
        }
    
  20.  Open Access 

    P-LUPOSDATE: Using Precomputed Bloom Filters to Speed Up SPARQL Processing in the Cloud

    Sven Groppe, Thomas Kiencke, Stefan Werner, Dennis Heinrich, Marc Stelzner, Le Gruenwald

    Open Journal of Semantic Web (OJSW), 1(2), Pages 25-55, 2014, Downloads: 13920, Citations: 3

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194858 | GNL-LP: 1132361214 | Meta-Data: tex xml rdf rss

    Presentation: Video

    Abstract: Increasingly, data on the Web is stored in the form of Semantic Web data. Because of today's information overload, it becomes very important to store and query these big datasets in a scalable way and hence in a distributed fashion. Cloud Computing offers such a distributed environment with dynamic reallocation of computing and storage resources based on need. In this work, we introduce a scalable distributed Semantic Web database in the Cloud. In order to reduce the number of (unnecessary) intermediate results early, we apply Bloom filters. Instead of computing Bloom filters during query processing, a time-consuming task as it has traditionally been done, we precompute the Bloom filters as much as possible and store them in the indices alongside the data. The experimental results with datasets of up to 1 billion triples show that our approach speeds up query processing significantly and sometimes even reduces the processing time to less than half.

    BibTex:

        @Article{OJSW-v1i2n02_Groppe,
            title     = {P-LUPOSDATE: Using Precomputed Bloom Filters to Speed Up SPARQL Processing in the Cloud},
            author    = {Sven Groppe and
                         Thomas Kiencke and
                         Stefan Werner and
                         Dennis Heinrich and
                         Marc Stelzner and
                         Le Gruenwald},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {25--55},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194858},
            urn       = {urn:nbn:de:101:1-201705194858},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Increasingly, data on the Web is stored in the form of Semantic Web data. Because of today's information overload, it becomes very important to store and query these big datasets in a scalable way and hence in a distributed fashion. Cloud Computing offers such a distributed environment with dynamic reallocation of computing and storage resources based on need. In this work, we introduce a scalable distributed Semantic Web database in the Cloud. In order to reduce the number of (unnecessary) intermediate results early, we apply Bloom filters. Instead of computing Bloom filters during query processing, a time-consuming task as it has traditionally been done, we precompute the Bloom filters as much as possible and store them in the indices alongside the data. The experimental results with datasets of up to 1 billion triples show that our approach speeds up query processing significantly and sometimes even reduces the processing time to less than half.}
        }
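The idea of precomputing Bloom filters to discard intermediate results early can be sketched as follows: build a filter over the join keys held in an index ahead of time, then use it during query processing to drop candidate bindings that cannot possibly join. A Bloom filter never reports a stored key as absent, but may report a small fraction of absent keys as present (false positives). This is a simplified single-machine illustration, not the P-LUPOSDATE implementation:

```python
import hashlib

class BloomFilter:
    """Compact set sketch: no false negatives, tunable false-positive rate."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = 0  # bit array stored as a Python int

    def _positions(self, item):
        # Derive several bit positions per item from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# Precompute a filter over the join keys stored on one side of the index:
stored = BloomFilter()
for subject in ["s1", "s2", "s3"]:
    stored.add(subject)

# During query processing, drop candidate bindings that cannot join,
# before any expensive lookup or network transfer happens:
candidates = ["s2", "s9"]
survivors = [s for s in candidates if stored.might_contain(s)]
```

Because membership tests touch only the precomputed bits, the pruning step avoids the index lookups that make computing filters at query time expensive.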
    
  21.  Open Access 

    Getting Indexed by Bibliographic Databases in the Area of Computer Science

    Arne Kusserow, Sven Groppe

    Open Journal of Web Technologies (OJWT), 1(2), Pages 10-27, 2014, Downloads: 13394, Citations: 2

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705291343 | GNL-LP: 1133021557 | Meta-Data: tex xml rdf rss

    Abstract: Every author and publisher is interested in adding their publications to the widely used bibliographic databases freely accessible on the World Wide Web: this ensures the visibility of their publications and hence of the published research. However, the inclusion requirements of the bibliographic databases are heterogeneous, even on the technical side. This survey paper aims to shed light on the various data formats, protocols and technical requirements for getting indexed by widely used bibliographic databases in the area of computer science, and provides hints for maximal database inclusion. Furthermore, we point out possibilities to utilize the data of bibliographic databases, and describe some personal and institutional research repository systems with special regard to their support for inclusion in bibliographic databases.

    BibTex:

        @Article{OJWT_2014v1i2n02_Kusserow,
            title     = {Getting Indexed by Bibliographic Databases in the Area of Computer Science},
            author    = {Arne Kusserow and
                         Sven Groppe},
            journal   = {Open Journal of Web Technologies (OJWT)},
            issn      = {2199-188X},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {10--27},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291343},
            urn       = {urn:nbn:de:101:1-201705291343},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Every author and publisher is interested in adding their publications to the widely used bibliographic databases freely accessible on the World Wide Web: this ensures the visibility of their publications and hence of the published research. However, the inclusion requirements of the bibliographic databases are heterogeneous, even on the technical side. This survey paper aims to shed light on the various data formats, protocols and technical requirements for getting indexed by widely used bibliographic databases in the area of computer science, and provides hints for maximal database inclusion. Furthermore, we point out possibilities to utilize the data of bibliographic databases, and describe some personal and institutional research repository systems with special regard to their support for inclusion in bibliographic databases.}
        }
    
  22.  Open Access 

    Data Transfers in Hadoop: A Comparative Study

    Ujjal Marjit, Kumar Sharma, Puspendu Mandal

    Open Journal of Big Data (OJBD), 1(2), Pages 34-46, 2015, Downloads: 13344, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194373 | GNL-LP: 1132360536 | Meta-Data: tex xml rdf rss

    Abstract: Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. This framework is used for storing data on large clusters of commodity hardware. Data input to and output from Hadoop is an indispensable part of any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data are highlighted. Moreover, a state-of-the-art comparative study among the various tools is made, which helps decide where to use one tool over another, with emphasis on data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions concerning Big Data transfer when dealing with cloud-based services.

    BibTex:

        @Article{OJBD_2015v1i2n04_UjjalMarjit,
            title     = {Data Transfers in Hadoop: A Comparative Study},
            author    = {Ujjal Marjit and
                         Kumar Sharma and
                         Puspendu Mandal},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {34--46},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194373},
            urn       = {urn:nbn:de:101:1-201705194373},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. This framework is used for storing data on large clusters of commodity hardware. Data input to and output from Hadoop is an indispensable part of any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data are highlighted. Moreover, a state-of-the-art comparative study among the various tools is made, which helps decide where to use one tool over another, with emphasis on data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions concerning Big Data transfer when dealing with cloud-based services.}
        }
    
  23.  Open Access 

    Pattern-sensitive Time-series Anonymization and its Application to Energy-Consumption Data

    Stephan Kessler, Erik Buchmann, Thorben Burghardt, Klemens Böhm

    Open Journal of Information Systems (OJIS), 1(1), Pages 3-22, 2014, Downloads: 13268, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194696 | GNL-LP: 113236096X | Meta-Data: tex xml rdf rss

    Abstract: Time series anonymization is an important problem. One prominent example of time series is energy consumption records, which might reveal details of the daily routine of a household. Existing privacy approaches for time series, e.g., from the field of trajectory anonymization, assume that every single value of a time series contains sensitive information, and they reduce the data quality considerably. In contrast, we consider time series where it is combinations of tuples that represent personal information. We propose (n; l; k)-anonymity, geared to the anonymization of time-series data with minimal information loss, assuming that an adversary may learn a few data points. We propose several heuristics to obtain (n; l; k)-anonymity, and we evaluate our approach with both synthetic and real data. Our experiments confirm that it is sufficient to modify time series only moderately in order to fulfill meaningful privacy requirements.

    BibTex:

        @Article{OJIS-v1i1n02_Kessler,
            title     = {Pattern-sensitive Time-series Anonymization and its Application to Energy-Consumption Data},
            author    = {Stephan Kessler and
                         Erik Buchmann and
                         Thorben Burghardt and
                         Klemens B{\"o}hm},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {3--22},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194696},
            urn       = {urn:nbn:de:101:1-201705194696},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Time-series anonymization is an important problem. Energy-consumption records are a prominent example: they might reveal details of the daily routine of a household. Existing privacy approaches for time series, e.g., from the field of trajectory anonymization, assume that every single value of a time series contains sensitive information, and they severely reduce data quality. In contrast, we consider time series where it is combinations of tuples that represent personal information. We propose (n; l; k)-anonymity, geared to the anonymization of time-series data with minimal information loss, assuming that an adversary may learn a few data points. We propose several heuristics to obtain (n; l; k)-anonymity, and we evaluate our approach with both synthetic and real data. Our experiments confirm that it is sufficient to modify time series only moderately in order to fulfill meaningful privacy requirements.}
        }
    
  24.  Open Access 

    Runtime Adaptive Hybrid Query Engine based on FPGAs

    Stefan Werner, Dennis Heinrich, Sven Groppe, Christopher Blochwitz, Thilo Pionteck

    Open Journal of Databases (OJDB), 3(1), Pages 21-41, 2016, Downloads: 13198, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194645 | GNL-LP: 1132360900 | Meta-Data: tex xml rdf rss

    Abstract: This paper presents a fully integrated, hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible enough to cover a wide range of queries at system runtime. Therefore, we introduce a runtime-reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which integrates transparently with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution, the host supplies triple data to the FPGA and retrieves the results from the FPGA via the PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as on the widely known Billion Triples Challenge.

    BibTex:

        @Article{OJDB_2016v3i1n02_Werner,
            title     = {Runtime Adaptive Hybrid Query Engine based on FPGAs},
            author    = {Stefan Werner and
                         Dennis Heinrich and
                         Sven Groppe and
                         Christopher Blochwitz and
                         Thilo Pionteck},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {21--41},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194645},
            urn       = {urn:nbn:de:101:1-201705194645},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper presents a fully integrated, hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible enough to cover a wide range of queries at system runtime. Therefore, we introduce a runtime-reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which integrates transparently with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution, the host supplies triple data to the FPGA and retrieves the results from the FPGA via the PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as on the widely known Billion Triples Challenge.}
        }
    
  25.  Open Access 

    PatTrieSort - External String Sorting based on Patricia Tries

    Sven Groppe, Dennis Heinrich, Stefan Werner, Christopher Blochwitz, Thilo Pionteck

    Open Journal of Databases (OJDB), 2(1), Pages 36-50, 2015, Downloads: 13179, Citations: 1

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194627 | GNL-LP: 1132360889 | Meta-Data: tex xml rdf rss

    Presentation: Video

    Abstract: External merge sort is among the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase in order to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use Patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as Patricia tries instead of lists of sorted strings. As we show in this paper, Patricia tries can be merged efficiently, with superior performance in comparison to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, where our new approach outperforms traditional external merge sort by a factor of 4 when sorting over 4 billion strings of real-world data.

    BibTex:

        @Article{OJDB_2015v2i1n03_Groppe,
            title     = {PatTrieSort - External String Sorting based on Patricia Tries},
            author    = {Sven Groppe and
                         Dennis Heinrich and
                         Stefan Werner and
                         Christopher Blochwitz and
                         Thilo Pionteck},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {36--50},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194627},
            urn       = {urn:nbn:de:101:1-201705194627},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {External merge sort is among the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase in order to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use Patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as Patricia tries instead of lists of sorted strings. As we show in this paper, Patricia tries can be merged efficiently, with superior performance in comparison to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, where our new approach outperforms traditional external merge sort by a factor of 4 when sorting over 4 billion strings of real-world data.}
        }
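    The abstract above describes trie-based initial-run generation. As a rough, hypothetical sketch of the underlying idea (a plain trie rather than the paper's Patricia trie, and not the authors' code): inserting strings into a trie and traversing its children in character order emits the strings in sorted order, which is exactly what an initial run needs.

    ```python
    # Minimal sketch (not the paper's implementation): a plain trie used to
    # generate a sorted "initial run". A Patricia trie would additionally
    # collapse chains of single-child nodes to store common prefixes more
    # compactly, which is what makes the paper's runs larger per byte of RAM.

    class TrieNode:
        def __init__(self):
            self.children = {}   # char -> TrieNode
            self.count = 0       # how many times the full string was inserted

    def insert(root, s):
        node = root
        for ch in s:
            node = node.children.setdefault(ch, TrieNode())
        node.count += 1

    def sorted_run(root, prefix=""):
        """Depth-first traversal in character order yields strings in sorted order."""
        for _ in range(root.count):
            yield prefix
        for ch in sorted(root.children):
            yield from sorted_run(root.children[ch], prefix + ch)

    root = TrieNode()
    for s in ["banana", "band", "apple", "band", "ape"]:
        insert(root, s)

    print(list(sorted_run(root)))
    # -> ['ape', 'apple', 'banana', 'band', 'band']
    ```

    In a full external sort, each trie filling main memory would be written out as one run, and the runs merged afterwards.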
    
  26.  Open Access 

    A Semantic Question Answering Framework for Large Data Sets

    Marta Tatu, Mithun Balakrishna, Steven Werner, Tatiana Erekhinskaya, Dan Moldovan

    Open Journal of Semantic Web (OJSW), 3(1), Pages 16-31, 2016, Downloads: 13116, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194921 | GNL-LP: 1132361338 | Meta-Data: tex xml rdf rss

    Abstract: Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.

    BibTex:

        @Article{OJSW_2016v3i1n02_Tatu,
            title     = {A Semantic Question Answering Framework for Large Data Sets},
            author    = {Marta Tatu and
                         Mithun Balakrishna and
                         Steven Werner and
                         Tatiana Erekhinskaya and
                         Dan Moldovan},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {16--31},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194921},
            urn       = {urn:nbn:de:101:1-201705194921},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.}
        }
    
  27.  Open Access 

    Semantic Blockchain to Improve Scalability in the Internet of Things

    Michele Ruta, Floriano Scioscia, Saverio Ieva, Giovanna Capurso, Eugenio Di Sciascio

    Open Journal of Internet Of Things (OJIOT), 3(1), Pages 46-61, 2017, Downloads: 13080, Citations: 48

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613488 | GNL-LP: 1137820225 | Meta-Data: tex xml rdf rss

    Abstract: Scarce computational and memory resources are a well-known problem for the IoT, whose intrinsic volatility makes complex applications unfeasible. Noteworthy efforts to overcome this unpredictability (particularly at large scale) integrate Knowledge Representation technologies to build the so-called Semantic Web of Things (SWoT). In spite of the advanced discovery features they allow, transactions in the SWoT still suffer from the lack of viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears interesting from this perspective: a semantic resource/service discovery layer built upon a basic blockchain infrastructure gains consensus validation. This paper proposes a novel Service-Oriented Architecture (SOA) based on a semantic blockchain for registration, discovery, selection and payment. These operations are implemented as smart contracts, allowing distributed execution and trust. Reported experiments provide an early assessment of the sustainability of the proposal.

    BibTex:

        @Article{OJIOT_2017v3i1n05_Ruta,
            title     = {Semantic Blockchain to Improve Scalability in the Internet of Things},
            author    = {Michele Ruta and
                         Floriano Scioscia and
                         Saverio Ieva and
                         Giovanna Capurso and
                         Eugenio Di Sciascio},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2017},
            volume    = {3},
            number    = {1},
            pages     = {46--61},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613488},
            urn       = {urn:nbn:de:101:1-2017080613488},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Scarce computational and memory resources are a well-known problem for the IoT, whose intrinsic volatility makes complex applications unfeasible. Noteworthy efforts to overcome this unpredictability (particularly at large scale) integrate Knowledge Representation technologies to build the so-called Semantic Web of Things (SWoT). In spite of the advanced discovery features they allow, transactions in the SWoT still suffer from the lack of viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears interesting from this perspective: a semantic resource/service discovery layer built upon a basic blockchain infrastructure gains consensus validation. This paper proposes a novel Service-Oriented Architecture (SOA) based on a semantic blockchain for registration, discovery, selection and payment. These operations are implemented as smart contracts, allowing distributed execution and trust. Reported experiments provide an early assessment of the sustainability of the proposal.}
        }
    
  28.  Open Access 

    A Self-Optimizing Cloud Computing System for Distributed Storage and Processing of Semantic Web Data

    Sven Groppe, Johannes Blume, Dennis Heinrich, Stefan Werner

    Open Journal of Cloud Computing (OJCC), 1(2), Pages 1-14, 2014, Downloads: 12944, Citations: 2

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194478 | GNL-LP: 113236065X | Meta-Data: tex xml rdf rss

    Abstract: Clouds are dynamic networks of common, off-the-shelf computers that are combined to build computation farms. The rapid growth of databases in the context of the Semantic Web requires efficient ways to store and process this data. Using cloud technology for storing and processing Semantic Web data is an obvious way to overcome the difficulties in storing and processing the enormously large present and future datasets of the Semantic Web. This paper presents a new approach for storing Semantic Web data, such that operations for the evaluation of Semantic Web queries are more likely to be processed only on local data, instead of using costly distributed operations. An experimental evaluation demonstrates the performance improvements in comparison to a naive distribution of Semantic Web data.

    BibTex:

        @Article{OJCC-v1i2n01_Groppe,
            title     = {A Self-Optimizing Cloud Computing System for Distributed Storage and Processing of Semantic Web Data},
            author    = {Sven Groppe and
                         Johannes Blume and
                         Dennis Heinrich and
                         Stefan Werner},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {1--14},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194478},
            urn       = {urn:nbn:de:101:1-201705194478},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Clouds are dynamic networks of common, off-the-shelf computers that are combined to build computation farms. The rapid growth of databases in the context of the Semantic Web requires efficient ways to store and process this data. Using cloud technology for storing and processing Semantic Web data is an obvious way to overcome the difficulties in storing and processing the enormously large present and future datasets of the Semantic Web. This paper presents a new approach for storing Semantic Web data, such that operations for the evaluation of Semantic Web queries are more likely to be processed only on local data, instead of using costly distributed operations. An experimental evaluation demonstrates the performance improvements in comparison to a naive distribution of Semantic Web data.}
        }
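    The locality idea described above (placing Semantic Web data so that query evaluation mostly touches local data) can be illustrated by a simple hash partitioning of RDF triples by subject. This is an illustrative sketch under assumed names, not the paper's actual distribution strategy:

    ```python
    # Illustrative sketch (not the paper's method): partition RDF triples by
    # hashing their subject, so all triples about the same resource land on
    # the same node and subject-based joins can be evaluated locally instead
    # of via costly distributed operations.
    import zlib

    def node_for(subject, num_nodes):
        # stable hash so placement is reproducible across runs and machines
        return zlib.crc32(subject.encode("utf-8")) % num_nodes

    def partition(triples, num_nodes):
        nodes = [[] for _ in range(num_nodes)]
        for s, p, o in triples:
            nodes[node_for(s, num_nodes)].append((s, p, o))
        return nodes

    triples = [
        ("ex:alice", "ex:knows", "ex:bob"),
        ("ex:alice", "ex:age",   "30"),
        ("ex:bob",   "ex:knows", "ex:carol"),
    ]

    nodes = partition(triples, num_nodes=4)
    # all 'ex:alice' triples share one partition
    alice_nodes = {node_for(s, 4) for s, p, o in triples if s == "ex:alice"}
    print(alice_nodes)  # exactly one node id
    ```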
    
  29.  Open Access 

    Detecting Data-Flow Errors in BPMN 2.0

    Silvia von Stackelberg, Susanne Putze, Jutta Mülle, Klemens Böhm

    Open Journal of Information Systems (OJIS), 1(2), Pages 1-19, 2014, Downloads: 12912, Citations: 41

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017052611934 | GNL-LP: 1132836972 | Meta-Data: tex xml rdf rss

    Abstract: Data-flow errors in BPMN 2.0 process models, such as missing or unused data, lead to undesired process executions. In particular, since BPMN 2.0 with a standardized execution semantics allows specifying alternatives for data as well as optional data, identifying missing or unused data systematically is difficult. In this paper, we propose an approach for detecting data-flow errors in BPMN 2.0 process models. We formalize BPMN process models by mapping them to Petri Nets and unfolding the execution semantics regarding data. We define a set of anti-patterns representing data-flow errors of BPMN 2.0 process models. By employing the anti-patterns, our tool performs model checking for the unfolded Petri Nets. The evaluation shows that it detects all data-flow errors identified by hand, and so improves process quality.

    BibTex:

        @Article{OJIS-2014v1i2n01_Stackelberg,
            title     = {Detecting Data-Flow Errors in BPMN 2.0},
            author    = {Silvia von Stackelberg and
                         Susanne Putze and
                         Jutta M{\"u}lle and
                         Klemens B{\"o}hm},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {1--19},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017052611934},
            urn       = {urn:nbn:de:101:1-2017052611934},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Data-flow errors in BPMN 2.0 process models, such as missing or unused data, lead to undesired process executions. In particular, since BPMN 2.0 with a standardized execution semantics allows specifying alternatives for data as well as optional data, identifying missing or unused data systematically is difficult. In this paper, we propose an approach for detecting data-flow errors in BPMN 2.0 process models. We formalize BPMN process models by mapping them to Petri Nets and unfolding the execution semantics regarding data. We define a set of anti-patterns representing data-flow errors of BPMN 2.0 process models. By employing the anti-patterns, our tool performs model checking for the unfolded Petri Nets. The evaluation shows that it detects all data-flow errors identified by hand, and so improves process quality.}
        }
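    The anti-pattern idea above can be illustrated with a toy check for one data-flow error, "missing data" (a task reads a data object that no earlier task has written), on a purely sequential process. This is a deliberately simplified sketch with made-up task names; the paper itself maps BPMN models to Petri nets and model-checks the anti-patterns:

    ```python
    # Toy illustration (not the paper's Petri-net-based model checking):
    # detect the "missing data" anti-pattern on a strictly sequential process.

    def missing_data(tasks):
        """tasks: ordered list of (name, reads, writes) tuples.
        Returns (task, data_object) pairs where a read has no prior write."""
        written, errors = set(), []
        for name, reads, writes in tasks:
            for obj in reads:
                if obj not in written:
                    errors.append((name, obj))
            written |= set(writes)
        return errors

    process = [
        ("Receive order", [],           ["order"]),
        ("Check credit",  ["customer"], []),          # 'customer' never written
        ("Ship goods",    ["order"],    ["invoice"]),
    ]

    print(missing_data(process))
    # -> [('Check credit', 'customer')]
    ```

    The "unused data" anti-pattern could be checked symmetrically (a written object that no later task reads); handling BPMN gateways and optional data is what requires the unfolding described in the abstract.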
    
  30.  Open Access 

    A 24 GHz FM-CW Radar System for Detecting Closed Multiple Targets and Its Applications in Actual Scenes

    Kazuhiro Yamaguchi, Mitumasa Saito, Takuya Akiyama, Tomohiro Kobayashi, Naoki Ginoza, Hideaki Matsue

    Open Journal of Internet Of Things (OJIOT), 2(1), Pages 1-15, 2016, Downloads: 12821, Citations: 3

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704245003 | GNL-LP: 1130623858 | Meta-Data: tex xml rdf rss

    Abstract: This paper develops a 24 GHz band FM-CW radar system to detect closed multiple targets in a small displacement environment, and its performance is analyzed by computer simulation. The FM-CW radar system uses a differential detection method for removing any signals from background objects and tunable FIR filtering in its signal processing for detecting multiple targets. The differential detection method enables the correct detection of both the distance and small displacement at the same time for each target at the FM-CW radar according to the received signals. The basic performance of the FM-CW radar system is analyzed by computer simulation, and the distance and small displacement of a single target are measured in field experiments. The computer simulations are carried out for evaluating the proposed detection method with tunable FIR filtering for the FM-CW radar and for analyzing the performance according to the parameters in a closed multiple targets environment. The simulation results show that our 24 GHz band FM-CW radar with the proposed detection method can effectively detect both the distance and the small displacement for each target in multiple moving targets environments. Moreover, we develop an IoT-based application for monitoring several targets at the same time in actual scenes.

    BibTex:

        @Article{OJIOT_2016v2i1n02_Yamaguchi,
            title     = {A 24 GHz FM-CW Radar System for Detecting Closed Multiple Targets and Its Applications in Actual Scenes},
            author    = {Kazuhiro Yamaguchi and
                         Mitumasa Saito and
                         Takuya Akiyama and
                         Tomohiro Kobayashi and
                         Naoki Ginoza and
                         Hideaki Matsue},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2016},
            volume    = {2},
            number    = {1},
            pages     = {1--15},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704245003},
            urn       = {urn:nbn:de:101:1-201704245003},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper develops a 24 GHz band FM-CW radar system to detect closed multiple targets in a small displacement environment, and its performance is analyzed by computer simulation. The FM-CW radar system uses a differential detection method for removing any signals from background objects and tunable FIR filtering in its signal processing for detecting multiple targets. The differential detection method enables the correct detection of both the distance and small displacement at the same time for each target at the FM-CW radar according to the received signals. The basic performance of the FM-CW radar system is analyzed by computer simulation, and the distance and small displacement of a single target are measured in field experiments. The computer simulations are carried out for evaluating the proposed detection method with tunable FIR filtering for the FM-CW radar and for analyzing the performance according to the parameters in a closed multiple targets environment. The simulation results show that our 24 GHz band FM-CW radar with the proposed detection method can effectively detect both the distance and the small displacement for each target in multiple moving targets environments. Moreover, we develop an IoT-based application for monitoring several targets at the same time in actual scenes.}
        }
    
  31.  Open Access 

    Definition and Categorization of Dew Computing

    Yingwei Wang

    Open Journal of Cloud Computing (OJCC), 3(1), Pages 1-7, 2016, Downloads: 12770, Citations: 69

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194546 | GNL-LP: 1132360781 | Meta-Data: tex xml rdf rss

    Abstract: Dew computing is an emerging research area with great potential for applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potential of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.

    BibTex:

        @Article{OJCC_2016v3i1n02_YingweiWang,
            title     = {Definition and Categorization of Dew Computing},
            author    = {Yingwei Wang},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {1--7},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194546},
            urn       = {urn:nbn:de:101:1-201705194546},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Dew computing is an emerging research area with great potential for applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potential of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.}
        }
    
  32.  Open Access 

    Evidential Sensor Data Fusion in a Smart City Environment

    Aditya Gaur, Bryan W. Scotney, Gerard P. Parr, Sally I. McClean

    Open Journal of Internet Of Things (OJIOT), 1(2), Pages 1-18, 2015, Downloads: 12655, Citations: 2

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704244969 | GNL-LP: 113062319X | Meta-Data: tex xml rdf rss

    Abstract: Wireless sensor networks have increasingly become contributors of very large amounts of data. The recent deployment of wireless sensor networks in Smart City infrastructures has led to very large amounts of data being generated each day across a variety of domains, with applications including environmental monitoring, healthcare monitoring and transport monitoring. The information generated through the wireless sensor nodes has made possible the visualization of a Smart City environment for better living. The Smart City offers intelligent infrastructure and a cogitative environment for the elderly and other people living in the Smart society. Different types of sensors are present that help in monitoring inhabitants' behaviour and their interaction with real world objects. To take advantage of the increasing amounts of data, there is a need for new methods and techniques for effective data management and analysis, to generate information that can assist in managing the resources intelligently and dynamically. Through this research a Smart City ontology model is proposed, which addresses the fusion process related to uncertain sensor data using semantic web technologies and Dempster-Shafer uncertainty theory. Based on the information handling methods, such as Dempster-Shafer theory (DST), an equally weighted sum operator and maximization operation, a higher level of contextual information is inferred from the low-level sensor data fusion process. In addition, the proposed ontology model helps in learning new rules that can be used in defining new knowledge in the Smart City system.

    BibTex:

        @Article{OJIOT_2015v1i2n02_Gaur,
            title     = {Evidential Sensor Data Fusion in a Smart City Environment},
            author    = {Aditya Gaur and
                         Bryan W. Scotney and
                         Gerard P. Parr and
                         Sally I. McClean},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {1--18},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244969},
            urn       = {urn:nbn:de:101:1-201704244969},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Wireless sensor networks have increasingly become contributors of very large amounts of data. The recent deployment of wireless sensor networks in Smart City infrastructures has led to very large amounts of data being generated each day across a variety of domains, with applications including environmental monitoring, healthcare monitoring and transport monitoring. The information generated through the wireless sensor nodes has made possible the visualization of a Smart City environment for better living. The Smart City offers intelligent infrastructure and a cogitative environment for the elderly and other people living in the Smart society. Different types of sensors are present that help in monitoring inhabitants' behaviour and their interaction with real world objects. To take advantage of the increasing amounts of data, there is a need for new methods and techniques for effective data management and analysis, to generate information that can assist in managing the resources intelligently and dynamically. Through this research a Smart City ontology model is proposed, which addresses the fusion process related to uncertain sensor data using semantic web technologies and Dempster-Shafer uncertainty theory. Based on the information handling methods, such as Dempster-Shafer theory (DST), an equally weighted sum operator and maximization operation, a higher level of contextual information is inferred from the low-level sensor data fusion process. In addition, the proposed ontology model helps in learning new rules that can be used in defining new knowledge in the Smart City system.}
        }
    
  33.  Open Access 

    Big Data in the Cloud: A Survey

    Pedro Caldeira Neves, Jorge Bernardino

    Open Journal of Big Data (OJBD), 1(2), Pages 1-18, 2015, Downloads: 12474, Citations: 14

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194365 | GNL-LP: 1132360528 | Meta-Data: tex xml rdf rss

    Abstract: Big Data has become a hot topic across several business areas requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities and enables corporations to consume resources in a pay-as-you-go model, making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, Cloud can scale very easily, be highly available and provide massive storage capacity and processing power. This paper surveys existing database models to store and process Big Data within a Cloud environment. Particularly, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop, Twister, and other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.

    BibTex:

        @Article{OJBD_2015v1i2n02_Neves,
            title     = {Big Data in the Cloud: A Survey},
            author    = {Pedro Caldeira Neves and
                         Jorge Bernardino},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {1--18},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194365},
            urn       = {urn:nbn:de:101:1-201705194365},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Big Data has become a hot topic across several business areas requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities and enables corporations to consume resources in a pay-as-you-go model, making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, Cloud can scale very easily, be highly available and provide massive storage capacity and processing power. This paper surveys existing database models to store and process Big Data within a Cloud environment. Particularly, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop, Twister, and other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.}
        }
    
  34.  Open Access 

    Block-level De-duplication with Encrypted Data

    Pasquale Puzio, Refik Molva, Melek Önen, Sergio Loureiro

    Open Journal of Cloud Computing (OJCC), 1(1), Pages 10-18, 2014, Downloads: 12453, Citations: 20

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194448 | GNL-LP: 1132360617 | Meta-Data: tex xml rdf rss

    Abstract: Deduplication is a storage-saving technique which has been adopted by many cloud storage providers such as Dropbox. The simple principle of deduplication is that duplicate data uploaded by different users are stored only once. Unfortunately, deduplication is not compatible with encryption. As a scheme that allows deduplication of encrypted data segments, we propose ClouDedup, a secure and efficient storage service which guarantees block-level deduplication and data confidentiality at the same time. ClouDedup strengthens convergent encryption by employing a component that implements an additional encryption operation and an access control mechanism. We also propose to introduce an additional component which is in charge of providing a key management system for data blocks together with the actual deduplication operation. We show that the overhead introduced by these new components is minimal and does not impact the overall storage and computational costs.

    BibTex:

        @Article{OJCC-v1i1n02_Puzio,
            title     = {Block-level De-duplication with Encrypted Data},
            author    = {Pasquale Puzio and
                         Refik Molva and
                         Melek {\"O}nen and
                         Sergio Loureiro},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {10--18},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194448},
            urn       = {urn:nbn:de:101:1-201705194448},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Deduplication is a storage-saving technique which has been adopted by many cloud storage providers such as Dropbox. The simple principle of deduplication is that duplicate data uploaded by different users are stored only once. Unfortunately, deduplication is not compatible with encryption. As a scheme that allows deduplication of encrypted data segments, we propose ClouDedup, a secure and efficient storage service which guarantees block-level deduplication and data confidentiality at the same time. ClouDedup strengthens convergent encryption by employing a component that implements an additional encryption operation and an access control mechanism. We also propose to introduce an additional component which is in charge of providing a key management system for data blocks together with the actual deduplication operation. We show that the overhead introduced by these new components is minimal and does not impact the overall storage and computational costs.}
        }
    
  35.  Open Access 

    Modelling the Integrated QoS for Wireless Sensor Networks with Heterogeneous Data Traffic

    Syarifah Ezdiani, Adnan Al-Anbuky

    Open Journal of Internet Of Things (OJIOT), 1(1), Pages 1-15, 2015, Downloads: 12112, Citations: 17

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704244946 | GNL-LP: 1130621979 | Meta-Data: tex xml rdf rss

    Abstract: The future of the Internet of Things (IoT) is envisaged to consist of a large number of wireless resource-constrained devices connected to the Internet. Moreover, many novel real-world services offered by IoT devices are realized by wireless sensor networks (WSNs). Integrating WSNs into the Internet has therefore brought forward the requirement of an end-to-end quality of service (QoS) guarantee. In this paper, the QoS requirements for WSN-Internet integration are investigated by first distinguishing Internet QoS from WSN QoS. Next, this study emphasizes WSN applications that involve traffic with different levels of importance, and thus studies how real-time traffic and delay-tolerant traffic are handled to guarantee QoS in the network. Additionally, an overview of the integration strategies is given, and the delay-tolerant network (DTN) gateway, one of the desirable approaches for integrating WSNs with the Internet, is discussed. Next, the implementation of the service model is presented, considering both traffic prioritization and service differentiation. Based on the simulation results in OPNET Modeler, it is observed that real-time traffic achieves a low bounded delay while delay-tolerant traffic experiences fewer packet drops, indicating that the needs of real-time and delay-tolerant traffic can be better met by treating both packet types differently. Furthermore, a vehicular network is used as an example case to describe the applicability of the framework in a real IoT application environment, followed by a discussion of the future work of this research.

    BibTex:

        @Article{OJIOT_2015v1i1n02_Syarifah,
            title     = {Modelling the Integrated QoS for Wireless Sensor Networks with Heterogeneous Data Traffic},
            author    = {Syarifah Ezdiani and
                         Adnan Al-Anbuky},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2015},
            volume    = {1},
            number    = {1},
            pages     = {1--15},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244946},
            urn       = {urn:nbn:de:101:1-201704244946},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The future of the Internet of Things (IoT) is envisaged to consist of a large number of wireless resource-constrained devices connected to the Internet. Moreover, many novel real-world services offered by IoT devices are realized by wireless sensor networks (WSNs). Integrating WSNs into the Internet has therefore brought forward the requirement of an end-to-end quality of service (QoS) guarantee. In this paper, the QoS requirements for WSN-Internet integration are investigated by first distinguishing Internet QoS from WSN QoS. Next, this study emphasizes WSN applications that involve traffic with different levels of importance, and thus studies how real-time traffic and delay-tolerant traffic are handled to guarantee QoS in the network. Additionally, an overview of the integration strategies is given, and the delay-tolerant network (DTN) gateway, one of the desirable approaches for integrating WSNs with the Internet, is discussed. Next, the implementation of the service model is presented, considering both traffic prioritization and service differentiation. Based on the simulation results in OPNET Modeler, it is observed that real-time traffic achieves a low bounded delay while delay-tolerant traffic experiences fewer packet drops, indicating that the needs of real-time and delay-tolerant traffic can be better met by treating both packet types differently. Furthermore, a vehicular network is used as an example case to describe the applicability of the framework in a real IoT application environment, followed by a discussion of the future work of this research.}
        }
    
  36.  Open Access 

    Deriving Bounds on the Size of Spatial Areas

    Erik Buchmann, Patrick Erik Bradley, Klemens Böhm

    Open Journal of Databases (OJDB), 2(1), Pages 1-16, 2015, Downloads: 11928

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194566 | GNL-LP: 113236082X | Meta-Data: tex xml rdf rss

    Abstract: Many application domains such as surveillance, environmental monitoring or sensor-data processing need upper and lower bounds on areas that are covered by a certain feature. For example, a smart-city infrastructure might need bounds on the size of an area polluted with fine-dust, to re-route combustion-engine traffic. Obtaining such bounds is challenging, because in almost any real-world application, information about the region of interest is incomplete, e.g., the database of sensor data contains only a limited number of samples. Existing approaches cannot provide upper and lower bounds or depend on restrictive assumptions, e.g., the area must be convex. Our approach in turn is based on the natural assumption that it is possible to specify a minimal diameter for the feature in question. Given this assumption, we formally derive bounds on the area size, and we provide algorithms that compute these bounds from a database of sensor data, based on geometrical considerations. We evaluate our algorithms both with a real-world case study and with synthetic data.

    BibTex:

        @Article{OJDB-2015v2i1n01_Buchmann,
            title     = {Deriving Bounds on the Size of Spatial Areas},
            author    = {Erik Buchmann and
                         Patrick Erik Bradley and
                         Klemens B{\"o}hm},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {1--16},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194566},
            urn       = {urn:nbn:de:101:1-201705194566},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Many application domains such as surveillance, environmental monitoring or sensor-data processing need upper and lower bounds on areas that are covered by a certain feature. For example, a smart-city infrastructure might need bounds on the size of an area polluted with fine-dust, to re-route combustion-engine traffic. Obtaining such bounds is challenging, because in almost any real-world application, information about the region of interest is incomplete, e.g., the database of sensor data contains only a limited number of samples. Existing approaches cannot provide upper and lower bounds or depend on restrictive assumptions, e.g., the area must be convex. Our approach in turn is based on the natural assumption that it is possible to specify a minimal diameter for the feature in question. Given this assumption, we formally derive bounds on the area size, and we provide algorithms that compute these bounds from a database of sensor data, based on geometrical considerations. We evaluate our algorithms both with a real-world case study and with synthetic data.}
        }
    
  37.  Open Access 

    Using Business Intelligence to Improve DBA Productivity

    Eric A. Mortensen, En Cheng

    Open Journal of Databases (OJDB), 1(2), Pages 1-16, 2014, Downloads: 11519

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194595 | GNL-LP: 1132360854 | Meta-Data: tex xml rdf rss

    Abstract: The amount of data collected and used by companies has grown rapidly in size over the last decade. Business leaders are now using Business Intelligence (BI) systems to make effective business decisions against large amounts of data. The growth in the size of data has been a major challenge for Database Administrators (DBAs). The increase in the number and size of databases, and the speed at which they have grown, has made it difficult for DBA teams to provide the same level of service that the business requires. The methods that DBAs have used in the last several decades can no longer be performed with the efficiency needed over all of the databases they administer. This paper presents the first BI system to improve DBA productivity and to provide important data metrics for Information Technology (IT) managers. The BI system has been well received by Sherwin Williams Database Administrators. It has i) enabled the DBA team to quickly determine which databases needed work by a DBA without manually logging into the system; ii) helped the DBA team and its management to easily answer other business users' questions without using DBAs' time to research the issue; and iii) helped the DBA team to provide the business data for unanticipated audit requests.

    BibTex:

        @Article{OJDB-v1i2n01_Mortensen,
            title     = {Using Business Intelligence to Improve DBA Productivity},
            author    = {Eric A. Mortensen and
                         En Cheng},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {1--16},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194595},
            urn       = {urn:nbn:de:101:1-201705194595},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The amount of data collected and used by companies has grown rapidly in size over the last decade. Business leaders are now using Business Intelligence (BI) systems to make effective business decisions against large amounts of data. The growth in the size of data has been a major challenge for Database Administrators (DBAs). The increase in the number and size of databases, and the speed at which they have grown, has made it difficult for DBA teams to provide the same level of service that the business requires. The methods that DBAs have used in the last several decades can no longer be performed with the efficiency needed over all of the databases they administer. This paper presents the first BI system to improve DBA productivity and to provide important data metrics for Information Technology (IT) managers. The BI system has been well received by Sherwin Williams Database Administrators. It has i) enabled the DBA team to quickly determine which databases needed work by a DBA without manually logging into the system; ii) helped the DBA team and its management to easily answer other business users' questions without using DBAs' time to research the issue; and iii) helped the DBA team to provide the business data for unanticipated audit requests.}
        }
    
  38.  Open Access 

    Integrating Human Factors and Semantic Mark-ups in Adaptive Interactive Systems

    Marios Belk, Panagiotis Germanakos, Efi Papatheocharous, Panayiotis Andreou, George Samaras

    Open Journal of Web Technologies (OJWT), 1(1), Pages 15-26, 2014, Downloads: 11378, Citations: 1

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017052611313 | GNL-LP: 113283600X | Meta-Data: tex xml rdf rss

    Abstract: This paper focuses on incorporating individual differences in cognitive processing and semantic mark-ups in the context of adaptive interactive systems. In particular, a semantic Web-based adaptation framework is proposed that enables Web content providers to enrich content and functionality of Web environments with semantic mark-ups. The Web content is created using a Web authoring tool and is further processed and reconstructed by an adaptation mechanism based on cognitive factors of users. The main aim of this work is to investigate the added value of personalising content and functionality of Web environments based on the unique cognitive characteristics of users. Accordingly, a user study has been conducted that entailed a psychometric-based survey for extracting the users' cognitive characteristics, combined with a real usage scenario of an existing commercial Web environment that was enriched with semantic mark-ups and personalised based on different adaptation effects. The paper provides interesting insights into the design and development of adaptive interactive systems based on cognitive factors and semantic mark-ups.

    BibTex:

        @Article{OJWT-v1i1n02_Belk,
            title     = {Integrating Human Factors and Semantic Mark-ups in Adaptive Interactive Systems},
            author    = {Marios Belk and
                         Panagiotis Germanakos and
                         Efi Papatheocharous and
                         Panayiotis Andreou and
                         George Samaras},
            journal   = {Open Journal of Web Technologies (OJWT)},
            issn      = {2199-188X},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {15--26},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017052611313},
            urn       = {urn:nbn:de:101:1-2017052611313},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper focuses on incorporating individual differences in cognitive processing and semantic mark-ups in the context of adaptive interactive systems. In particular, a semantic Web-based adaptation framework is proposed that enables Web content providers to enrich content and functionality of Web environments with semantic mark-ups. The Web content is created using a Web authoring tool and is further processed and reconstructed by an adaptation mechanism based on cognitive factors of users. The main aim of this work is to investigate the added value of personalising content and functionality of Web environments based on the unique cognitive characteristics of users. Accordingly, a user study has been conducted that entailed a psychometric-based survey for extracting the users' cognitive characteristics, combined with a real usage scenario of an existing commercial Web environment that was enriched with semantic mark-ups and personalised based on different adaptation effects. The paper provides interesting insights into the design and development of adaptive interactive systems based on cognitive factors and semantic mark-ups.}
        }
    
  39.  Open Access 

    MapReduce-based Solutions for Scalable SPARQL Querying

    José M. Giménez-Garcia, Javier D. Fernández, Miguel A. Martínez-Prieto

    Open Journal of Semantic Web (OJSW), 1(1), Pages 1-18, 2014, Downloads: 11267, Citations: 10

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194824 | GNL-LP: 1132361168 | Meta-Data: tex xml rdf rss

    Abstract: The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of great quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.

    BibTex:

        @Article{OJSW-v1i1n02_Garcia,
            title     = {MapReduce-based Solutions for Scalable SPARQL Querying},
            author    = {Jos\'{e} M. Gim\'{e}nez-Garcia and
                         Javier D. Fern\'{a}ndez and
                         Miguel A. Mart\'{i}nez-Prieto},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {1--18},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194824},
            urn       = {urn:nbn:de:101:1-201705194824},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of great quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.}
        }
    
  40.  Open Access 

    Ontology-Based Data Access to Big Data

    Simon Schiff, Ralf Möller, Özgür L. Özcep

    Open Journal of Databases (OJDB), 6(1), Pages 21-32, 2019, Downloads: 11202, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318334350985847 | GNL-LP: 1174122730 | Meta-Data: tex xml rdf rss

    Abstract: Recent approaches to ontology-based data access (OBDA) have extended the focus from relational database systems to other types of backends, such as cluster frameworks, in order to cope with the four Vs associated with big data: volume, veracity, variety and velocity (stream processing). The abstraction that an ontology provides is a benefit from the end-user point of view, but it represents a challenge for developers because high-level queries must be transformed into queries executable on the backend level. In this paper, we discuss and evaluate an OBDA system that uses STARQL (Streaming and Temporal ontology Access with a Reasoning-based Query Language) as a high-level query language to access data stored in a SPARK cluster framework. The development of the STARQL-SPARK engine shows that there is a need to provide a homogeneous interface to access static, temporal, and streaming data, because cluster frameworks usually lack such an interface. The experimental evaluation shows that building a scalable OBDA system that runs with SPARK is more than plug-and-play, as one needs to know the data formats and the data organisation in the cluster framework quite well.

    BibTex:

        @Article{OJDB_2019v6i1n03_Schiff,
            title     = {Ontology-Based Data Access to Big Data},
            author    = {Simon Schiff and
                         Ralf M{\"o}ller and
                         {\"O}zg{\"u}r L. {\"O}zcep},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2019},
            volume    = {6},
            number    = {1},
            pages     = {21--32},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318334350985847},
            urn       = {urn:nbn:de:101:1-2018122318334350985847},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Recent approaches to ontology-based data access (OBDA) have extended the focus from relational database systems to other types of backends, such as cluster frameworks, in order to cope with the four Vs associated with big data: volume, veracity, variety and velocity (stream processing). The abstraction that an ontology provides is a benefit from the end-user point of view, but it represents a challenge for developers because high-level queries must be transformed into queries executable on the backend level. In this paper, we discuss and evaluate an OBDA system that uses STARQL (Streaming and Temporal ontology Access with a Reasoning-based Query Language) as a high-level query language to access data stored in a SPARK cluster framework. The development of the STARQL-SPARK engine shows that there is a need to provide a homogeneous interface to access static, temporal, and streaming data, because cluster frameworks usually lack such an interface. The experimental evaluation shows that building a scalable OBDA system that runs with SPARK is more than plug-and-play, as one needs to know the data formats and the data organisation in the cluster framework quite well.}
        }
    
  41.  Open Access 

    SIWeb: understanding the Interests of the Society through Web data Analysis

    Marco Furini, Simone Montangero

    Open Journal of Web Technologies (OJWT), 1(1), Pages 1-14, 2014, Downloads: 11125, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705291334 | GNL-LP: 1133021522 | Meta-Data: tex xml rdf rss

    Abstract: The high availability of user-generated contents in the Web scenario represents a tremendous asset for understanding various social phenomena. Methods and commercial products that exploit the widespread use of the Web as a way of conveying personal opinions have been proposed, but a critical concern is that these approaches may produce a partial, or distorted, understanding of the society, because most of them focus on definite scenarios, use specific platforms, base their analysis on the sole magnitude of data, or treat the different Web resources with the same importance. In this paper, we present SIWeb (Social Interests through Web Analysis), a novel mechanism designed to measure the interest the society has in a topic (e.g., a real-world phenomenon, an event, a person, a thing). SIWeb is general-purpose (it can be applied to any decision-making process), cross-platform (it uses the entire Webspace, from social media to websites, from tags to reviews), and time-effective (it measures the time correlation between the Web resources). It uses fractal analysis to detect the temporal relations behind all the Web resources (e.g., Web pages, RSS, newsgroups, etc.) that talk about a topic and combines this number with the temporal relations to give an insight into the interest the society has in a topic. The evaluation of the proposal shows that SIWeb might be helpful in decision-making processes as it reflects the interests the society has in a specific topic.

    BibTex:

        @Article{OJWT-v1i1n01_Furini,
            title     = {SIWeb: understanding the Interests of the Society through Web data Analysis},
            author    = {Marco Furini and
                         Simone Montangero},
            journal   = {Open Journal of Web Technologies (OJWT)},
            issn      = {2199-188X},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {1--14},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291334},
            urn       = {urn:nbn:de:101:1-201705291334},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The high availability of user-generated contents in the Web scenario represents a tremendous asset for understanding various social phenomena. Methods and commercial products that exploit the widespread use of the Web as a way of conveying personal opinions have been proposed, but a critical concern is that these approaches may produce a partial, or distorted, understanding of the society, because most of them focus on definite scenarios, use specific platforms, base their analysis on the sole magnitude of data, or treat the different Web resources with the same importance. In this paper, we present SIWeb (Social Interests through Web Analysis), a novel mechanism designed to measure the interest the society has in a topic (e.g., a real-world phenomenon, an event, a person, a thing). SIWeb is general-purpose (it can be applied to any decision-making process), cross-platform (it uses the entire Webspace, from social media to websites, from tags to reviews), and time-effective (it measures the time correlation between the Web resources). It uses fractal analysis to detect the temporal relations behind all the Web resources (e.g., Web pages, RSS, newsgroups, etc.) that talk about a topic and combines this number with the temporal relations to give an insight into the interest the society has in a topic. The evaluation of the proposal shows that SIWeb might be helpful in decision-making processes as it reflects the interests the society has in a specific topic.}
        }
    
  42.  Open Access 

    Perceived Sociability of Use and Individual Use of Social Networking Sites - A Field Study of Facebook Use in the Arctic

    Juhani Iivari

    Open Journal of Information Systems (OJIS), 1(1), Pages 23-53, 2014, Downloads: 11095, Citations: 9

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194708 | GNL-LP: 1132360978 | Meta-Data: tex xml rdf rss

    Abstract: This paper investigates determinants of individual use of social network sites (SNSs). It introduces a new construct, Perceived Sociability of Use (PSOU), to explain the use of such computer-mediated communication applications. Based on a field study of 113 Facebook users, it shows that PSOU in the sense of maintaining social contacts is a significant predictor of Perceived Benefits (PB), Perceived Enjoyment (PE), attitude toward use and intention to use. Inspired by Benbasat and Barki, this paper also attempts to answer the questions "what makes the system useful", "what makes the system enjoyable to use" and "what makes the system sociable to use". As a consequence, it places special focus on system characteristics of IT applications as potential predictors of PSOU, PB and PE, introducing seven such designable qualities (user-to-user interactivity, user identifiability, system quality, information quality, usability, user-to-system interactivity, and aesthetics). The results indicate that especially satisfaction with user-to-user interactivity is a significant determinant of PSOU, and that satisfaction with six of these seven designable qualities has significant paths in the proposed nomological network.

    BibTex:

        @Article{OJIS-v1i1n03_Iivari,
            title     = {Perceived Sociability of Use and Individual Use of Social Networking Sites - A Field Study of Facebook Use in the Arctic},
            author    = {Juhani Iivari},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {23--53},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194708},
            urn       = {urn:nbn:de:101:1-201705194708},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper investigates determinants of individual use of social network sites (SNSs). It introduces a new construct, Perceived Sociability of Use (PSOU), to explain the use of such computer mediated communication applications. Based on a field study of 113 Facebook users it shows that PSOU in the sense of maintaining social contacts is a significant predictor of Perceived Benefits (PB), Perceived Enjoyment (PE), attitude toward use and intention to use. Inspired by Benbasat and Barki, this paper also attempts to answer questions "what makes the system useful", "what makes the system enjoyable to use" and "what makes the system sociable to use". As a consequence it pays special focus on systems characteristics of IT applications as potential predictors of PSOU, PB and PE, introducing seven such designable qualities (user-to-user interactivity, user identifiability, system quality, information quality, usability, user-to-system interactivity, and aesthetics). The results indicate that especially satisfaction with user-to-user interactivity is a significant determinant of PSOU, and that satisfactions with six of these seven designable qualities have significant paths in the proposed nomological network.}
        }
    
  43.  Open Access 

    Distributed Join Approaches for W3C-Conform SPARQL Endpoints

    Sven Groppe, Dennis Heinrich, Stefan Werner

    Open Journal of Semantic Web (OJSW), 2(1), Pages 30-52, 2015, Downloads: 11089, Citations: 6

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194910 | GNL-LP: 1132361303 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Presentation: Video

    Abstract: Currently many SPARQL endpoints are freely available and accessible without any costs to users: Everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about performance arise. Especially when several datasets (locally and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints, which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and a combination of the Semi- and Bitvector-Join. Finally we compare all the existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.

    BibTex:

        @Article{OJSW_2015v2i1n04_Groppe,
            title     = {Distributed Join Approaches for W3C-Conform SPARQL Endpoints},
            author    = {Sven Groppe and
                         Dennis Heinrich and
                         Stefan Werner},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {30--52},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194910},
            urn       = {urn:nbn:de:101:1-201705194910},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Currently many SPARQL endpoints are freely available and accessible without any costs to users: Everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about performance arise. Especially when several datasets (locally and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints, which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and a combination of the Semi- and Bitvector-Join. Finally we compare all the existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.}
        }
    
  44.  Open Access 

    An Analytical Model of Multi-Core Multi-Cluster Architecture (MCMCA)

    Norhazlina Hamid, Robert John Walters, Gary B. Wills

    Open Journal of Cloud Computing (OJCC), 2(1), Pages 4-15, 2015, Downloads: 10939, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194487 | GNL-LP: 1132360692 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Multi-core clusters have emerged as an important contribution in computing technology for provisioning additional processing power in high performance computing and communications. Multi-core architectures are proposed for their capability to provide higher performance without increasing heat and power usage, which is the main concern in a single-core processor. This paper introduces analytical models of a new architecture for large-scale multi-core clusters to improve the communication performance within the interconnection network. The new architecture will be based on a multi-cluster architecture containing clusters of multi-core processors.

    BibTex:

        @Article{OJCC_2015v2i1n02_Hamid,
            title     = {An Analytical Model of Multi-Core Multi-Cluster Architecture (MCMCA)},
            author    = {Norhazlina Hamid and
                         Robert John Walters and
                         Gary B. Wills},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {4--15},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194487},
            urn       = {urn:nbn:de:101:1-201705194487},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Multi-core clusters have emerged as an important contribution in computing technology for provisioning additional processing power in high performance computing and communications. Multi-core architectures are proposed for their capability to provide higher performance without increasing heat and power usage, which is the main concern in a single-core processor. This paper introduces analytical models of a new architecture for large-scale multi-core clusters to improve the communication performance within the interconnection network. The new architecture will be based on a multi-cluster architecture containing clusters of multi-core processors.}
        }
    
  45.  Open Access 

    Cyber Supply Chain Risks in Cloud Computing - Bridging the Risk Assessment Gap

    Olusola Akinrolabu, Steve New, Andrew Martin

    Open Journal of Cloud Computing (OJCC), 5(1), Pages 1-19, 2018, Downloads: 10793

    Full-Text: pdf | URN: urn:nbn:de:101:1-201712245432 | GNL-LP: 1149497157 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Cloud computing represents a significant paradigm shift in the delivery of information technology (IT) services. The rapid growth of the cloud and the increasing security concerns associated with the delivery of cloud services have led many researchers to study cloud risks and risk assessments. Some of these studies highlight the inability of current risk assessments to cope with the dynamic nature of the cloud, a gap we believe is a result of the lack of consideration for the inherent risk of the supply chain. This paper, therefore, describes the cloud supply chain and investigates the effect of supply chain transparency in conducting a comprehensive risk assessment. We conducted an industry survey to gauge stakeholder awareness of supply chain risks, seeking to find out the risk assessment methods commonly used, factors that hindered a comprehensive evaluation and how the current state-of-the-art can be improved. The analysis of the survey dataset showed the lack of flexibility of the popular qualitative assessment methods in coping with the risks associated with the dynamic supply chain of cloud services, typically made up of an average of eight suppliers. To address these gaps, we propose a Cloud Supply Chain Cyber Risk Assessment (CSCCRA) model, a quantitative risk assessment model which is supported by decision support analysis and supply chain mapping in the identification, analysis and evaluation of cloud risks.

    BibTex:

        @Article{OJCC_2018v5i1n01_Akinrolabu,
            title     = {Cyber Supply Chain Risks in Cloud Computing - Bridging the Risk Assessment Gap},
            author    = {Olusola Akinrolabu and
                         Steve New and
                         Andrew Martin},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2018},
            volume    = {5},
            number    = {1},
            pages     = {1--19},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201712245432},
            urn       = {urn:nbn:de:101:1-201712245432},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Cloud computing represents a significant paradigm shift in the delivery of information technology (IT) services. The rapid growth of the cloud and the increasing security concerns associated with the delivery of cloud services have led many researchers to study cloud risks and risk assessments. Some of these studies highlight the inability of current risk assessments to cope with the dynamic nature of the cloud, a gap we believe is a result of the lack of consideration for the inherent risk of the supply chain. This paper, therefore, describes the cloud supply chain and investigates the effect of supply chain transparency in conducting a comprehensive risk assessment. We conducted an industry survey to gauge stakeholder awareness of supply chain risks, seeking to find out the risk assessment methods commonly used, factors that hindered a comprehensive evaluation and how the current state-of-the-art can be improved. The analysis of the survey dataset showed the lack of flexibility of the popular qualitative assessment methods in coping with the risks associated with the dynamic supply chain of cloud services, typically made up of an average of eight suppliers. To address these gaps, we propose a Cloud Supply Chain Cyber Risk Assessment (CSCCRA) model, a quantitative risk assessment model which is supported by decision support analysis and supply chain mapping in the identification, analysis and evaluation of cloud risks.}
        }
    
  46.  Open Access 

    Using Nuisance Telephone Denial of Service to Combat Online Sex Trafficking

    Ross A. Malaga

    Open Journal of Information Systems (OJIS), 2(1), Pages 1-8, 2015, Downloads: 10790, Citations: 1

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194736 | GNL-LP: 1132361036 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Over the past few years, sex trafficking has been linked to online classified ads sites such as Craigslist.com and Backpage.com. However, to date technology-based solutions have not been used to attack classified ad sites or the advertisers. This paper proposes and tests a new approach to combating online sex trafficking promulgated via online classified ad sites - nuisance telephone denial of service (TDoS) attacks on the advertisers. The method of attack is described and implications are discussed.

    BibTex:

        @Article{OJIS_2015v2i1n01_Malaga,
            title     = {Using Nuisance Telephone Denial of Service to Combat Online Sex Trafficking},
            author    = {Ross A. Malaga},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {1--8},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194736},
            urn       = {urn:nbn:de:101:1-201705194736},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Over the past few years, sex trafficking has been linked to online classified ads sites such as Craigslist.com and Backpage.com. However, to date technology-based solutions have not been used to attack classified ad sites or the advertisers. This paper proposes and tests a new approach to combating online sex trafficking promulgated via online classified ad sites - nuisance telephone denial of service (TDoS) attacks on the advertisers. The method of attack is described and implications are discussed.}
        }
    
  47.  Open Access 

    An Efficient Approach for Cost Optimization of the Movement of Big Data

    Prasad Teli, Manoj V. Thomas, K. Chandrasekaran

    Open Journal of Big Data (OJBD), 1(1), Pages 4-15, 2015, Downloads: 10701, Citations: 11

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194335 | GNL-LP: 113236048X | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by the service providers to host their applications. A lot of research has been done to improve the network utilization of WAN (Wide Area Network) and it has achieved considerable success over the traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As various applications generating the big data are hosted in geographically distributed data centers, they individually collect large volumes of data in the form of application data as well as the logs. This paper mainly focuses on the challenge of moving big data from one data center to another. We provide an efficient algorithm for the optimization of cost in the movement of the big data from one data center to another for an offline environment. This approach uses the graph model for data centers in the cloud and results show that the adopted mechanism provides a better solution to minimize the cost for data movement.

    BibTex:

        @Article{OJBD_2015v1i1n02_Teli,
            title     = {An Efficient Approach for Cost Optimization of the Movement of Big Data},
            author    = {Prasad Teli and
                         Manoj V. Thomas and
                         K. Chandrasekaran},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {1},
            pages     = {4--15},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194335},
            urn       = {urn:nbn:de:101:1-201705194335},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by the service providers to host their applications. A lot of research has been done to improve the network utilization of WAN (Wide Area Network) and it has achieved considerable success over the traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As various applications generating the big data are hosted in geographically distributed data centers, they individually collect large volumes of data in the form of application data as well as the logs. This paper mainly focuses on the challenge of moving big data from one data center to another. We provide an efficient algorithm for the optimization of cost in the movement of the big data from one data center to another for an offline environment. This approach uses the graph model for data centers in the cloud and results show that the adopted mechanism provides a better solution to minimize the cost for data movement.}
        }
    
  48.  Open Access 

    A NoSQL-Based Framework for Managing Home Services

    Marinette Bouet, Michel Schneider

    Open Journal of Information Systems (OJIS), 3(1), Pages 1-28, 2016, Downloads: 10654, Citations: 1

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194810 | GNL-LP: 113236115X | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Individuals and companies have an increasing need for services by specialized suppliers in their homes or premises. These services can be quite different and can require different amounts of resources. Service suppliers have to specify the activities to be performed, plan those activities, allocate resources, follow up after their completion and must be able to react to any unexpected situation. Various proposals were formulated to model and implement these functions; however, there is no unified approach that can improve the efficiency of software solutions to enable economy of scale. In this paper, we propose a framework that a service supplier can use to manage geo-localized activities. The proposed framework is based on a NoSQL data model and implemented using the MongoDB system. We also discuss the advantages and drawbacks of a NoSQL approach.

    BibTex:

        @Article{OJIS_2016v3i1n02_Marinette,
            title     = {A NoSQL-Based Framework for Managing Home Services},
            author    = {Marinette Bouet and
                         Michel Schneider},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {1--28},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194810},
            urn       = {urn:nbn:de:101:1-201705194810},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Individuals and companies have an increasing need for services by specialized suppliers in their homes or premises. These services can be quite different and can require different amounts of resources. Service suppliers have to specify the activities to be performed, plan those activities, allocate resources, follow up after their completion and must be able to react to any unexpected situation. Various proposals were formulated to model and implement these functions; however, there is no unified approach that can improve the efficiency of software solutions to enable economy of scale. In this paper, we propose a framework that a service supplier can use to manage geo-localized activities. The proposed framework is based on a NoSQL data model and implemented using the MongoDB system. We also discuss the advantages and drawbacks of a NoSQL approach.}
        }
    
  49.  Open Access 

    IT Governance Practices for Electric Utilities: Insights from Brazil and Europe

    Paulo Rupino da Cunha, Luiz Mauricio Martins, Antão Moura, António Dias de Figueiredo

    Open Journal of Information Systems (OJIS), 2(1), Pages 9-28, 2015, Downloads: 10624, Citations: 1

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194743 | GNL-LP: 1132361044 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: We propose a framework of 14 IT governance practices tailored for the electric utilities sector. They were selected and ranked as "essential", "important", or "good" by top executives and IT staff from two multi-billion dollar companies - one in Brazil and another in Europe - from a generic set of 83 collected in the literature and in the field. Our framework addresses a need of electric utilities for which specific guidance was lacking. We have also uncovered a significant impact of social issues in IT governance, whose depth seems to be missing in the current research. As a byproduct of our work, the larger generic framework from which we have departed and the tailoring method that we have proposed can be used to customize the generic framework to different industries.

    BibTex:

        @Article{OJIS_2015v2i1n02_Cunha,
            title     = {IT Governance Practices for Electric Utilities: Insights from Brazil and Europe},
            author    = {Paulo Rupino da Cunha and
                         Luiz Mauricio Martins and
                         Ant\~{a}o Moura and
                         Ant\'{o}nio Dias de Figueiredo},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {9--28},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194743},
            urn       = {urn:nbn:de:101:1-201705194743},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {We propose a framework of 14 IT governance practices tailored for the electric utilities sector. They were selected and ranked as "essential", "important", or "good" by top executives and IT staff from two multi-billion dollar companies - one in Brazil and another in Europe - from a generic set of 83 collected in the literature and in the field. Our framework addresses a need of electric utilities for which specific guidance was lacking. We have also uncovered a significant impact of social issues in IT governance, whose depth seems to be missing in the current research. As a byproduct of our work, the larger generic framework from which we have departed and the tailoring method that we have proposed can be used to customize the generic framework to different industries.}
        }
    
  50.  Open Access 

    Statistical Machine Learning in Brain State Classification using EEG Data

    Yuezhe Li, Yuchou Chang, Hong Lin

    Open Journal of Big Data (OJBD), 1(2), Pages 19-33, 2015, Downloads: 10621, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194354 | GNL-LP: 113236051X | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: In this article, we discuss how to use a variety of machine learning methods, e.g. tree bagging, random forest, boosting, support vector machine, and Gaussian mixture model, for building classifiers for electroencephalogram (EEG) data, which is collected from different brain states on different subjects. Also, we discuss how training data size influences misclassification rate. Moreover, the number of subjects that contributes to the training data affects misclassification rate. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels respectively. The classification results of each data set show that the more channels we use, the less misclassification we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish idle from active states.

    BibTex:

        @Article{OJBD_2015v1i2n03_YuehzeLi,
            title     = {Statistical Machine Learning in Brain State Classification using EEG Data},
            author    = {Yuezhe Li and
                         Yuchou Chang and
                         Hong Lin},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {19--33},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194354},
            urn       = {urn:nbn:de:101:1-201705194354},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {In this article, we discuss how to use a variety of machine learning methods, e.g. tree bagging, random forest, boosting, support vector machine, and Gaussian mixture model, for building classifiers for electroencephalogram (EEG) data, which is collected from different brain states on different subjects. Also, we discuss how training data size influences misclassification rate. Moreover, the number of subjects that contributes to the training data affects misclassification rate. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels respectively. The classification results of each data set show that the more channels we use, the less misclassification we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish idle from active states.}
        }
    
  1.  Open Access 

    Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing

    Karolj Skala, Davor Davidovic, Enis Afgan, Ivan Sovic, Zorislav Sojat

    Open Journal of Cloud Computing (OJCC), 2(1), Pages 16-24, 2015, Downloads: 21375, Citations: 168

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194519 | GNL-LP: 1132360749 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.

    BibTex:

        @Article{OJCC_2015v2i1n03_Skala,
            title     = {Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing},
            author    = {Karolj Skala and
                         Davor Davidovic and
                         Enis Afgan and
                         Ivan Sovic and
                         Zorislav Sojat},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {16--24},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194519},
            urn       = {urn:nbn:de:101:1-201705194519},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.}
        }
    
  2.  Open Access 

    Which NoSQL Database? A Performance Overview

    Veronika Abramova, Jorge Bernardino, Pedro Furtado

    Open Journal of Databases (OJDB), 1(2), Pages 17-24, 2014, Downloads: 28752, Citations: 89

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194607 | GNL-LP: 1132360862 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performances, and thus it is important to compare them in terms of performance and verify how the performance is related to the database type. In this paper, we evaluate the five most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration the typical workloads, as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.

    BibTex:

        @Article{OJDB-v1i2n02_Abramova,
            title     = {Which NoSQL Database? A Performance Overview},
            author    = {Veronika Abramova and
                         Jorge Bernardino and
                         Pedro Furtado},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {17--24},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194607},
            urn       = {urn:nbn:de:101:1-201705194607},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performances, and thus it is important to compare them in terms of performance and verify how the performance is related to the database type. In this paper, we evaluate the five most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration the typical workloads, as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.}
        }
    
  3.  Open Access 

    Definition and Categorization of Dew Computing

    Yingwei Wang

    Open Journal of Cloud Computing (OJCC), 3(1), Pages 1-7, 2016, Downloads: 12770, Citations: 69

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194546 | GNL-LP: 1132360781 | Meta-Data: tex xml rdf rss

    Abstract: Dew computing is an emerging new research area with great potential in applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potential of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.

    BibTex:

        @Article{OJCC_2016v3i1n02_YingweiWang,
            title     = {Definition and Categorization of Dew Computing},
            author    = {Yingwei Wang},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {1--7},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194546},
            urn       = {urn:nbn:de:101:1-201705194546},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Dew computing is an emerging new research area with great potential in applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potential of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.}
        }
    
  4.  Open Access 

    Semantic Blockchain to Improve Scalability in the Internet of Things

    Michele Ruta, Floriano Scioscia, Saverio Ieva, Giovanna Capurso, Eugenio Di Sciascio

    Open Journal of Internet Of Things (OJIOT), 3(1), Pages 46-61, 2017, Downloads: 13080, Citations: 48

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613488 | GNL-LP: 1137820225 | Meta-Data: tex xml rdf rss

    Abstract: Generally scarce computational and memory resource availability is a well-known problem for the IoT, whose intrinsic volatility makes complex applications unfeasible. Noteworthy efforts in overcoming unpredictability (particularly in the case of large dimensions) are the ones integrating Knowledge Representation technologies to build the so-called Semantic Web of Things (SWoT). In spite of the advanced discovery features they allow, transactions in the SWoT still suffer from the lack of viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears interesting from this perspective: a semantic resource/service discovery layer built upon a basic blockchain infrastructure gains consensus validation. This paper proposes a novel Service-Oriented Architecture (SOA) based on a semantic blockchain for registration, discovery, selection and payment. Such operations are implemented as smart contracts, allowing distributed execution and trust. The reported experiments provide an early assessment of the sustainability of the proposal.

    BibTex:

        @Article{OJIOT_2017v3i1n05_Ruta,
            title     = {Semantic Blockchain to Improve Scalability in the Internet of Things},
            author    = {Michele Ruta and
                         Floriano Scioscia and
                         Saverio Ieva and
                         Giovanna Capurso and
                         Eugenio Di Sciascio},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2017},
            volume    = {3},
            number    = {1},
            pages     = {46--61},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613488},
            urn       = {urn:nbn:de:101:1-2017080613488},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Generally scarce computational and memory resource availability is a well-known problem for the IoT, whose intrinsic volatility makes complex applications unfeasible. Noteworthy efforts in overcoming unpredictability (particularly in the case of large dimensions) are the ones integrating Knowledge Representation technologies to build the so-called Semantic Web of Things (SWoT). In spite of the advanced discovery features they allow, transactions in the SWoT still suffer from the lack of viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears interesting from this perspective: a semantic resource/service discovery layer built upon a basic blockchain infrastructure gains consensus validation. This paper proposes a novel Service-Oriented Architecture (SOA) based on a semantic blockchain for registration, discovery, selection and payment. Such operations are implemented as smart contracts, allowing distributed execution and trust. The reported experiments provide an early assessment of the sustainability of the proposal.}
        }
    
  5.  Open Access 

    The Potential of Printed Electronics and Personal Fabrication in Driving the Internet of Things

    Paulo Rosa, António Câmara, Cristina Gouveia

    Open Journal of Internet Of Things (OJIOT), 1(1), Pages 16-36, 2015, Downloads: 13921, Citations: 43

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704244933 | GNL-LP: 1130621448 | Meta-Data: tex xml rdf rss

    Abstract: In the early nineties, Mark Weiser, a chief scientist at the Xerox Palo Alto Research Center (PARC), wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. Within this vision, computers and other digital technologies are integrated seamlessly into everyday objects and activities, hidden from our senses whenever not used or needed. An important facet of this vision is the interconnectivity of the various physical devices, which creates an Internet of Things. With the advent of Printed Electronics, new ways to link the physical and digital worlds became available. Common printing technologies, such as screen, flexography, and inkjet printing, are now starting to be used not only to mass-produce extremely thin, flexible and cost-effective electronic circuits, but also to introduce electronic functionality into objects where it was previously unavailable. In turn, the growing accessibility of Personal Fabrication tools is leading to the democratization of the creation of technology by enabling end-users to design and produce their own material goods according to their needs. This paper presents a survey of commonly used technologies and foreseen applications in the field of Printed Electronics and Personal Fabrication, with emphasis on the potential to drive the Internet of Things.

    BibTex:

        @Article{OJIOT_2015v1i1n03_Rosa,
            title     = {The Potential of Printed Electronics and Personal Fabrication in Driving the Internet of Things},
            author    = {Paulo Rosa and
                         Ant\'{o}nio C\^{a}mara and
                         Cristina Gouveia},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2015},
            volume    = {1},
            number    = {1},
            pages     = {16--36},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244933},
            urn       = {urn:nbn:de:101:1-201704244933},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {In the early nineties, Mark Weiser, a chief scientist at the Xerox Palo Alto Research Center (PARC), wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. Within this vision, computers and other digital technologies are integrated seamlessly into everyday objects and activities, hidden from our senses whenever not used or needed. An important facet of this vision is the interconnectivity of the various physical devices, which creates an Internet of Things. With the advent of Printed Electronics, new ways to link the physical and digital worlds became available. Common printing technologies, such as screen, flexography, and inkjet printing, are now starting to be used not only to mass-produce extremely thin, flexible and cost-effective electronic circuits, but also to introduce electronic functionality into objects where it was previously unavailable. In turn, the growing accessibility of Personal Fabrication tools is leading to the democratization of the creation of technology by enabling end-users to design and produce their own material goods according to their needs. This paper presents a survey of commonly used technologies and foreseen applications in the field of Printed Electronics and Personal Fabrication, with emphasis on the potential to drive the Internet of Things.}
        }
    
  6.  Open Access 

    Detecting Data-Flow Errors in BPMN 2.0

    Silvia von Stackelberg, Susanne Putze, Jutta Mülle, Klemens Böhm

    Open Journal of Information Systems (OJIS), 1(2), Pages 1-19, 2014, Downloads: 12912, Citations: 41

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017052611934 | GNL-LP: 1132836972 | Meta-Data: tex xml rdf rss

    Abstract: Data-flow errors in BPMN 2.0 process models, such as missing or unused data, lead to undesired process executions. In particular, since BPMN 2.0 with a standardized execution semantics allows specifying alternatives for data as well as optional data, identifying missing or unused data systematically is difficult. In this paper, we propose an approach for detecting data-flow errors in BPMN 2.0 process models. We formalize BPMN process models by mapping them to Petri Nets and unfolding the execution semantics regarding data. We define a set of anti-patterns representing data-flow errors of BPMN 2.0 process models. By employing the anti-patterns, our tool performs model checking for the unfolded Petri Nets. The evaluation shows that it detects all data-flow errors identified by hand, and so improves process quality.

    BibTex:

        @Article{OJIS-2014v1i2n01_Stackelberg,
            title     = {Detecting Data-Flow Errors in BPMN 2.0},
            author    = {Silvia von Stackelberg and
                         Susanne Putze and
                         Jutta M{\"u}lle and
                         Klemens B{\"o}hm},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {1--19},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017052611934},
            urn       = {urn:nbn:de:101:1-2017052611934},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Data-flow errors in BPMN 2.0 process models, such as missing or unused data, lead to undesired process executions. In particular, since BPMN 2.0 with a standardized execution semantics allows specifying alternatives for data as well as optional data, identifying missing or unused data systematically is difficult. In this paper, we propose an approach for detecting data-flow errors in BPMN 2.0 process models. We formalize BPMN process models by mapping them to Petri Nets and unfolding the execution semantics regarding data. We define a set of anti-patterns representing data-flow errors of BPMN 2.0 process models. By employing the anti-patterns, our tool performs model checking for the unfolded Petri Nets. The evaluation shows that it detects all data-flow errors identified by hand, and so improves process quality.}
        }
    
  7.  Open Access 

    A Highly Scalable IoT Architecture through Network Function Virtualization

    Igor Miladinovic, Sigrid Schefer-Wenzl

    Open Journal of Internet Of Things (OJIOT), 3(1), Pages 127-135, 2017, Downloads: 7016, Citations: 22

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613543 | GNL-LP: 1137820284 | Meta-Data: tex xml rdf rss

    Abstract: As the number of devices for Internet of Things (IoT) is rapidly growing, existing communication infrastructures are forced to continually evolve. The next generation network infrastructure is expected to be virtualized and able to integrate different kinds of information technology resources. Network Functions Virtualization (NFV) is one of the leading concepts facilitating the operation of network services in a scalable manner. In this paper, we present an architecture involving NFV to meet the requirements of highly scalable IoT scenarios. We highlight the benefits and challenges of our approach for IoT stakeholders. Finally, the paper illustrates our vision of how the proposed architecture can be applied in the context of a state-of-the-art high-tech operating room, which we are going to realize in future work.

    BibTex:

        @Article{OJIOT_2017v3i1n11_Miladinovic,
            title     = {A Highly Scalable IoT Architecture through Network Function Virtualization},
            author    = {Igor Miladinovic and
                         Sigrid Schefer-Wenzl},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2017},
            volume    = {3},
            number    = {1},
            pages     = {127--135},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613543},
            urn       = {urn:nbn:de:101:1-2017080613543},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {As the number of devices for Internet of Things (IoT) is rapidly growing, existing communication infrastructures are forced to continually evolve. The next generation network infrastructure is expected to be virtualized and able to integrate different kinds of information technology resources. Network Functions Virtualization (NFV) is one of the leading concepts facilitating the operation of network services in a scalable manner. In this paper, we present an architecture involving NFV to meet the requirements of highly scalable IoT scenarios. We highlight the benefits and challenges of our approach for IoT stakeholders. Finally, the paper illustrates our vision of how the proposed architecture can be applied in the context of a state-of-the-art high-tech operating room, which we are going to realize in future work.}
        }
    
  8.  Open Access 

    Block-level De-duplication with Encrypted Data

    Pasquale Puzio, Refik Molva, Melek Önen, Sergio Loureiro

    Open Journal of Cloud Computing (OJCC), 1(1), Pages 10-18, 2014, Downloads: 12453, Citations: 20

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194448 | GNL-LP: 1132360617 | Meta-Data: tex xml rdf rss

    Abstract: Deduplication is a storage-saving technique which has been adopted by many cloud storage providers such as Dropbox. The simple principle of deduplication is that duplicate data uploaded by different users are stored only once. Unfortunately, deduplication is not compatible with encryption. As a scheme that allows deduplication of encrypted data segments, we propose ClouDedup, a secure and efficient storage service which guarantees block-level deduplication and data confidentiality at the same time. ClouDedup strengthens convergent encryption by employing a component that implements an additional encryption operation and an access control mechanism. We also propose to introduce an additional component which is in charge of providing a key management system for data blocks together with the actual deduplication operation. We show that the overhead introduced by these new components is minimal and does not impact the overall storage and computational costs.

    BibTex:

        @Article{OJCC-v1i1n02_Puzio,
            title     = {Block-level De-duplication with Encrypted Data},
            author    = {Pasquale Puzio and
                         Refik Molva and
                         Melek {\"O}nen and
                         Sergio Loureiro},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {10--18},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194448},
            urn       = {urn:nbn:de:101:1-201705194448},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Deduplication is a storage-saving technique which has been adopted by many cloud storage providers such as Dropbox. The simple principle of deduplication is that duplicate data uploaded by different users are stored only once. Unfortunately, deduplication is not compatible with encryption. As a scheme that allows deduplication of encrypted data segments, we propose ClouDedup, a secure and efficient storage service which guarantees block-level deduplication and data confidentiality at the same time. ClouDedup strengthens convergent encryption by employing a component that implements an additional encryption operation and an access control mechanism. We also propose to introduce an additional component which is in charge of providing a key management system for data blocks together with the actual deduplication operation. We show that the overhead introduced by these new components is minimal and does not impact the overall storage and computational costs.}
        }
    
  9.  Open Access 

    Past, Present and Future of the ContextNet IoMT Middleware

    Markus Endler, Francisco Silva e Silva

    Open Journal of Internet Of Things (OJIOT), 4(1), Pages 7-23, 2018, Downloads: 3601, Citations: 19

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519323267622857 | GNL-LP: 1163928682 | Meta-Data: tex xml rdf rss

    Abstract: The Internet of Things with support for mobility is already transforming many application domains, such as smart cities and homes, environmental monitoring, health care, manufacturing, logistics, public security etc., in that it allows collecting and analyzing data from the environment, people and machines, and implementing some form of control or steering on these elements of the physical world. But in order to speed up the development of applications for the Internet of Mobile Things (IoMT), some middleware is required. This paper summarizes seven years of research and development on the ContextNet middleware aimed at IoMT, and discusses what we achieved and what we have learned so far. We also share our vision of possible future challenges and developments in the Internet of Mobile Things.

    BibTex:

        @Article{OJIOT_2018v4i1n02_Endler,
            title     = {Past, Present and Future of the ContextNet IoMT Middleware},
            author    = {Markus Endler and
                         Francisco Silva e Silva},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {7--23},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519323267622857},
            urn       = {urn:nbn:de:101:1-2018080519323267622857},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The Internet of Things with support for mobility is already transforming many application domains, such as smart cities and homes, environmental monitoring, health care, manufacturing, logistics, public security etc., in that it allows collecting and analyzing data from the environment, people and machines, and implementing some form of control or steering on these elements of the physical world. But in order to speed up the development of applications for the Internet of Mobile Things (IoMT), some middleware is required. This paper summarizes seven years of research and development on the ContextNet middleware aimed at IoMT, and discusses what we achieved and what we have learned so far. We also share our vision of possible future challenges and developments in the Internet of Mobile Things.}
        }
    
  10.  Open Access 

    Cloud-Scale Entity Resolution: Current State and Open Challenges

    Xiao Chen, Eike Schallehn, Gunter Saake

    Open Journal of Big Data (OJBD), 4(1), Pages 30-51, 2018, Downloads: 5502, Citations: 18

    Full-Text: pdf | URN: urn:nbn:de:101:1-201804155766 | GNL-LP: 1156154723 | Meta-Data: tex xml rdf rss

    Abstract: Entity resolution (ER) is a process to identify records in information systems that refer to the same real-world entity. Because the data volume has grown so large in the last two decades, parallel techniques are called upon to satisfy the ER requirements of high performance and scalability. The development of parallel ER has reached a relatively prosperous stage, and has found its way into several applications. In this work, we first comprehensively survey the state of the art of parallel ER approaches. From the comprehensive overview, we then extract the classification criteria of parallel ER, and classify and compare these approaches based on these criteria. Finally, we identify open research questions and challenges and discuss potential solutions and further research potential in this field.

    BibTex:

        @Article{OJBD_2018v4i1n03_Chen,
            title     = {Cloud-Scale Entity Resolution: Current State and Open Challenges},
            author    = {Xiao Chen and
                         Eike Schallehn and
                         Gunter Saake},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {30--51},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201804155766},
            urn       = {urn:nbn:de:101:1-201804155766},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Entity resolution (ER) is a process to identify records in information systems that refer to the same real-world entity. Because the data volume has grown so large in the last two decades, parallel techniques are called upon to satisfy the ER requirements of high performance and scalability. The development of parallel ER has reached a relatively prosperous stage, and has found its way into several applications. In this work, we first comprehensively survey the state of the art of parallel ER approaches. From the comprehensive overview, we then extract the classification criteria of parallel ER, and classify and compare these approaches based on these criteria. Finally, we identify open research questions and challenges and discuss potential solutions and further research potential in this field.}
        }
    
  11.  Open Access 

    Modelling the Integrated QoS for Wireless Sensor Networks with Heterogeneous Data Traffic

    Syarifah Ezdiani, Adnan Al-Anbuky

    Open Journal of Internet Of Things (OJIOT), 1(1), Pages 1-15, 2015, Downloads: 12112, Citations: 17

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704244946 | GNL-LP: 1130621979 | Meta-Data: tex xml rdf rss

    Abstract: The future of the Internet of Things (IoT) is envisaged to consist of a large number of wireless resource-constrained devices connected to the Internet. Moreover, many novel real-world services offered by IoT devices are realized by wireless sensor networks (WSNs). Integrating WSNs into the Internet has therefore brought forward the requirement of an end-to-end quality of service (QoS) guarantee. In this paper, the QoS requirements for the WSN-Internet integration are investigated by first distinguishing the Internet QoS from the WSN QoS. Next, this study focuses on WSN applications that involve traffic with different levels of importance, and thus the way real-time traffic and delay-tolerant traffic are handled to guarantee QoS in the network is studied. Additionally, an overview of the integration strategies is given, and the delay-tolerant network (DTN) gateway, being one of the desirable approaches for integrating WSNs into the Internet, is discussed. Next, the implementation of the service model is presented, considering both traffic prioritization and service differentiation. Based on the simulation results in OPNET Modeler, it is observed that real-time traffic achieves a low bounded delay while delay-tolerant traffic experiences fewer packet drops, indicating that the needs of real-time and delay-tolerant traffic can be better met by treating both packet types differently. Furthermore, a vehicular network is used as an example case to describe the applicability of the framework in a real IoT application environment, followed by a discussion on the future work of this research.

    BibTex:

        @Article{OJIOT_2015v1i1n02_Syarifah,
            title     = {Modelling the Integrated QoS for Wireless Sensor Networks with Heterogeneous Data Traffic},
            author    = {Syarifah Ezdiani and
                         Adnan Al-Anbuky},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2015},
            volume    = {1},
            number    = {1},
            pages     = {1--15},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244946},
            urn       = {urn:nbn:de:101:1-201704244946},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The future of the Internet of Things (IoT) is envisaged to consist of a large number of wireless resource-constrained devices connected to the Internet. Moreover, many novel real-world services offered by IoT devices are realized by wireless sensor networks (WSNs). Integrating WSNs into the Internet has therefore brought forward the requirement of an end-to-end quality of service (QoS) guarantee. In this paper, the QoS requirements for the WSN-Internet integration are investigated by first distinguishing the Internet QoS from the WSN QoS. Next, this study focuses on WSN applications that involve traffic with different levels of importance, and thus the way real-time traffic and delay-tolerant traffic are handled to guarantee QoS in the network is studied. Additionally, an overview of the integration strategies is given, and the delay-tolerant network (DTN) gateway, being one of the desirable approaches for integrating WSNs into the Internet, is discussed. Next, the implementation of the service model is presented, considering both traffic prioritization and service differentiation. Based on the simulation results in OPNET Modeler, it is observed that real-time traffic achieves a low bounded delay while delay-tolerant traffic experiences fewer packet drops, indicating that the needs of real-time and delay-tolerant traffic can be better met by treating both packet types differently. Furthermore, a vehicular network is used as an example case to describe the applicability of the framework in a real IoT application environment, followed by a discussion on the future work of this research.}
        }
    
  12.  Open Access 

    Machine Learning on Large Databases: Transforming Hidden Markov Models to SQL Statements

    Dennis Marten, Andreas Heuer

    Open Journal of Databases (OJDB), 4(1), Pages 22-42, 2017, Downloads: 5752, Citations: 16

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017100112181 | GNL-LP: 1140718215 | Meta-Data: tex xml rdf rss

    Abstract: Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real-life applications has increased even further within the last years. Nowadays, it is possible to gather massive amounts of data at any time at comparatively little cost. While this availability of data could be used to develop complex models, their implementation is often hindered by limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or using parallelization techniques like the MapReduce framework. However, these options might be too cost-intensive, not suitable, or simply too time-consuming to learn and realize. Following the premise that developers usually are not SQL experts, we would like to discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we show in particular how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for an (inter-operator and intra-operator) parallel execution on parallel DBMSs as a second step of our research, which is not part of this paper.

    BibTex:

        @Article{OJDB_2017v4i1n02_Marten,
            title     = {Machine Learning on Large Databases: Transforming Hidden Markov Models to SQL Statements},
            author    = {Dennis Marten and
                         Andreas Heuer},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2017},
            volume    = {4},
            number    = {1},
            pages     = {22--42},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017100112181},
            urn       = {urn:nbn:de:101:1-2017100112181},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real life applications has even increased within the last years. Nowadays, it is possible to gather massive amounts of data at any time with comparatively little costs. While this availability of data could be used to develop complex models, its implementation is often narrowed because of limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or use parallelization techniques like the MapReduce framework. Anyhow, these options might be too cost intensive, not suitable, or even too time expensive to learn and realize. Following the premise that developers usually are not SQL experts we would like to discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for a (inter-operator and intra-operator) parallel execution on parallel DBMS as a second step of our research, not being part of this paper.}
        }
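The paper's R-to-SQL transformation is not reproduced here, but the core idea, expressing an HMM recurrence as relational joins plus aggregation, can be illustrated with a toy forward-algorithm step. All table names, column names and probabilities below are hypothetical.

```python
import sqlite3

# Illustrative sketch (not the paper's code): one forward-algorithm step
# of an HMM as a single SQL aggregation over hypothetical tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE alpha (state TEXT, prob REAL);          -- forward probs at time t
    CREATE TABLE trans (s_from TEXT, s_to TEXT, p REAL); -- transition matrix
    CREATE TABLE emit  (state TEXT, obs TEXT, p REAL);   -- emission matrix
""")
con.executemany("INSERT INTO alpha VALUES (?, ?)", [("H", 0.6), ("C", 0.4)])
con.executemany("INSERT INTO trans VALUES (?, ?, ?)",
                [("H", "H", 0.7), ("H", "C", 0.3),
                 ("C", "H", 0.4), ("C", "C", 0.6)])
con.executemany("INSERT INTO emit VALUES (?, ?, ?)",
                [("H", "walk", 0.6), ("C", "walk", 0.1)])

# alpha_{t+1}(j) = emit(j, obs) * SUM_i alpha_t(i) * trans(i, j)
rows = con.execute("""
    SELECT t.s_to AS state, e.p * SUM(a.prob * t.p) AS prob
    FROM alpha a
    JOIN trans t ON t.s_from = a.state
    JOIN emit  e ON e.state = t.s_to AND e.obs = 'walk'
    GROUP BY t.s_to
""").fetchall()
print(dict(rows))  # new forward probabilities for states H and C
```

The same pattern generalizes: each step of the recurrence becomes one join-and-aggregate statement, which a parallel DBMS can then execute with both inter-operator and intra-operator parallelism.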
    
  13.  Open Access 

    Eventual Consistent Databases: State of the Art

    Mawahib Musa Elbushra, Jan Lindström

    Open Journal of Databases (OJDB), 1(1), Pages 26-41, 2014, Downloads: 19433, Citations: 15

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194582 | GNL-LP: 1132360846 | Meta-Data: tex xml rdf rss

    Abstract: One of the challenges of cloud programming is to achieve the right balance between availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly increasing in importance, acceptance and usage in major applications, which need partition tolerance and availability for scalability purposes, but sacrifice consistency (the CAP theorem). In these environments, the data accessed by users is stored in a highly available storage system, and thus the use of paradigms such as eventual consistency has become widespread. In this paper, we review the state-of-the-art database systems using eventual consistency from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency, and identify future research challenges for databases using eventual consistency.

    BibTex:

        @Article{OJDB-v1i1n03_Elbushra,
            title     = {Eventual Consistent Databases: State of the Art},
            author    = {Mawahib Musa Elbushra and
                         Jan Lindstr{\"o}m},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {26--41},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194582},
            urn       = {urn:nbn:de:101:1-201705194582},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {One of the challenges of cloud programming is to achieve the right balance between the availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly increasing in importance, acceptance and usage in major applications, which need the partition-tolerance and availability for scalability purposes, but sacrifice the consistency side (CAP theorem). In these environments, the data accessed by users is stored in a highly available storage system, thus the use of paradigms such as eventual consistency became more widespread. In this paper, we review the state-of-the-art database systems using eventual consistency from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency, and identify the future research challenges on the databases using eventual consistency.}
        }
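One of the convergence mechanisms the surveyed systems rely on, a last-write-wins register, can be sketched in a few lines. This is a generic illustration of eventual consistency, not code from the paper; the class and values are hypothetical.

```python
# Last-write-wins (LWW) register: replicas accept writes independently
# and converge by keeping the value with the newest timestamp.
class LWWRegister:
    def __init__(self):
        self.value, self.timestamp = None, 0

    def write(self, value, timestamp):
        if timestamp > self.timestamp:       # keep only the newest write
            self.value, self.timestamp = value, timestamp

    def merge(self, other):
        """Anti-entropy: pull the other replica's state into this one."""
        self.write(other.value, other.timestamp)

a, b = LWWRegister(), LWWRegister()
a.write("x=1", timestamp=10)   # write accepted locally at replica a
b.write("x=2", timestamp=12)   # concurrent write at replica b
a.merge(b); b.merge(a)         # replicas exchange state and converge
print(a.value, b.value)        # both replicas now agree: x=2 x=2
```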
    
  14.  Open Access 

    Accurate Distance Estimation between Things: A Self-correcting Approach

    Ho-sik Cho, Jianxun Ji, Zili Chen, Hyuncheol Park, Wonsuk Lee

    Open Journal of Internet Of Things (OJIOT), 1(2), Pages 19-27, 2015, Downloads: 26099, Citations: 15

    Full-Text: pdf | URN: urn:nbn:de:101:1-201704244959 | GNL-LP: 1130622525 | Meta-Data: tex xml rdf rss

    Abstract: This paper suggests a method to measure the physical distance between an IoT device (a Thing) and a mobile device (also a Thing) using BLE (Bluetooth Low Energy) interfaces with smaller distance errors. BLE is a well-known technology for low-power connectivity and is suitable for IoT devices as well as for proximity sensing within a range of several meters. Apple has already adopted the technique and enhanced it to provide subdivided proximity range levels. However, as it is also a variation of RSS-based distance estimation, Apple's iBeacon can only report an immediate, near or far status, but not a real and accurate distance. To provide more accurate distances using BLE, this paper introduces an additional self-correcting beacon to calibrate the reference distance and mitigate errors from environmental factors. By adopting the self-correcting beacon for measuring the distance, the average distance error is less than 10% within a range of 1.5 meters. Some considerations are presented to extend the range over which accurate distances can be obtained.

    BibTex:

        @Article{OJIOT_2015v1i2n03_Cho,
            title     = {Accurate Distance Estimation between Things: A Self-correcting Approach},
            author    = {Ho-sik Cho and
                         Jianxun Ji and
                         Zili Chen and
                         Hyuncheol Park and
                         Wonsuk Lee},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {19--27},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244959},
            urn       = {urn:nbn:de:101:1-201704244959},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper suggests a method to measure the physical distance between an IoT device (a Thing) and a mobile device (also a Thing) using BLE (Bluetooth Low-Energy profile) interfaces with smaller distance errors. BLE is a well-known technology for the low-power connectivity and suitable for IoT devices as well as for the proximity with the range of several meters. Apple has already adopted the technique and enhanced it to provide subdivided proximity range levels. However, as it is also a variation of RSS-based distance estimation, Apple's iBeacon could only provide immediate, near or far status but not a real and accurate distance. To provide more accurate distance using BLE, this paper introduces additional self-correcting beacon to calibrate the reference distance and mitigate errors from environmental factors. By adopting self-correcting beacon for measuring the distance, the average distance error shows less than 10\% within the range of 1.5 meters. Some considerations are presented to extend the range to be able to get more accurate distances.}
        }
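The paper's self-correcting beacon itself is not reproduced here; as background, RSS-based ranging of this kind typically builds on the log-distance path-loss model below, where the 1 m reference power is exactly the constant a self-correcting reference beacon would recalibrate at run time. The parameter values are illustrative.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model used by RSS-based ranging.

    tx_power_dbm is the calibrated RSSI at 1 m; a self-correcting
    reference beacon would refine this constant at run time instead
    of trusting a factory value. path_loss_exp ~2 models free space.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_distance(-59.0))  # at the 1 m reference RSSI -> 1.0 m
print(rssi_to_distance(-65.0))  # weaker signal -> roughly 2 m
```

Environmental factors change both the reference power and the path-loss exponent, which is why a fixed calibration drifts and a run-time correction helps.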
    
  15.  Open Access 

    Big Data in the Cloud: A Survey

    Pedro Caldeira Neves, Jorge Bernardino

    Open Journal of Big Data (OJBD), 1(2), Pages 1-18, 2015, Downloads: 12474, Citations: 14

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194365 | GNL-LP: 1132360528 | Meta-Data: tex xml rdf rss

    Abstract: Big Data has become a hot topic across several business areas requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities and enables corporations to consume resources in a pay-as-you-go model, making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, the Cloud can scale very easily, be highly available and provide massive storage capacity and processing power. This paper surveys existing database models to store and process Big Data within a Cloud environment. In particular, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop, Twister, and other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.

    BibTex:

        @Article{OJBD_2015v1i2n02_Neves,
            title     = {Big Data in the Cloud: A Survey},
            author    = {Pedro Caldeira Neves and
                         Jorge Bernardino},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {1--18},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194365},
            urn       = {urn:nbn:de:101:1-201705194365},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Big Data has become a hot topic across several business areas requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities and enables corporations to consume resources in a pay-as-you-go model making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, Cloud can scale very easily, be highly available and provide massive storage capacity and processing power. This paper surveys existing databases models to store and process Big Data within a Cloud environment. Particularly, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop, Twister, and other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.}
        }
    
  16.  Open Access 

    Software-Defined Wireless Sensor Networks Approach: Southbound Protocol and Its Performance Evaluation

    Cintia B. Margi, Renan C. A. Alves, Gustavo A. Nunez Segura, Doriedson A. G. Oliveira

    Open Journal of Internet Of Things (OJIOT), 4(1), Pages 99-108, 2018, Downloads: 4144, Citations: 14

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519305710189607 | GNL-LP: 1163928550 | Meta-Data: tex xml rdf rss

    Abstract: Software Defined Networking (SDN) has been identified as a promising network paradigm for Wireless Sensor Networks (WSN) and the Internet of Things. It is a key tool for enabling Sensing as a Service, which provides infrastructure sharing, thus reducing operational costs. While a few proposals on SDN southbound protocols designed for WSN are found in the literature, they lack adequate performance analysis. In this paper, we review the main features of IT-SDN and present a performance evaluation with all sensing nodes transmitting data periodically. We conducted a number of experiments varying the number of nodes and assessing the impact of the flow table's maximum capacity. We assessed the metrics of data delivery, data delay, control overhead and energy consumption in order to show the tradeoffs of using IT-SDN in comparison to the IETF RPL routing protocol. We discuss the main challenges still faced by IT-SDN in larger WSNs, and how they could be addressed to make IT-SDN use worthwhile.

    BibTex:

        @Article{OJIOT_2018v4i1n08_Margi,
            title     = {Software-Defined Wireless Sensor Networks Approach: Southbound Protocol and Its Performance Evaluation},
            author    = {Cintia B. Margi and
                         Renan C. A. Alves and
                         Gustavo A. Nunez Segura and
                         Doriedson A. G. Oliveira},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {99--108},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519305710189607},
            urn       = {urn:nbn:de:101:1-2018080519305710189607},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Software Defined Networking (SDN) has been identified as a promising network paradigm for Wireless Sensor Networks (WSN) and the Internet of Things. It is a key tool for enabling Sensing as a Service, which provides infrastructure sharing thus reducing operational costs. While a few proposals on SDN southbound protocols designed for WSN are found in the literature, they lack adequate performance analysis. In this paper, we review ITSDN main features and present a performance evaluation with all the sensing nodes transmitting data periodically. We conducted a number of experiments varying the number of nodes and assessing the impact of flow table maximum capacity. We assessed the metrics of data delivery, data delay, control overhead and energy consumption in order to show the tradeoffs of using IT-SDN in comparison to the IETF RPL routing protocol. We discuss the main challenges still faced by IT-SDN in larger WSN, and how they could be addressed to make IT-SDN use worthwhile.}
        }
    
  17.  Open Access 

    Smartwatch-Based IoT Fall Detection Application

    Anne H. Ngu, Po-Teng Tseng, Manvick Paliwal, Christopher Carpenter, Walker Stipe

    Open Journal of Internet Of Things (OJIOT), 4(1), Pages 87-98, 2018, Downloads: 7306, Citations: 13

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519304951282148 | GNL-LP: 1163928534 | Meta-Data: tex xml rdf rss

    Abstract: This paper proposes using only the streaming accelerometer data from a commodity smartwatch (IoT) device to detect falls. The smartwatch is paired with a smartphone as a means for performing the computation necessary for the prediction of falls in real time without incurring the latency of communicating with a cloud server, while also preserving data privacy. The majority of current fall detection applications require specially designed hardware and software, which makes them expensive and inaccessible to the general public. Moreover, a fall detection application that uses a wrist-worn smartwatch for data collection has the added benefit that it can be perceived as a piece of jewelry and is thus non-intrusive. We experimented with both Support Vector Machine and Naive Bayes machine learning algorithms for the creation of the fall model. We demonstrated that by adjusting the sampling frequency of the streaming data, computing acceleration features over a sliding window, and using a Naive Bayes machine learning model, we can obtain a true positive rate of fall detection in a real-world setting with 93.33% accuracy. Our results demonstrated that using a commodity smartwatch sensor can yield fall detection results that are competitive with those of expensive custom-made sensors.

    BibTex:

        @Article{OJIOT_2018v4i1n07_Ngu,
            title     = {Smartwatch-Based IoT Fall Detection Application},
            author    = {Anne H. Ngu and
                         Po-Teng Tseng and
                         Manvick Paliwal and
                         Christopher Carpenter and
                         Walker Stipe},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {87--98},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519304951282148},
            urn       = {urn:nbn:de:101:1-2018080519304951282148},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper proposes using only the streaming accelerometer data from a commodity-based smartwatch (IoT) device to detect falls. The smartwatch is paired with a smartphone as a means for performing the computation necessary for the prediction of falls in realtime without incurring latency in communicating with a cloud server while also preserving data privacy. The majority of current fall detection applications require specially designed hardware and software which make them expensive and inaccessible to the general public. Moreover, a fall detection application that uses a wrist worn smartwatch for data collection has the added benefit that it can be perceived as a piece of jewelry and thus non-intrusive. We experimented with both Support Vector Machine and Naive Bayes machine learning algorithms for the creation of the fall model. We demonstrated that by adjusting the sampling frequency of the streaming data, computing acceleration features over a sliding window, and using a Naive Bayes machine learning model, we can obtain the true positive rate of fall detection in real-world setting with 93.33\% accuracy. Our result demonstrated that using a commodity-based smartwatch sensor can yield fall detection results that are competitive with those of custom made expensive sensors.}
        }
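A minimal sketch of the feature pipeline the abstract describes: acceleration magnitudes are computed over a sliding window, and a fall candidate shows a free-fall dip followed by an impact spike. The window size and thresholds here are made up for illustration; the paper uses a trained Naive Bayes model rather than fixed thresholds.

```python
import math

def magnitudes(samples):
    """Acceleration magnitude per (x, y, z) sample, in units of g."""
    return [math.sqrt(x*x + y*y + z*z) for x, y, z in samples]

def window_features(mags, size=5):
    """Min/max/mean over each sliding window of magnitudes."""
    feats = []
    for i in range(len(mags) - size + 1):
        w = mags[i:i + size]
        feats.append({"min": min(w), "max": max(w), "mean": sum(w) / size})
    return feats

def looks_like_fall(feat, free_fall_g=0.4, impact_g=2.5):
    # Hypothetical rule standing in for the trained classifier:
    # near-free-fall dip followed by a hard impact within one window.
    return feat["min"] < free_fall_g and feat["max"] > impact_g

# A free-fall dip followed by an impact spike (z-axis only, in g):
samples = [(0, 0, 1.0), (0, 0, 0.2), (0, 0, 0.1), (0, 0, 3.0), (0, 0, 1.0)]
feats = window_features(magnitudes(samples))
print(any(looks_like_fall(f) for f in feats))  # True
```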
    
  18.  Open Access 

    An Introductory Approach to Risk Visualization as a Service

    Victor Chang

    Open Journal of Cloud Computing (OJCC), 1(1), Pages 1-9, 2014, Downloads: 9615, Citations: 13

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194429 | GNL-LP: 1132360595 | Meta-Data: tex xml rdf rss

    Abstract: This paper introduces Risk Visualization as a Service (RVaaS) and presents the motivation, rationale, methodology, Cloud APIs used, operations and examples of using RVaaS. Risks can be calculated within seconds and presented in the form of visualization to ensure that unexploited areas are exposed. RVaaS operates in two phases. The first phase includes risk modeling with the Black-Scholes Model (BSM), creating 3D visualization and analysis. The second phase consists of calculating key derivatives such as Delta and Theta for financial modeling. Risks presented in visualization allow potential investors and stakeholders to keep track of the status of risk with regard to time, prices and volatility. Our approach can improve accuracy and performance. Results in experiments show that RVaaS can perform up to 500,000 simulations and complete all of them within 24 seconds for up to 50 time steps. We also introduce financial stock market analysis (FSMA), which can fully blend with RVaaS, and demonstrate two examples that can help investors make better decisions based on pricing and market volatility information. RVaaS provides a structured way to deploy low-cost, high-quality risk assessment and to support real-time calculations.

    BibTex:

        @Article{OJCC-v1i1n01_Chang,
            title     = {An Introductory Approach to Risk Visualization as a Service},
            author    = {Victor Chang},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {1--9},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194429},
            urn       = {urn:nbn:de:101:1-201705194429},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper introduces the Risk Visualization as a Service (RVaaS) and presents the motivation, rationale, methodology, Cloud APIs used, operations and examples of using RVaaS. Risks can be calculated within seconds and presented in the form of Visualization to ensure that unexploited areas are exposed. RVaaS operates in two phases. The first phase includes the risk modeling in Black Scholes Model (BSM), creating 3D Visualization and Analysis. The second phase consists of calculating key derivatives such as Delta and Theta for financial modeling. Risks presented in visualization allow the potential investors and stakeholders to keep track of the status of risk with regard to time, prices and volatility. Our approach can improve accuracy and performance. Results in experiments show that RVaaS can perform up to 500,000 simulations and complete all simulations within 24 seconds for time steps of up to 50. We also introduce financial stock market analysis (FSMA) that can fully blend with RVaaS and demonstrate two examples that can help investors make better decision based on the pricing and market volatility information. RVaaS provides a structured way to deploy low cost, high quality risk assessment and support real-time calculations.}
        }
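The two derivatives the abstract mentions, Delta and Theta, have closed forms under the Black-Scholes model. A standard textbook implementation for a European call is sketched below; this is not RVaaS's actual code, and the parameter values are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def call_delta_theta(S, K, r, sigma, T):
    """Black-Scholes Delta and Theta (per year) of a European call."""
    d1 = (math.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    delta = norm_cdf(d1)
    theta = (-S * norm_pdf(d1) * sigma / (2 * math.sqrt(T))
             - r * K * math.exp(-r * T) * norm_cdf(d2))
    return delta, theta

# Illustrative at-the-money call: spot 100, strike 100, 5% rate,
# 20% volatility, one year to expiry.
delta, theta = call_delta_theta(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(delta, 4))  # ~0.6368
print(theta)            # negative: the option loses value as time passes
```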
    
  19.  Open Access 

    Designing a Benchmark for the Assessment of Schema Matching Tools

    Fabien Duchateau, Zohra Bellahsene

    Open Journal of Databases (OJDB), 1(1), Pages 3-25, 2014, Downloads: 10259, Citations: 13

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194573 | GNL-LP: 1132360838 | Meta-Data: tex xml rdf rss

    Abstract: Over the years, many schema matching approaches have been developed to discover correspondences between schemas. Although this task is crucial in data integration, its evaluation, both in terms of matching quality and time performance, is still performed manually. Indeed, there is no common platform which gathers a collection of schema matching datasets to fulfil this goal. Another problem concerns measuring the post-match effort, a human cost that schema matching approaches aim to reduce. Consequently, we propose XBenchMatch, a schema matching benchmark with available datasets and new measures to evaluate this manual post-match effort and the quality of integrated schemas. We finally report the results obtained by different approaches, namely COMA++, Similarity Flooding and YAM. We show that such a benchmark is required to understand the advantages and failures of schema matching approaches. Therefore, it could help an end-user to select a schema matching tool which covers his/her needs.

    BibTex:

        @Article{OJDB-v1i1n02_Duchateau,
            title     = {Designing a Benchmark for the Assessment of Schema Matching Tools},
            author    = {Fabien Duchateau and
                         Zohra Bellahsene},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {3--25},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194573},
            urn       = {urn:nbn:de:101:1-201705194573},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Over the years, many schema matching approaches have been developed to discover correspondences between schemas. Although this task is crucial in data integration, its evaluation, both in terms of matching quality and time performance, is still manually performed. Indeed, there is no common platform which gathers a collection of schema matching datasets to fulfil this goal. Another problem deals with the measuring of the post-match effort, a human cost that schema matching approaches aim at reducing. Consequently, we propose XBenchMatch, a schema matching benchmark with available datasets and new measures to evaluate this manual post-match effort and the quality of integrated schemas. We finally report the results obtained by different approaches, namely COMA++, Similarity Flooding and YAM. We show that such a benchmark is required to understand the advantages and failures of schema matching approaches. Therefore, it could help an end-user to select a schema matching tool which covers his/her needs.}
        }
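Benchmarks of this kind typically score a tool's discovered correspondences against an expert gold standard using precision, recall and F-measure. The sketch below uses hypothetical schemas and matches, not XBenchMatch's actual datasets or its post-match-effort measures.

```python
# Standard matching-quality measures: compare the correspondences a tool
# found against an expert gold standard of correct correspondences.
def match_quality(found, gold):
    found, gold = set(found), set(gold)
    tp = len(found & gold)                       # correct correspondences
    precision = tp / len(found) if found else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Hypothetical gold standard and tool output (attribute pairs):
gold  = {("name", "fullname"), ("addr", "address"), ("dob", "birthdate")}
found = {("name", "fullname"), ("addr", "address"), ("dob", "deptno")}
p, r, f = match_quality(found, gold)
print(p, r, f)  # 2 of 3 matches correct on both sides
```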
    
  20.  Open Access 

    Mitigating Radio Interference in Large IoT Networks through Dynamic CCA Adjustment

    Tommy Sparber, Carlo Alberto Boano, Salil S. Kanhere, Kay Römer

    Open Journal of Internet Of Things (OJIOT), 3(1), Pages 103-113, 2017, Downloads: 7311, Citations: 11

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613511 | GNL-LP: 113782025X | Meta-Data: tex xml rdf rss

    Abstract: The performance of low-power wireless sensor networks used to build Internet of Things applications often suffers from radio interference generated by co-located wireless devices or from jammers maliciously placed in their proximity. As IoT devices typically operate in unsupervised large-scale installations, and as radio interference is typically localized and hence affects only a portion of the nodes in the network, it is important to give low-power wireless sensors and actuators the ability to autonomously mitigate the impact of surrounding interference. In this paper we present our approach DynCCA, which dynamically adapts the clear channel assessment threshold of IoT devices to minimize the impact of malicious or unintentional interference on both network reliability and energy efficiency. First, we describe how varying the clear channel assessment threshold at run-time using only information computed locally can help to minimize the impact of unintentional interference from surrounding devices and to escape jamming attacks. We then present the design and implementation of DynCCA on top of ContikiMAC and evaluate its performance on wireless sensor nodes equipped with IEEE 802.15.4 radios. Our experimental investigation shows that the use of DynCCA in dense IoT networks can increase the packet reception rate by up to 50% and reduce the energy consumption by a factor of 4.

    BibTex:

        @Article{OJIOT_2017v3i1n09_Sparber,
            title     = {Mitigating Radio Interference in Large IoT Networks through Dynamic CCA Adjustment},
            author    = {Tommy Sparber and
                         Carlo Alberto Boano and
                         Salil S. Kanhere and
                         Kay R{\"o}mer},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2017},
            volume    = {3},
            number    = {1},
            pages     = {103--113},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613511},
            urn       = {urn:nbn:de:101:1-2017080613511},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The performance of low-power wireless sensor networks used to build Internet of Things applications often suffers from radio interference generated by co-located wireless devices or from jammers maliciously placed in their proximity. As IoT devices typically operate in unsupervised large-scale installations, and as radio interference is typically localized and hence affects only a portion of the nodes in the network, it is important to give low-power wireless sensors and actuators the ability to autonomously mitigate the impact of surrounding interference. In this paper we present our approach DynCCA, which dynamically adapts the clear channel assessment threshold of IoT devices to minimize the impact of malicious or unintentional interference on both network reliability and energy efficiency. First, we describe how varying the clear channel assessment threshold at run-time using only information computed locally can help to minimize the impact of unintentional interference from surrounding devices and to escape jamming attacks. We then present the design and implementation of DynCCA on top of ContikiMAC and evaluate its performance on wireless sensor nodes equipped with IEEE 802.15.4 radios. Our experimental investigation shows that the use of DynCCA in dense IoT networks can increase the packet reception rate by up to 50\% and reduce the energy consumption by a factor of 4.}
        }
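A simplified sketch of the underlying idea (not the actual DynCCA algorithm): estimate the local noise floor and keep the clear channel assessment threshold a small margin above it, clamped to the radio's supported range, so that ambient interference no longer blocks transmissions while genuine traffic is still detected. All values are illustrative.

```python
# Toy CCA-threshold adaptation: sit a fixed margin above the measured
# noise floor (median of recent RSSI samples on an idle channel), then
# clamp to the limits a typical IEEE 802.15.4 radio supports.
def adapt_cca_threshold(noise_floor_samples_dbm, margin_db=3.0,
                        min_dbm=-90.0, max_dbm=-45.0):
    samples = sorted(noise_floor_samples_dbm)
    noise_floor = samples[len(samples) // 2]      # median: robust to spikes
    threshold = noise_floor + margin_db           # sit just above interference
    return max(min_dbm, min(max_dbm, threshold))  # clamp to radio limits

print(adapt_cca_threshold([-92, -91, -90, -89, -88]))  # quiet channel: -87.0
print(adapt_cca_threshold([-60, -58, -57, -59, -61]))  # noisy channel: -56.0
```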
    
  21.  Open Access 

    An Efficient Approach for Cost Optimization of the Movement of Big Data

    Prasad Teli, Manoj V. Thomas, K. Chandrasekaran

    Open Journal of Big Data (OJBD), 1(1), Pages 4-15, 2015, Downloads: 10701, Citations: 11

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194335 | GNL-LP: 113236048X | Meta-Data: tex xml rdf rss

    Abstract: With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by service providers to host their applications. A lot of research has been done to improve the network utilization of WAN (Wide Area Network) and it has achieved considerable success over the traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement, such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As the various applications generating big data are hosted in geographically distributed data centers, they individually collect large volumes of data in the form of application data as well as logs. This paper mainly focuses on the challenge of moving big data from one data center to another. We provide an efficient algorithm for optimizing the cost of moving big data from one data center to another in an offline environment. This approach uses a graph model for data centers in the cloud, and results show that the adopted mechanism provides a better solution to minimize the cost of data movement.
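    Under the abstract's graph formulation, with data centers as vertices and transfer costs on edges, the cheapest movement plan reduces to a shortest-path problem. A minimal sketch using Dijkstra's algorithm on a hypothetical topology follows; the paper's actual offline algorithm is not reproduced here.

```python
import heapq

# Cheapest data-movement cost between two data centers, modeled as a
# shortest path over a directed graph of per-GB transfer costs.
def cheapest_transfer(graph, src, dst):
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")  # unreachable

graph = {  # hypothetical per-GB transfer costs between data centers
    "us-east": [("eu-west", 5.0), ("us-west", 2.0)],
    "us-west": [("ap-south", 4.0)],
    "eu-west": [("ap-south", 2.0)],
}
print(cheapest_transfer(graph, "us-east", "ap-south"))  # 6.0, via us-west
```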

    BibTex:

        @Article{OJBD_2015v1i1n02_Teli,
            title     = {An Efficient Approach for Cost Optimization of the Movement of Big Data},
            author    = {Prasad Teli and
                         Manoj V. Thomas and
                         K. Chandrasekaran},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {1},
            pages     = {4--15},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194335},
            urn       = {urn:nbn:de:101:1-201705194335},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by the service providers to host their applications. A lot of research has been done to improve the network utilization of WAN (Wide Area Network) and it has achieved considerable success over the traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement, such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As various applications generating the big data are hosted in geographically distributed data centers, they individually collect large volumes of data in the form of application data as well as logs. This paper mainly focuses on the challenge of moving big data from one data center to another. We provide an efficient algorithm for the optimization of cost in the movement of big data from one data center to another for an offline environment. This approach uses a graph model for data centers in the cloud, and results show that the adopted mechanism provides a better solution to minimize the cost of data movement.}
        }
    
  22.  Open Access 

    Relationship between Externalized Knowledge and Evaluation in the Process of Creating Strategic Scenarios

    Teruaki Hayashi, Yukio Ohsawa

    Open Journal of Information Systems (OJIS), 2(1), Pages 29-40, 2015, Downloads: 6332, Citations: 10

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194751 | GNL-LP: 1132361079 | Meta-Data: tex xml rdf rss

    Abstract: Social systems are changing so rapidly that it is important for humans to make decisions considering uncertainty. A scenario is information about a series of events/actions, which supports decision makers in taking actions and reducing risks. We propose Action Planning for refining simple ideas into practical scenarios (strategic scenarios). Frameworks and items on Action Planning Sheets provide participants with organized constraints, to lead to creative and logical thinking for solving real issues in businesses or daily life. Communication among participants who have preset roles leads to the externalization of knowledge. In this study, we set three criteria for evaluating strategic scenarios: novelty, utility, and feasibility, and examine the relationship between externalized knowledge and the evaluation values, in order to consider factors which affect the evaluations. Regarding a word contained in roles and scenarios as the smallest unit of knowledge, we calculate Relativeness between roles and scenarios. The results of our experiment suggest that the lower the relativeness of a strategic scenario, the higher the strategic scenario is evaluated in novelty. In addition, in the evaluation of utility, a scenario satisfying a covert requirement tends to be estimated higher. Moreover, we found that the externalization of stakeholders may affect the realization of strategic scenarios.

    BibTex:

        @Article{OJIS_2015v2i1n03_Hayashi,
            title     = {Relationship between Externalized Knowledge and Evaluation in the Process of Creating Strategic Scenarios},
            author    = {Teruaki Hayashi and
                         Yukio Ohsawa},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {29--40},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194751},
            urn       = {urn:nbn:de:101:1-201705194751},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Social systems are changing so rapidly that it is important for humans to make decisions considering uncertainty. A scenario is information about a series of events/actions, which supports decision makers in taking actions and reducing risks. We propose Action Planning for refining simple ideas into practical scenarios (strategic scenarios). Frameworks and items on Action Planning Sheets provide participants with organized constraints, to lead to creative and logical thinking for solving real issues in businesses or daily life. Communication among participants who have preset roles leads to the externalization of knowledge. In this study, we set three criteria for evaluating strategic scenarios: novelty, utility, and feasibility, and examine the relationship between externalized knowledge and the evaluation values, in order to consider factors which affect the evaluations. Regarding a word contained in roles and scenarios as the smallest unit of knowledge, we calculate Relativeness between roles and scenarios. The results of our experiment suggest that the lower the relativeness of a strategic scenario, the higher the strategic scenario is evaluated in novelty. In addition, in the evaluation of utility, a scenario satisfying a covert requirement tends to be estimated higher. Moreover, we found that the externalization of stakeholders may affect the realization of strategic scenarios.}
        }
    
  23.  Open Access 

    MapReduce-based Solutions for Scalable SPARQL Querying

    José M. Giménez-Garcia, Javier D. Fernández, Miguel A. Martínez-Prieto

    Open Journal of Semantic Web (OJSW), 1(1), Pages 1-18, 2014, Downloads: 11267, Citations: 10

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194824 | GNL-LP: 1132361168 | Meta-Data: tex xml rdf rss

    Abstract: The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of great quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.

    BibTex:

        @Article{OJSW-v1i1n02_Garcia,
            title     = {MapReduce-based Solutions for Scalable SPARQL Querying},
            author    = {Jos\'{e} M. Gim\'{e}nez-Garcia and
                         Javier D. Fern\'{a}ndez and
                         Miguel A. Mart\'{i}nez-Prieto},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {1--18},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194824},
            urn       = {urn:nbn:de:101:1-201705194824},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of great quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.}
        }
    
  24.  Open Access 

    Doing More with the Dew: A New Approach to Cloud-Dew Architecture

    David Edward Fisher, Shuhui Yang

    Open Journal of Cloud Computing (OJCC), 3(1), Pages 8-19, 2016, Downloads: 9936, Citations: 9

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194535 | GNL-LP: 1132360773 | Meta-Data: tex xml rdf rss

    Abstract: While the popularity of cloud computing is exploding, a new network computing paradigm is just beginning. In this paper, we examine this exciting area of research known as dew computing and propose a new design of cloud-dew architecture. Instead of hosting only one dew server on a user's PC - as adopted in the current dewsite application - our design promotes the hosting of multiple dew servers instead, one for each installed domain. Our design intends to improve upon existing cloud-dew architecture by providing significantly increased freedom in dewsite development, while also automating the chore of managing dewsite content based on the user's interests and browsing habits. Other noteworthy benefits, all at no added cost to dewsite users, are briefly explored as well.

    BibTex:

        @Article{OJCC_2016v3i1n02_Fisher,
            title     = {Doing More with the Dew: A New Approach to Cloud-Dew Architecture},
            author    = {David Edward Fisher and
                         Shuhui Yang},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {8--19},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194535},
            urn       = {urn:nbn:de:101:1-201705194535},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {While the popularity of cloud computing is exploding, a new network computing paradigm is just beginning. In this paper, we examine this exciting area of research known as dew computing and propose a new design of cloud-dew architecture. Instead of hosting only one dew server on a user's PC - as adopted in the current dewsite application - our design promotes the hosting of multiple dew servers instead, one for each installed domain. Our design intends to improve upon existing cloud-dew architecture by providing significantly increased freedom in dewsite development, while also automating the chore of managing dewsite content based on the user's interests and browsing habits. Other noteworthy benefits, all at no added cost to dewsite users, are briefly explored as well.}
        }
    
  25.  Open Access 

    Sensing as a Service: Secure Wireless Sensor Network Infrastructure Sharing for the Internet of Things

    Cintia B. Margi, Renan C. A. Alves, Johanna Sepulveda

    Open Journal of Internet Of Things (OJIOT), 3(1), Pages 91-102, 2017, Downloads: 5914, Citations: 9

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613467 | GNL-LP: 1137820209 | Meta-Data: tex xml rdf rss

    Abstract: Internet of Things (IoT) and Wireless Sensor Networks (WSN) are composed of devices capable of sensing/actuation, communication and processing. They are valuable technology for the development of applications in several areas, such as environmental, industrial and urban monitoring and process control. Given the different protocols and technologies used for communication, the resource-constrained nature of the devices, and the high connectivity and security requirements of the applications, the main challenges that need to be addressed include: secure communication between IoT devices, network resource management and the protected implementation of the security mechanisms. In this paper, we present a secure Software-Defined Networking (SDN) based framework that includes: communication protocols, node task programming middleware, communication and computation resource management features and security services. The communication layer for the constrained devices considers IT-SDN as its basis. Concerning security, we address the main services, the type of algorithms to achieve them, and why their secure implementation is needed. Lastly, we showcase how the Sensing as a Service paradigm could enable WSN usage in more environments.

    BibTex:

        @Article{OJIOT_2017v3i1n08_Margi,
            title     = {Sensing as a Service: Secure Wireless Sensor Network Infrastructure Sharing for the Internet of Things},
            author    = {Cintia B. Margi and
                         Renan C. A. Alves and
                         Johanna Sepulveda},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2017},
            volume    = {3},
            number    = {1},
            pages     = {91--102},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613467},
            urn       = {urn:nbn:de:101:1-2017080613467},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Internet of Things (IoT) and Wireless Sensor Networks (WSN) are composed of devices capable of sensing/actuation, communication and processing. They are valuable technology for the development of applications in several areas, such as environmental, industrial and urban monitoring and process control. Given the different protocols and technologies used for communication, the resource-constrained nature of the devices, and the high connectivity and security requirements of the applications, the main challenges that need to be addressed include: secure communication between IoT devices, network resource management and the protected implementation of the security mechanisms. In this paper, we present a secure Software-Defined Networking (SDN) based framework that includes: communication protocols, node task programming middleware, communication and computation resource management features and security services. The communication layer for the constrained devices considers IT-SDN as its basis. Concerning security, we address the main services, the type of algorithms to achieve them, and why their secure implementation is needed. Lastly, we showcase how the Sensing as a Service paradigm could enable WSN usage in more environments.}
        }
    
  26.  Open Access 

    Perceived Sociability of Use and Individual Use of Social Networking Sites - A Field Study of Facebook Use in the Arctic

    Juhani Iivari

    Open Journal of Information Systems (OJIS), 1(1), Pages 23-53, 2014, Downloads: 11095, Citations: 9

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194708 | GNL-LP: 1132360978 | Meta-Data: tex xml rdf rss

    Abstract: This paper investigates determinants of individual use of social network sites (SNSs). It introduces a new construct, Perceived Sociability of Use (PSOU), to explain the use of such computer mediated communication applications. Based on a field study of 113 Facebook users it shows that PSOU in the sense of maintaining social contacts is a significant predictor of Perceived Benefits (PB), Perceived Enjoyment (PE), attitude toward use and intention to use. Inspired by Benbasat and Barki, this paper also attempts to answer questions "what makes the system useful", "what makes the system enjoyable to use" and "what makes the system sociable to use". As a consequence it pays special focus on systems characteristics of IT applications as potential predictors of PSOU, PB and PE, introducing seven such designable qualities (user-to-user interactivity, user identifiability, system quality, information quality, usability, user-to-system interactivity, and aesthetics). The results indicate that especially satisfaction with user-to-user interactivity is a significant determinant of PSOU, and that satisfactions with six of these seven designable qualities have significant paths in the proposed nomological network.

    BibTex:

        @Article{OJIS-v1i1n03_Iivari,
            title     = {Perceived Sociability of Use and Individual Use of Social Networking Sites - A Field Study of Facebook Use in the Arctic},
            author    = {Juhani Iivari},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {23--53},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194708},
            urn       = {urn:nbn:de:101:1-201705194708},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper investigates determinants of individual use of social network sites (SNSs). It introduces a new construct, Perceived Sociability of Use (PSOU), to explain the use of such computer mediated communication applications. Based on a field study of 113 Facebook users it shows that PSOU in the sense of maintaining social contacts is a significant predictor of Perceived Benefits (PB), Perceived Enjoyment (PE), attitude toward use and intention to use. Inspired by Benbasat and Barki, this paper also attempts to answer questions "what makes the system useful", "what makes the system enjoyable to use" and "what makes the system sociable to use". As a consequence it pays special focus on systems characteristics of IT applications as potential predictors of PSOU, PB and PE, introducing seven such designable qualities (user-to-user interactivity, user identifiability, system quality, information quality, usability, user-to-system interactivity, and aesthetics). The results indicate that especially satisfaction with user-to-user interactivity is a significant determinant of PSOU, and that satisfactions with six of these seven designable qualities have significant paths in the proposed nomological network.}
        }
    
  27.  Open Access 

    Measuring and analyzing German and Spanish customer satisfaction of using the iPhone 4S Mobile Cloud service

    Victor Chang

    Open Journal of Cloud Computing (OJCC), 1(1), Pages 19-26, 2014, Downloads: 7310, Citations: 8

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194450 | GNL-LP: 1132360633 | Meta-Data: tex xml rdf rss

    Abstract: This paper presents the customer satisfaction analysis for measuring popularity in the Mobile Cloud, which is an emerging area in the Cloud and Big Data Computing. Organizational Sustainability Modeling (OSM) is the proposed method used in this research. The twelve-month of German and Spanish consumer data are used for the analysis to investigate the return and risk status associated with the ratings of customer satisfaction in the iPhone 4S Mobile Cloud services. Results show that there is a decline in the satisfaction ratings in Germany and Spain due to economic downturn and competitions in the market, which support our hypothesis. Key outputs have been explained and they confirm that all analysis and interpretations fulfill the criteria for OSM. The use of statistical and visualization method proposed by OSM can expose unexploited data and allows the stakeholders to understand the status of return and risk of their Cloud strategies easier than the use of other data analysis.

    BibTex:

        @Article{OJCC-v1i1n03_Chang,
            title     = {Measuring and analyzing German and Spanish customer satisfaction of using the iPhone 4S Mobile Cloud service},
            author    = {Victor Chang},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {19--26},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194450},
            urn       = {urn:nbn:de:101:1-201705194450},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper presents the customer satisfaction analysis for measuring popularity in the Mobile Cloud, which is an emerging area in the Cloud and Big Data Computing. Organizational Sustainability Modeling (OSM) is the proposed method used in this research. The twelve-month of German and Spanish consumer data are used for the analysis to investigate the return and risk status associated with the ratings of customer satisfaction in the iPhone 4S Mobile Cloud services. Results show that there is a decline in the satisfaction ratings in Germany and Spain due to economic downturn and competitions in the market, which support our hypothesis. Key outputs have been explained and they confirm that all analysis and interpretations fulfill the criteria for OSM. The use of statistical and visualization method proposed by OSM can expose unexploited data and allows the stakeholders to understand the status of return and risk of their Cloud strategies easier than the use of other data analysis.}
        }
    
  28.  Open Access 

    Experimentation and Analysis of Ensemble Deep Learning in IoT Applications

    Taylor Mauldin, Anne H. Ngu, Vangelis Metsis, Marc E. Canby, Jelena Tesic

    Open Journal of Internet Of Things (OJIOT), 5(1), Pages 133-149, 2019, Downloads: 4002, Citations: 8

    Full-Text: pdf | URN: urn:nbn:de:101:1-2019092919352344146661 | GNL-LP: 119598636X | Meta-Data: tex xml rdf rss

    Abstract: This paper presents an experimental study of Ensemble Deep Learning (DL) techniques for the analysis of time series data on IoT devices. We have shown in our earlier work that DL demonstrates superior performance compared to traditional machine learning techniques on fall detection applications due to the fact that important features in time series data can be learned and need not be determined manually by the domain expert. However, DL networks generally require large datasets for training. In the health care domain, such as real-time smartwatch-based fall detection, there are no publicly available large annotated datasets that can be used for training, due to the nature of the problem (i.e. a fall is not a common event). Moreover, fall data is also inherently noisy since motions generated by the wrist-worn smartwatch can be mistaken for a fall. This paper explores combining DL (Recurrent Neural Network) with ensemble techniques (Stacking and AdaBoosting) using a fall detection application as a case study. We conducted a series of experiments using two different datasets of simulated falls for training various ensemble models. Our results show that an ensemble of deep learning models combined by the stacking ensemble technique outperforms a single deep learning model trained on the same data samples, and thus, may be better suited for small-size datasets.

    BibTex:

        @Article{OJIOT_2019v5i1n11_Mauldin,
            title     = {Experimentation and Analysis of Ensemble Deep Learning in IoT Applications},
            author    = {Taylor Mauldin and
                         Anne H. Ngu and
                         Vangelis Metsis and
                         Marc E. Canby and
                         Jelena Tesic},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2019},
            volume    = {5},
            number    = {1},
            pages     = {133--149},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2019092919352344146661},
            urn       = {urn:nbn:de:101:1-2019092919352344146661},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This paper presents an experimental study of Ensemble Deep Learning (DL) techniques for the analysis of time series data on IoT devices. We have shown in our earlier work that DL demonstrates superior performance compared to traditional machine learning techniques on fall detection applications due to the fact that important features in time series data can be learned and need not be determined manually by the domain expert. However, DL networks generally require large datasets for training. In the health care domain, such as real-time smartwatch-based fall detection, there are no publicly available large annotated datasets that can be used for training, due to the nature of the problem (i.e. a fall is not a common event). Moreover, fall data is also inherently noisy since motions generated by the wrist-worn smartwatch can be mistaken for a fall. This paper explores combining DL (Recurrent Neural Network) with ensemble techniques (Stacking and AdaBoosting) using a fall detection application as a case study. We conducted a series of experiments using two different datasets of simulated falls for training various ensemble models. Our results show that an ensemble of deep learning models combined by the stacking ensemble technique outperforms a single deep learning model trained on the same data samples, and thus, may be better suited for small-size datasets.}
        }
    
  29.  Open Access 

    An Architecture for Distributed Video Stream Processing in IoMT Systems

    Aluizio Rocha Neto, Thiago P. Silva, Thais V. Batista, Flavia C. Delicato, Paulo F. Pires, Frederico Lopes

    Open Journal of Internet Of Things (OJIOT), 6(1), Pages 89-104, 2020, Downloads: 3191, Citations: 8

    Full-Text: pdf | URN: urn:nbn:de:101:1-2020080219341417043601 | GNL-LP: 1215016921 | Meta-Data: tex xml rdf rss

    Abstract: In Internet of Multimedia Things (IoMT) systems, Internet cameras installed in buildings and streets are major sources of sensing data. From these large-scale video streams, it is possible to infer various information providing the current status of the monitored environments. Some events of interest that have occurred in these observed locations produce insights that might demand near real-time responses from the system. In this context, the event processing depends on data freshness, and computation time, otherwise, the processing results and activities become less valuable or even worthless. An encouraging plan to support the computational demand for latency-sensitive applications of largely geo-distributed systems is applying Edge Computing resources to perform the video stream processing stages. However, some of these stages use deep learning methods for the detection and identification of objects of interest, which are voracious consumers of computational resources. To address these issues, this work proposes an architecture to distribute the video stream processing stages in multiple tasks running on different edge nodes, reducing network overhead and consequent delays. The Multilevel Information Fusion Edge Architecture (MELINDA) encapsulates the data analytics algorithms provided by machine learning methods in different types of processing tasks organized by multiple data-abstraction levels. This distribution strategy, combined with the new category of Edge AI hardware specifically designed to develop smart systems, is a promising approach to address the resource limitations of edge devices.

    BibTex:

        @Article{OJIOT_2020v6i1n09_Neto,
            title     = {An Architecture for Distributed Video Stream Processing in IoMT Systems},
            author    = {Aluizio Rocha Neto and
                         Thiago P. Silva and
                         Thais V. Batista and
                         Flavia C. Delicato and
                         Paulo F. Pires and
                         Frederico Lopes},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2020},
            volume    = {6},
            number    = {1},
            pages     = {89--104},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2020080219341417043601},
            urn       = {urn:nbn:de:101:1-2020080219341417043601},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {In Internet of Multimedia Things (IoMT) systems, Internet cameras installed in buildings and streets are major sources of sensing data. From these large-scale video streams, it is possible to infer various information providing the current status of the monitored environments. Some events of interest that have occurred in these observed locations produce insights that might demand near real-time responses from the system. In this context, the event processing depends on data freshness, and computation time, otherwise, the processing results and activities become less valuable or even worthless. An encouraging plan to support the computational demand for latency-sensitive applications of largely geo-distributed systems is applying Edge Computing resources to perform the video stream processing stages. However, some of these stages use deep learning methods for the detection and identification of objects of interest, which are voracious consumers of computational resources. To address these issues, this work proposes an architecture to distribute the video stream processing stages in multiple tasks running on different edge nodes, reducing network overhead and consequent delays. The Multilevel Information Fusion Edge Architecture (MELINDA) encapsulates the data analytics algorithms provided by machine learning methods in different types of processing tasks organized by multiple data-abstraction levels. This distribution strategy, combined with the new category of Edge AI hardware specifically designed to develop smart systems, is a promising approach to address the resource limitations of edge devices.}
        }
    
  30.  Open Access 

    Towards Adaptive Actors for Scalable IoT Applications at the Edge

    Jonathan Fürst, Mauricio Fadel Argerich, Kaifei Chen, Ernö Kovacs

    Open Journal of Internet Of Things (OJIOT), 4(1), Pages 70-86, 2018, Downloads: 4494, Citations: 7

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519303887853107 | GNL-LP: 1163928518 | Meta-Data: tex xml rdf rss

    Abstract: Traditional device-cloud architectures are not scalable to the size of future IoT deployments. While edge and fog-computing principles seem like a tangible solution, they increase the programming effort of IoT systems, do not provide the same elasticity guarantees as the cloud and are of much greater hardware heterogeneity. Future IoT applications will be highly distributed and place their computational tasks on any combination of end-devices (sensor nodes, smartphones, drones), edge and cloud resources in order to achieve their application goals. These complex distributed systems require a programming model that allows developers to implement their applications in a simple way (i.e., focus on the application logic) and an execution framework that runs these applications resiliently with a high resource efficiency, while maximizing application utility. Towards such distributed execution runtime, we propose Nandu, an actor based system that adapts and migrates tasks dynamically using developer provided hints as seed information. Nandu allows developers to focus on sequential application logic and transforms their application into distributed, adaptive actors. The resulting actors support fine-grained entry points for the execution environment. These entry points allow local schedulers to adapt actors seamlessly to the current context, while optimizing the overall application utility according to developer provided requirements.

    BibTex:

        @Article{OJIOT_2018v4i1n06_Fuerst,
            title     = {Towards Adaptive Actors for Scalable IoT Applications at the Edge},
            author    = {Jonathan F{\"u}rst and
                         Mauricio Fadel Argerich and
                         Kaifei Chen and
                         Ern{\"o} Kovacs},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {70--86},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519303887853107},
            urn       = {urn:nbn:de:101:1-2018080519303887853107},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Traditional device-cloud architectures are not scalable to the size of future IoT deployments. While edge and fog-computing principles seem like a tangible solution, they increase the programming effort of IoT systems, do not provide the same elasticity guarantees as the cloud and are of much greater hardware heterogeneity. Future IoT applications will be highly distributed and place their computational tasks on any combination of end-devices (sensor nodes, smartphones, drones), edge and cloud resources in order to achieve their application goals. These complex distributed systems require a programming model that allows developers to implement their applications in a simple way (i.e., focus on the application logic) and an execution framework that runs these applications resiliently with a high resource efficiency, while maximizing application utility. Towards such a distributed execution runtime, we propose Nandu, an actor-based system that adapts and migrates tasks dynamically using developer-provided hints as seed information. Nandu allows developers to focus on sequential application logic and transforms their application into distributed, adaptive actors. The resulting actors support fine-grained entry points for the execution environment. These entry points allow local schedulers to adapt actors seamlessly to the current context, while optimizing the overall application utility according to developer-provided requirements.}
        }
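Nandu's runtime is not specified in this listing; as a rough illustration of the actor abstraction the abstract describes (sequential developer logic wrapped into a message-driven actor with a single inbox), here is a minimal Python sketch. All names (`Actor`, `send`, `stop`) are hypothetical stand-ins, not Nandu's API:

```python
import queue
import threading

class Actor:
    """Minimal actor: developer-provided sequential logic, driven by messages."""

    def __init__(self, handler):
        self.handler = handler            # sequential application logic
        self.inbox = queue.Queue()        # messages are processed one at a time
        self.results = []
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        """Asynchronously deliver a message to the actor's inbox."""
        self.inbox.put(msg)

    def _run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:               # sentinel: shut the actor down
                break
            self.results.append(self.handler(msg))

    def stop(self):
        self.inbox.put(None)
        self._thread.join()
```

Because each actor serializes its message handling, the developer writes only the `handler` function; scheduling, placement, and migration decisions can then be made by the runtime without touching application logic.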
    
  31.  Open Access 

    Combining Process Guidance and Industrial Feedback for Successfully Deploying Big Data Projects

    Christophe Ponsard, Mounir Touzani, Annick Majchrowski

    Open Journal of Big Data (OJBD), 3(1), Pages 26-41, 2017, Downloads: 5545, Citations: 7

    Full-Text: pdf | URN: urn:nbn:de:101:1-201712245446 | GNL-LP: 1149497165 | Meta-Data: tex xml rdf rss

    Abstract: Companies are faced with the challenge of handling increasing amounts of digital data to run or improve their business. Although a large set of technical solutions are available to manage such Big Data, many companies lack the maturity to manage that kind of projects, which results in a high failure rate. This paper aims at providing better process guidance for a successful deployment of Big Data projects. Our approach is based on the combination of a set of methodological bricks documented in the literature from early data mining projects to nowadays. It is complemented by learned lessons from pilots conducted in different areas (IT, health, space, food industry) with a focus on two pilots giving a concrete vision of how to drive the implementation with emphasis on the identification of values, the definition of a relevant strategy, the use of an Agile follow-up and a progressive rise in maturity.

    BibTex:

        @Article{OJBD_2017v3i1n02_Ponsard,
            title     = {Combining Process Guidance and Industrial Feedback for Successfully Deploying Big Data Projects},
            author    = {Christophe Ponsard and
                         Mounir Touzani and
                         Annick Majchrowski},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2017},
            volume    = {3},
            number    = {1},
            pages     = {26--41},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201712245446},
            urn       = {urn:nbn:de:101:1-201712245446},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Companies are faced with the challenge of handling increasing amounts of digital data to run or improve their business. Although a large set of technical solutions are available to manage such Big Data, many companies lack the maturity to manage that kind of projects, which results in a high failure rate. This paper aims at providing better process guidance for a successful deployment of Big Data projects. Our approach is based on the combination of a set of methodological bricks documented in the literature from early data mining projects to nowadays. It is complemented by learned lessons from pilots conducted in different areas (IT, health, space, food industry) with a focus on two pilots giving a concrete vision of how to drive the implementation with emphasis on the identification of values, the definition of a relevant strategy, the use of an Agile follow-up and a progressive rise in maturity.}
        }
    
  32.  Open Access 

    Fuzzy Color Space for Apparel Coordination

    Pakizar Shamoi, Atsushi Inoue, Hiroharu Kawanaka

    Open Journal of Information Systems (OJIS), 1(2), Pages 20-28, 2014, Downloads: 9452, Citations: 7

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194710 | GNL-LP: 1132360994 | Meta-Data: tex xml rdf rss

    Abstract: Human perception of colors constitutes an important part of color theory. The applications of color science are truly omnipresent, and what impression colors make on humans plays a vital role in them. In this paper, we offer a novel approach for color information representation and processing using fuzzy sets and logic theory, which is extremely useful in modeling human impressions. Specifically, we use fuzzy mathematics to partition the gamut of feasible colors in HSI color space based on standard linguistic tags. The proposed method can be useful in various image processing applications involving query processing. We demonstrate its effectiveness in the implementation of a framework for apparel online shopping coordination based on a color scheme. It deserves attention, since there is always some uncertainty inherent in the description of apparel.

    BibTex:

        @Article{OJIS_2014v1i2n02_Shamoi,
            title     = {Fuzzy Color Space for Apparel Coordination},
            author    = {Pakizar Shamoi and
                         Atsushi Inoue and
                         Hiroharu Kawanaka},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2014},
            volume    = {1},
            number    = {2},
            pages     = {20--28},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194710},
            urn       = {urn:nbn:de:101:1-201705194710},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Human perception of colors constitutes an important part of color theory. The applications of color science are truly omnipresent, and what impression colors make on humans plays a vital role in them. In this paper, we offer a novel approach for color information representation and processing using fuzzy sets and logic theory, which is extremely useful in modeling human impressions. Specifically, we use fuzzy mathematics to partition the gamut of feasible colors in HSI color space based on standard linguistic tags. The proposed method can be useful in various image processing applications involving query processing. We demonstrate its effectiveness in the implementation of a framework for apparel online shopping coordination based on a color scheme. It deserves attention, since there is always some uncertainty inherent in the description of apparel.}
        }
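The kind of fuzzy partition of a color dimension that the abstract describes can be illustrated in miniature. The paper partitions the full HSI gamut; the sketch below covers only the hue circle, with hypothetical term boundaries and simple triangular membership functions:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic partition of the hue circle (degrees);
# "red" wraps around 0/360, hence the max over two triangles.
HUE_TERMS = {
    "red":     lambda h: max(tri(h, -60.0, 0.0, 60.0), tri(h, 300.0, 360.0, 420.0)),
    "yellow":  lambda h: tri(h, 0.0, 60.0, 120.0),
    "green":   lambda h: tri(h, 60.0, 120.0, 180.0),
    "cyan":    lambda h: tri(h, 120.0, 180.0, 240.0),
    "blue":    lambda h: tri(h, 180.0, 240.0, 300.0),
    "magenta": lambda h: tri(h, 240.0, 300.0, 360.0),
}

def fuzzify_hue(h):
    """Map a hue angle to the linguistic terms it belongs to, with degrees of membership."""
    return {term: f(h) for term, f in HUE_TERMS.items() if f(h) > 0.0}
```

A hue of 90 degrees, halfway between the hypothetical "yellow" and "green" peaks, belongs to both terms with membership 0.5 each, which is exactly the graded, overlapping assignment that crisp partitions cannot express.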
    
  33.  Open Access 

    Semantic Caching Framework: An FPGA-Based Application for IoT Security Monitoring

    Laurent d'Orazio, Julien Lallet

    Open Journal of Internet Of Things (OJIOT), 4(1), Pages 150-157, 2018, Downloads: 3863, Citations: 7

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519321445601568 | GNL-LP: 116392864X | Meta-Data: tex xml rdf rss

    Abstract: Security monitoring is a subdomain of cybersecurity which aims to guarantee the safety of systems by continuously monitoring unusual events. The development of the Internet of Things leads to huge amounts of information that is heterogeneous and needs to be efficiently managed. Cloud Computing provides software and hardware resources for large scale data management. However, the performance of sequences of on-line queries on long-term historical data may not be compatible with emergency security monitoring. This work aims to address this problem by proposing a semantic caching framework and its application to acceleration hardware with FPGA for fast- and accurate-enough log processing for various data stores and execution engines.

    BibTex:

        @Article{OJIOT_2018v4i1n13_Orazio,
            title     = {Semantic Caching Framework: An FPGA-Based Application for IoT Security Monitoring},
            author    = {Laurent d'Orazio and
                         Julien Lallet},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {150--157},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519321445601568},
            urn       = {urn:nbn:de:101:1-2018080519321445601568},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Security monitoring is a subdomain of cybersecurity which aims to guarantee the safety of systems by continuously monitoring unusual events. The development of the Internet of Things leads to huge amounts of information that is heterogeneous and needs to be efficiently managed. Cloud Computing provides software and hardware resources for large scale data management. However, the performance of sequences of on-line queries on long-term historical data may not be compatible with emergency security monitoring. This work aims to address this problem by proposing a semantic caching framework and its application to acceleration hardware with FPGA for fast- and accurate-enough log processing for various data stores and execution engines.}
        }
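The paper's FPGA pipeline is beyond a short snippet, but the semantic-caching idea itself (answer a new query from a cached result whose predicate subsumes it, instead of re-executing on the backend) can be sketched. The range-over-a-single-attribute cache below is a hypothetical simplification of that idea:

```python
class SemanticCache:
    """Cache query results keyed by the predicate range they cover."""

    def __init__(self):
        self._entries = {}  # (lo, hi) -> rows satisfying lo <= value <= hi

    def insert(self, lo, hi, rows):
        """Remember the result of a range query over one attribute."""
        self._entries[(lo, hi)] = rows

    def probe(self, lo, hi):
        """Semantic hit: a cached range subsumes [lo, hi], so trim its rows
        locally instead of re-running the query on the backend. Miss -> None."""
        for (clo, chi), rows in self._entries.items():
            if clo <= lo and hi <= chi:
                return [r for r in rows if lo <= r <= hi]
        return None
```

A query for [10, 20] after caching [0, 100] is answered entirely from the cache; a query for [50, 150] extends past the cached range, so it misses and must go to the store (a fuller design would fetch only the uncovered remainder).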
    
  34.  Open Access 

    Word Embeddings for Wine Recommender Systems Using Vocabularies of Experts and Consumers

    Christophe Cruz, Cyril Nguyen Van, Laurent Gautier

    Open Journal of Web Technologies (OJWT), 5(1), Pages 23-30, 2018, Downloads: 3733, Citations: 6

    Special Issue: Proceedings of the International Workshop on Web Data Processing & Reasoning (WDPAR 2018) in conjunction with the 41st German Conference on Artificial Intelligence (KI) in Berlin, Germany.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018093019302313586232 | GNL-LP: 1168144477 | Meta-Data: tex xml rdf rss

    Abstract: This vision paper proposes an approach to use the most advanced word embedding techniques to bridge the gap between the discourses of experts and non-experts and, more specifically, the terminologies used by the two communities. Word embeddings make it possible to find equivalent terms between experts and non-experts, by approaching the similarity between words or by revealing hidden semantic relations. Thus, these controlled vocabularies with their new semantic enrichments are exploited in a hybrid recommendation system incorporating a content-based ontology and a keyword-based ontology to obtain relevant wine recommendations regardless of the level of expertise of the end user. The major aim is to find a non-expert vocabulary from semantic rules to enrich the knowledge of the ontology and improve the indexing of the items (i.e. wine) and the recommendation process.

    BibTex:

        @Article{OJWT_2018v5i1n04_Cruz,
            title     = {Word Embeddings for Wine Recommender Systems Using Vocabularies of Experts and Consumers},
            author    = {Christophe Cruz and
                         Cyril Nguyen Van and
                         Laurent Gautier},
            journal   = {Open Journal of Web Technologies (OJWT)},
            issn      = {2199-188X},
            year      = {2018},
            volume    = {5},
            number    = {1},
            pages     = {23--30},
            note      = {Special Issue: Proceedings of the International Workshop on Web Data Processing \& Reasoning (WDPAR 2018) in conjunction with the 41st German Conference on Artificial Intelligence (KI) in Berlin, Germany.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018093019302313586232},
            urn       = {urn:nbn:de:101:1-2018093019302313586232},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {This vision paper proposes an approach to use the most advanced word embedding techniques to bridge the gap between the discourses of experts and non-experts and, more specifically, the terminologies used by the two communities. Word embeddings make it possible to find equivalent terms between experts and non-experts, by approaching the similarity between words or by revealing hidden semantic relations. Thus, these controlled vocabularies with their new semantic enrichments are exploited in a hybrid recommendation system incorporating a content-based ontology and a keyword-based ontology to obtain relevant wine recommendations regardless of the level of expertise of the end user. The major aim is to find a non-expert vocabulary from semantic rules to enrich the knowledge of the ontology and improve the indexing of the items (i.e. wine) and the recommendation process.}
        }
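The embedding-similarity step the abstract relies on (mapping an expert term to its closest consumer term by cosine similarity) can be sketched as follows. The vectors here are made-up toy values, not trained embeddings, and the vocabularies are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings for an expert (sommelier) and a consumer vocabulary.
EXPERT = {"tannic": [0.9, 0.1, 0.2]}
CONSUMER = {"dry": [0.85, 0.15, 0.25], "fruity": [0.1, 0.9, 0.3]}

def nearest_consumer_term(expert_term):
    """Equivalent consumer term = nearest neighbor in embedding space."""
    vec = EXPERT[expert_term]
    return max(CONSUMER, key=lambda t: cosine(vec, CONSUMER[t]))
```

With real embeddings trained on expert and consumer corpora, the same nearest-neighbor lookup lets the recommender translate between the two vocabularies when indexing and querying wines.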
    
  35.  Open Access 

    Towards Knowledge Infusion for Robust and Transferable Machine Learning in IoT

    Jonathan Fürst, Mauricio Fadel Argerich, Bin Cheng, Ernö Kovacs

    Open Journal of Internet Of Things (OJIOT), 6(1), Pages 24-34, 2020, Downloads: 2850, Citations: 6

    Full-Text: pdf | URN: urn:nbn:de:101:1-2020080219333632380804 | GNL-LP: 1215016840 | Meta-Data: tex xml rdf rss

    Abstract: Machine learning (ML) applications in Internet of Things (IoT) scenarios face the issue that supervision signals, such as labeled data, are scarce and expensive to obtain. For example, it often requires a human to manually label events in a data stream by observing the same events in the real world. In addition, the performance of trained models usually depends on a specific context: (1) location, (2) time and (3) data quality. This context is not static in reality, making it hard to achieve robust and transferable machine learning for IoT systems in practice. In this paper, we address these challenges with an envisioned method that we name Knowledge Infusion. First, we present two past case studies in which we combined external knowledge with traditional data-driven machine learning in IoT scenarios to ease the supervision effort: (1) a weak-supervision approach for the IoT domain to auto-generate labels based on external knowledge (e.g., domain knowledge) encoded in simple labeling functions. Our evaluation for transport mode classification achieves a micro-F1 score of 80.2%, with only seven labeling functions, on par with a fully supervised model that relies on hand-labeled data. (2) We introduce guiding functions to Reinforcement Learning (RL) to guide the agents' decisions and experience. In initial experiments, our guided reinforcement learning achieves more than three times higher reward in the beginning of its training than an agent with no external knowledge. We use the lessons learned from these experiences to develop our vision of knowledge infusion. In knowledge infusion, we aim to automate the inclusion of knowledge from existing knowledge bases and domain experts to combine it with traditional data-driven machine learning techniques during setup/training phase, but also during the execution phase.

    BibTex:

        @Article{OJIOT_2020v6i1n04_Fuerst,
            title     = {Towards Knowledge Infusion for Robust and Transferable Machine Learning in IoT},
            author    = {Jonathan F{\"u}rst and
                         Mauricio Fadel Argerich and
                         Bin Cheng and
                         Ern{\"o} Kovacs},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2020},
            volume    = {6},
            number    = {1},
            pages     = {24--34},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2020080219333632380804},
            urn       = {urn:nbn:de:101:1-2020080219333632380804},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Machine learning (ML) applications in Internet of Things (IoT) scenarios face the issue that supervision signals, such as labeled data, are scarce and expensive to obtain. For example, it often requires a human to manually label events in a data stream by observing the same events in the real world. In addition, the performance of trained models usually depends on a specific context: (1) location, (2) time and (3) data quality. This context is not static in reality, making it hard to achieve robust and transferable machine learning for IoT systems in practice. In this paper, we address these challenges with an envisioned method that we name Knowledge Infusion. First, we present two past case studies in which we combined external knowledge with traditional data-driven machine learning in IoT scenarios to ease the supervision effort: (1) a weak-supervision approach for the IoT domain to auto-generate labels based on external knowledge (e.g., domain knowledge) encoded in simple labeling functions. Our evaluation for transport mode classification achieves a micro-F1 score of 80.2\%, with only seven labeling functions, on par with a fully supervised model that relies on hand-labeled data. (2) We introduce guiding functions to Reinforcement Learning (RL) to guide the agents' decisions and experience. In initial experiments, our guided reinforcement learning achieves more than three times higher reward in the beginning of its training than an agent with no external knowledge. We use the lessons learned from these experiences to develop our vision of knowledge infusion. In knowledge infusion, we aim to automate the inclusion of knowledge from existing knowledge bases and domain experts to combine it with traditional data-driven machine learning techniques during setup/training phase, but also during the execution phase.}
        }
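The labeling-function idea from case study (1) can be sketched in a few lines. The feature names, thresholds, and the simple majority-vote combiner below are hypothetical stand-ins (the paper reports seven labeling functions and a transport-mode classifier; this sketch shows only the mechanism):

```python
from collections import Counter

ABSTAIN = None

# Hypothetical labeling functions encoding domain knowledge about trips.
def lf_fast(trip):
    """High average speed suggests a car."""
    return "car" if trip["avg_speed_kmh"] > 25 else ABSTAIN

def lf_slow(trip):
    """Walking pace suggests walking."""
    return "walk" if trip["avg_speed_kmh"] < 7 else ABSTAIN

def lf_frequent_stops(trip):
    """Regular stops at moderate speed suggest a bus."""
    return "bus" if trip["stops_per_km"] > 2 and trip["avg_speed_kmh"] <= 25 else ABSTAIN

LABELING_FUNCTIONS = [lf_fast, lf_slow, lf_frequent_stops]

def weak_label(trip):
    """Combine the non-abstaining votes by simple majority to auto-generate a label."""
    votes = [lf(trip) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN
```

Labels produced this way replace hand-labeling as the supervision signal for training a conventional classifier; weak-supervision systems typically replace the majority vote with a learned model of labeling-function accuracies.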
    
  36.  Open Access 

    Distributed Join Approaches for W3C-Conform SPARQL Endpoints

    Sven Groppe, Dennis Heinrich, Stefan Werner

    Open Journal of Semantic Web (OJSW), 2(1), Pages 30-52, 2015, Downloads: 11089, Citations: 6

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194910 | GNL-LP: 1132361303 | Meta-Data: tex xml rdf rss

    Presentation: Video

    Abstract: Currently many SPARQL endpoints are freely available and accessible without any costs to users: Everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about the performance arise. Especially when several datasets (locally and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints, which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and combination of the Semi- and Bitvector-Join. Finally we compare all the existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.

    BibTex:

        @Article{OJSW_2015v2i1n04_Groppe,
            title     = {Distributed Join Approaches for W3C-Conform SPARQL Endpoints},
            author    = {Sven Groppe and
                         Dennis Heinrich and
                         Stefan Werner},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {30--52},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194910},
            urn       = {urn:nbn:de:101:1-201705194910},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Currently many SPARQL endpoints are freely available and accessible without any costs to users: Everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about the performance arise. Especially when several datasets (locally and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints, which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and combination of the Semi- and Bitvector-Join. Finally we compare all the existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.}
        }
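The Bitvector-Join variants compared in the paper share one core step: one endpoint summarizes its join-variable bindings in a fixed-size bitvector, and the other endpoint uses that bitvector to discard bindings that cannot possibly join, before any full results are shipped. A schematic Python sketch of that step, with CRC32 as a stand-in hash and a deliberately tiny vector width (real systems use far more bits and often several hash functions, Bloom-filter style):

```python
import zlib

M = 64  # bitvector width in bits (toy value)

def _bit(value):
    """Deterministically map a binding value to one bit position."""
    return 1 << (zlib.crc32(str(value).encode("utf-8")) % M)

def build_bitvector(bindings):
    """Summarize one side's join-variable bindings as an M-bit vector."""
    bv = 0
    for v in bindings:
        bv |= _bit(v)
    return bv

def bitvector_filter(bindings, bv):
    """Keep only bindings whose bit is set in the other side's vector.
    False positives are possible (hash collisions), false negatives are not."""
    return [v for v in bindings if bv & _bit(v)]
```

In a distributed SPARQL join, the local side sends `build_bitvector(local_bindings)` to the remote endpoint, receives back only `bitvector_filter(remote_bindings, bv)`, and computes the exact join over this reduced set, trading a few spurious transfers for a much smaller message than shipping all bindings, as in the classic semi-join.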
    
  37.  Open Access 

    Towards a Large Scale IoT through Partnership, Incentive, and Services: A Vision, Architecture, and Future Directions

    Gowri Sankar Ramachandran, Bhaskar Krishnamachari

    Open Journal of Internet Of Things (OJIOT), 5(1), Pages 80-92, 2019, Downloads: 3100, Citations: 6

    Full-Text: pdf | URN: urn:nbn:de:101:1-2019092919345869785889 | GNL-LP: 1195986327 | Meta-Data: tex xml rdf rss

    Abstract: Internet of Things applications have been deployed and managed in small to medium scale deployments in industries and small segments of cities over the last decade. These real-world deployments not only helped the researchers and application developers to create protocols, standards, and frameworks but also helped them understand the challenges associated with the maintenance and management of IoT deployments in all kinds of operational environments. Despite the technological advancements and the deployment experiences, the technology failed to create a notable momentum towards large scale IoT applications involving thousands of IoT devices. We examine the reasons behind the lack of large scale deployments and the limitations of the contemporary IoT deployment model. In addition, we present an approach involving multiple stakeholders as a means to scale IoT applications to hundreds of devices. Furthermore, we argue that partnership, incentive mechanisms, privacy, and security frameworks are the critical factors for large scale IoT deployments of the future.

    BibTex:

        @Article{OJIOT_2019v5i1n07_Ramachandran,
            title     = {Towards a Large Scale IoT through Partnership, Incentive, and Services: A Vision, Architecture, and Future Directions},
            author    = {Gowri Sankar Ramachandran and
                         Bhaskar Krishnamachari},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2019},
            volume    = {5},
            number    = {1},
            pages     = {80--92},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2019092919345869785889},
            urn       = {urn:nbn:de:101:1-2019092919345869785889},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Internet of Things applications have been deployed and managed in small to medium scale deployments in industries and small segments of cities over the last decade. These real-world deployments not only helped the researchers and application developers to create protocols, standards, and frameworks but also helped them understand the challenges associated with the maintenance and management of IoT deployments in all kinds of operational environments. Despite the technological advancements and the deployment experiences, the technology failed to create a notable momentum towards large scale IoT applications involving thousands of IoT devices. We examine the reasons behind the lack of large scale deployments and the limitations of the contemporary IoT deployment model. In addition, we present an approach involving multiple stakeholders as a means to scale IoT applications to hundreds of devices. Furthermore, we argue that partnership, incentive mechanisms, privacy, and security frameworks are the critical factors for large scale IoT deployments of the future.}
        }
    
  38.  Open Access 

    NebulaStream: Complex Analytics Beyond the Cloud

    Steffen Zeuch, Eleni Tzirita Zacharatou, Shuhao Zhang, Xenofon Chatziliadis, Ankit Chaudhary, Bonaventura Del Monte, Dimitrios Giouroukis, Philipp M. Grulich, Ariane Ziehn, Volker Markl

    Open Journal of Internet Of Things (OJIOT), 6(1), Pages 66-81, 2020, Downloads: 4141, Citations: 6

    Full-Text: pdf | URN: urn:nbn:de:101:1-2020080219335991237696 | GNL-LP: 1215016891 | Meta-Data: tex xml rdf rss

    Abstract: The arising Internet of Things (IoT) will require significant changes to current stream processing engines (SPEs) to enable large-scale IoT applications. In this paper, we present challenges and opportunities for an IoT data management system to enable complex analytics beyond the cloud. As one of the most important upcoming IoT applications, we focus on the vision of a smart city. The goal of this paper is to bridge the gap between the requirements of upcoming IoT applications and the supported features of an IoT data management system. To this end, we outline how state-of-the-art SPEs have to change to exploit the new capabilities of the IoT and showcase how we tackle IoT challenges in our own system, NebulaStream. This paper lays the foundation for a new type of systems that leverages the IoT to enable large-scale applications over millions of IoT devices in highly dynamic and geo-distributed environments.

    BibTex:

        @Article{OJIOT_2020v6i1n07_Zeuch,
            title     = {NebulaStream: Complex Analytics Beyond the Cloud},
            author    = {Steffen Zeuch and
                         Eleni Tzirita Zacharatou and
                         Shuhao Zhang and
                         Xenofon Chatziliadis and
                         Ankit Chaudhary and
                         Bonaventura Del Monte and
                         Dimitrios Giouroukis and
                         Philipp M. Grulich and
                         Ariane Ziehn and
                         Volker Markl},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2020},
            volume    = {6},
            number    = {1},
            pages     = {66--81},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2020080219335991237696},
            urn       = {urn:nbn:de:101:1-2020080219335991237696},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The arising Internet of Things (IoT) will require significant changes to current stream processing engines (SPEs) to enable large-scale IoT applications. In this paper, we present challenges and opportunities for an IoT data management system to enable complex analytics beyond the cloud. As one of the most important upcoming IoT applications, we focus on the vision of a smart city. The goal of this paper is to bridge the gap between the requirements of upcoming IoT applications and the supported features of an IoT data management system. To this end, we outline how state-of-the-art SPEs have to change to exploit the new capabilities of the IoT and showcase how we tackle IoT challenges in our own system, NebulaStream. This paper lays the foundation for a new type of systems that leverages the IoT to enable large-scale applications over millions of IoT devices in highly dynamic and geo-distributed environments.}
        }
    
  39.  Open Access 

    Pattern-sensitive Time-series Anonymization and its Application to Energy-Consumption Data

    Stephan Kessler, Erik Buchmann, Thorben Burghardt, Klemens Böhm

    Open Journal of Information Systems (OJIS), 1(1), Pages 3-22, 2014, Downloads: 13268, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194696 | GNL-LP: 113236096X | Meta-Data: tex xml rdf rss

    Abstract: Time series anonymization is an important problem. One prominent example of time series are energy consumption records, which might reveal details of the daily routine of a household. Existing privacy approaches for time series, e.g., from the field of trajectory anonymization, assume that every single value of a time series contains sensitive information and reduce the data quality very much. In contrast, we consider time series where it is combinations of tuples that represent personal information. We propose (n; l; k)-anonymity, geared to anonymization of time-series data with minimal information loss, assuming that an adversary may learn a few data points. We propose several heuristics to obtain (n; l; k)-anonymity, and we evaluate our approach both with synthetic and real data. Our experiments confirm that it is sufficient to modify time series only moderately in order to fulfill meaningful privacy requirements.

    BibTex:

        @Article{OJIS-v1i1n02_Kessler,
            title     = {Pattern-sensitive Time-series Anonymization and its Application to Energy-Consumption Data},
            author    = {Stephan Kessler and
                         Erik Buchmann and
                         Thorben Burghardt and
                         Klemens B{\"o}hm},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2014},
            volume    = {1},
            number    = {1},
            pages     = {3--22},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194696},
            urn       = {urn:nbn:de:101:1-201705194696},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Time series anonymization is an important problem. One prominent example of time series are energy consumption records, which might reveal details of the daily routine of a household. Existing privacy approaches for time series, e.g., from the field of trajectory anonymization, assume that every single value of a time series contains sensitive information and reduce the data quality very much. In contrast, we consider time series where it is combinations of tuples that represent personal information. We propose (n; l; k)-anonymity, geared to anonymization of time-series data with minimal information loss, assuming that an adversary may learn a few data points. We propose several heuristics to obtain (n; l; k)-anonymity, and we evaluate our approach both with synthetic and real data. Our experiments confirm that it is sufficient to modify time series only moderately in order to fulfill meaningful privacy requirements.}
        }
    
  40.  Open Access 

    Ontology-Based Data Access to Big Data

    Simon Schiff, Ralf Möller, Özgür L. Özcep

    Open Journal of Databases (OJDB), 6(1), Pages 21-32, 2019, Downloads: 11202, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318334350985847 | GNL-LP: 1174122730 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Recent approaches to ontology-based data access (OBDA) have extended the focus from relational database systems to other types of backends such as cluster frameworks in order to cope with the four Vs associated with big data: volume, veracity, variety and velocity (stream processing). The abstraction that an ontology provides is a benefit from the end-user point of view, but it represents a challenge for developers because high-level queries must be transformed into queries executable on the backend level. In this paper, we discuss and evaluate an OBDA system that uses STARQL (Streaming and Temporal ontology Access with a Reasoning-based Query Language) as a high-level query language to access data stored in a SPARK cluster framework. The development of the STARQL-SPARK engine shows that there is a need to provide a homogeneous interface to access both static and temporal as well as streaming data because cluster frameworks usually lack such an interface. The experimental evaluation shows that building a scalable OBDA system that runs with SPARK is more than plug-and-play as one needs to know quite well the data formats and the data organisation in the cluster framework.

    BibTex:

        @Article{OJDB_2019v6i1n03_Schiff,
            title     = {Ontology-Based Data Access to Big Data},
            author    = {Simon Schiff and
                         Ralf M{\"o}ller and
                         {\"O}zg{\"u}r L. {\"O}zcep},
            journal   = {Open Journal of Databases (OJDB)},
            issn      = {2199-3459},
            year      = {2019},
            volume    = {6},
            number    = {1},
            pages     = {21--32},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318334350985847},
            urn       = {urn:nbn:de:101:1-2018122318334350985847},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Recent approaches to ontology-based data access (OBDA) have extended the focus from relational database systems to other types of backends such as cluster frameworks in order to cope with the four Vs associated with big data: volume, veracity, variety and velocity (stream processing). The abstraction that an ontology provides is a benefit from the end-user point of view, but it represents a challenge for developers because high-level queries must be transformed into queries executable on the backend level. In this paper, we discuss and evaluate an OBDA system that uses STARQL (Streaming and Temporal ontology Access with a Reasoning-based Query Language) as a high-level query language to access data stored in a SPARK cluster framework. The development of the STARQL-SPARK engine shows that there is a need to provide a homogeneous interface to access both static and temporal as well as streaming data because cluster frameworks usually lack such an interface. The experimental evaluation shows that building a scalable OBDA system that runs with SPARK is more than plug-and-play as one needs to know quite well the data formats and the data organisation in the cluster framework.}
        }
    
  41.  Open Access 

    An Analytical Model of Multi-Core Multi-Cluster Architecture (MCMCA)

    Norhazlina Hamid, Robert John Walters, Gary B. Wills

    Open Journal of Cloud Computing (OJCC), 2(1), Pages 4-15, 2015, Downloads: 10939, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194487 | GNL-LP: 1132360692 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Multi-core clusters have emerged as an important contribution in computing technology for provisioning additional processing power in high performance computing and communications. Multi-core architectures are proposed for their capability to provide higher performance without increasing heat and power usage, which is the main concern in a single-core processor. This paper introduces analytical models of a new architecture for large-scale multi-core clusters to improve the communication performance within the interconnection network. The new architecture will be based on a multi-cluster architecture containing clusters of multi-core processors.

    BibTex:

        @Article{OJCC_2015v2i1n02_Hamid,
            title     = {An Analytical Model of Multi-Core Multi-Cluster Architecture (MCMCA)},
            author    = {Norhazlina Hamid and
                         Robert John Walters and
                         Gary B. Wills},
            journal   = {Open Journal of Cloud Computing (OJCC)},
            issn      = {2199-1987},
            year      = {2015},
            volume    = {2},
            number    = {1},
            pages     = {4--15},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194487},
            urn       = {urn:nbn:de:101:1-201705194487},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Multi-core clusters have emerged as an important contribution in computing technology for provisioning additional processing power in high performance computing and communications. Multi-core architectures are proposed for their capability to provide higher performance without increasing heat and power usage, which is the main concern in a single-core processor. This paper introduces analytical models of a new architecture for large-scale multi-core clusters to improve the communication performance within the interconnection network. The new architecture will be based on a multi-cluster architecture containing clusters of multi-core processors.}
        }
    
  42.  Open Access 

    Dynamic Allocation of Smart City Applications

    Igor Miladinovic, Sigrid Schefer-Wenzl

    Open Journal of Internet Of Things (OJIOT), 4(1), Pages 144-149, 2018, Downloads: 3416, Citations: 5

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519320192483088 | GNL-LP: 1163928623 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Cities around the world are evaluating the potential of Internet of Things (IoT) to automate and optimize public services. Cities that implement this approach are commonly referred to as smart cities. A smart city IoT architecture needs to be layered and scalable in order to fulfill not only today's but also future needs of smart cities. Network Function Virtualization (NFV) provides the scale and flexibility necessary for smart city services by enabling the automated control, management and orchestration of network resources. In this paper we consider a scalable, layered, NFV based smart city architecture and discuss the optimal location of applications regarding cloud computing and mobile edge computing (MEC). Introducing a novel concept of dynamic application allocation we show how to fully benefit from MEC and present relevant decision criteria.

    BibTex:

        @Article{OJIOT_2018v4i1n12_Miladinovic,
            title     = {Dynamic Allocation of Smart City Applications},
            author    = {Igor Miladinovic and
                         Sigrid Schefer-Wenzl},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {144--149},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519320192483088},
            urn       = {urn:nbn:de:101:1-2018080519320192483088},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Cities around the world are evaluating the potential of Internet of Things (IoT) to automate and optimize public services. Cities that implement this approach are commonly referred to as smart cities. A smart city IoT architecture needs to be layered and scalable in order to fulfill not only today's but also future needs of smart cities. Network Function Virtualization (NFV) provides the scale and flexibility necessary for smart city services by enabling the automated control, management and orchestration of network resources. In this paper we consider a scalable, layered, NFV based smart city architecture and discuss the optimal location of applications regarding cloud computing and mobile edge computing (MEC). Introducing a novel concept of dynamic application allocation we show how to fully benefit from MEC and present relevant decision criteria.}
        }
    
  43.  Open Access 

    Operation of Modular Smart Grid Applications Interacting through a Distributed Middleware

    Stephan Cejka, Albin Frischenschlager, Mario Faschang, Mark Stefan, Konrad Diwold

    Open Journal of Big Data (OJBD), 4(1), Pages 14-29, 2018, Downloads: 5497, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201801212419 | GNL-LP: 1151046426 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: IoT-functionality can broaden the scope of distribution system automation in terms of functionality and communication. However, it also poses risks regarding resource consumption and security. This article presents a field approved IoT-enabled smart grid middleware, which allows for flexible deployment and management of applications within smart grid operation. In the first part of the work, the resource consumption of the middleware is analyzed and current memory bottlenecks are identified. The bottlenecks can be resolved by introducing a new entity that allows to dynamically load multiple applications within one JVM. The performance was experimentally tested and the results suggest that its application can significantly reduce the applications' memory footprint on the physical device. The second part of the study identifies and discusses potential security threats, with a focus on attacks stemming from malicious software applications within the framework. In order to prevent such attacks a proxy based prevention mechanism is developed and demonstrated.

    BibTex:

        @Article{OJBD_2018v4i1n02_Cejka,
            title     = {Operation of Modular Smart Grid Applications Interacting through a Distributed Middleware},
            author    = {Stephan Cejka and
                         Albin Frischenschlager and
                         Mario Faschang and
                         Mark Stefan and
                         Konrad Diwold},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {14--29},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201801212419},
            urn       = {urn:nbn:de:101:1-201801212419},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {IoT-functionality can broaden the scope of distribution system automation in terms of functionality and communication. However, it also poses risks regarding resource consumption and security. This article presents a field approved IoT-enabled smart grid middleware, which allows for flexible deployment and management of applications within smart grid operation. In the first part of the work, the resource consumption of the middleware is analyzed and current memory bottlenecks are identified. The bottlenecks can be resolved by introducing a new entity that allows to dynamically load multiple applications within one JVM. The performance was experimentally tested and the results suggest that its application can significantly reduce the applications' memory footprint on the physical device. The second part of the study identifies and discusses potential security threats, with a focus on attacks stemming from malicious software applications within the framework. In order to prevent such attacks a proxy based prevention mechanism is developed and demonstrated.}
        }
    
  44.  Open Access 

    Criteria of Successful IT Projects from Management's Perspective

    Mark Harwardt

    Open Journal of Information Systems (OJIS), 3(1), Pages 29-54, 2016, Downloads: 18768, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194797 | GNL-LP: 1132361133 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategic criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even.

    BibTex:

        @Article{OJIS_2016v3i1n02_Harwardt,
            title     = {Criteria of Successful IT Projects from Management's Perspective},
            author    = {Mark Harwardt},
            journal   = {Open Journal of Information Systems (OJIS)},
            issn      = {2198-9281},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {29--54},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194797},
            urn       = {urn:nbn:de:101:1-201705194797},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategic criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even.}
        }
    
  45.  Open Access 

    Middleware Support for Generic Actuation in the Internet of Mobile Things

    Sheriton Valim, Matheus Zeitune, Bruno Olivieri, Markus Endler

    Open Journal of Internet Of Things (OJIOT), 4(1), Pages 24-34, 2018, Downloads: 3120, Citations: 5

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519322337232186 | GNL-LP: 1163928666 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: As the Internet of Things is expanding towards applications in almost any sector of our economy and daily life, so is the demand of employing and integrating devices with actuation capabilities, such as smart bulbs, HVAC, smart locks, industrial machines, robots or drones. Many middleware platforms have been developed in order to support the development of distributed IoT applications and facilitate the sensors-to-cloud communication and edge processing capabilities, but surprisingly very little has been done to provide middleware-level support and generic mechanisms for discovering the devices and their interfaces, and executing the actuation commands, i.e. transferring them to the device. In this paper, we present a generic support for actuation as an extension of ContextNet, our mobile-cloud middleware for IoMT. We describe the design of the distributed actuation support and present a proof of working implementation that enables remote control of a Sphero mobile BB-8 toy.

    BibTex:

        @Article{OJIOT_2018v4i1n03_Valim,
            title     = {Middleware Support for Generic Actuation in the Internet of Mobile Things},
            author    = {Sheriton Valim and
                         Matheus Zeitune and
                         Bruno Olivieri and
                         Markus Endler},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2018},
            volume    = {4},
            number    = {1},
            pages     = {24--34},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519322337232186},
            urn       = {urn:nbn:de:101:1-2018080519322337232186},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {As the Internet of Things is expanding towards applications in almost any sector of our economy and daily life, so is the demand of employing and integrating devices with actuation capabilities, such as smart bulbs, HVAC, smart locks, industrial machines, robots or drones. Many middleware platforms have been developed in order to support the development of distributed IoT applications and facilitate the sensors-to-cloud communication and edge processing capabilities, but surprisingly very little has been done to provide middleware-level support and generic mechanisms for discovering the devices and their interfaces, and executing the actuation commands, i.e. transferring them to the device. In this paper, we present a generic support for actuation as an extension of ContextNet, our mobile-cloud middleware for IoMT. We describe the design of the distributed actuation support and present a proof of working implementation that enables remote control of a Sphero mobile BB-8 toy.}
        }
    
  46.  Open Access 

    A Semantic Question Answering Framework for Large Data Sets

    Marta Tatu, Mithun Balakrishna, Steven Werner, Tatiana Erekhinskaya, Dan Moldovan

    Open Journal of Semantic Web (OJSW), 3(1), Pages 16-31, 2016, Downloads: 13116, Citations: 5

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194921 | GNL-LP: 1132361338 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.

    BibTex:

        @Article{OJSW_2016v3i1n02_Tatu,
            title     = {A Semantic Question Answering Framework for Large Data Sets},
            author    = {Marta Tatu and
                         Mithun Balakrishna and
                         Steven Werner and
                         Tatiana Erekhinskaya and
                         Dan Moldovan},
            journal   = {Open Journal of Semantic Web (OJSW)},
            issn      = {2199-336X},
            year      = {2016},
            volume    = {3},
            number    = {1},
            pages     = {16--31},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194921},
            urn       = {urn:nbn:de:101:1-201705194921},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.}
        }
    
  47.  Open Access 

    Multi-Layer Cross Domain Reasoning over Distributed Autonomous IoT Applications

    Muhammad Intizar Ali, Pankesh Patel, Soumya Kanti Datta, Amelie Gyrard

    Open Journal of Internet Of Things (OJIOT), 3(1), Pages 75-90, 2017, Downloads: 7404, Citations: 5

    Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613451 | GNL-LP: 1137820195 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Due to the rapid advancements in the sensor technologies and IoT, we are witnessing a rapid growth in the use of sensors and relevant IoT applications. A very large number of sensors and IoT devices are in place in our surroundings which keep sensing dynamic contextual information. A true potential of the wide-spread of IoT devices can only be realized by designing and deploying a large number of smart IoT applications which can provide insights on the data collected from IoT devices and support decision making by converting raw sensor data into actionable knowledge. However, the process of getting value from sensor data streams and converting these raw sensor values into actionable knowledge requires extensive efforts from IoT application developers and domain experts. In this paper, our main aim is to propose a multi-layer cross domain reasoning framework, which can support application developers, end-users and domain experts to automatically understand relevant events and extract actionable knowledge with minimal efforts. Our framework reduces the efforts required for IoT applications development (i) by supporting automated application code generation and access mechanisms using IoTSuite, (ii) by leveraging the Machine-to-Machine Measurement (M3) framework to exploit semantic technologies and domain knowledge, and (iii) by using automated sensor discovery and complex event processing of relevant events (ACEIS Middleware) at the multiple data processing layers and different stages of the IoT application development life cycle. In essence, our framework supports the end-users and IoT application developers to design innovative IoT applications by reducing the programming efforts, by identifying relevant events and by suggesting potential actions based on complex event processing and reasoning for cross-domain IoT applications.

    BibTex:

        @Article{OJIOT_2017v3i1n07_Ali,
            title     = {Multi-Layer Cross Domain Reasoning over Distributed Autonomous IoT Applications},
            author    = {Muhammad Intizar Ali and
                         Pankesh Patel and
                         Soumya Kanti Datta and
                         Amelie Gyrard},
            journal   = {Open Journal of Internet Of Things (OJIOT)},
            issn      = {2364-7108},
            year      = {2017},
            volume    = {3},
            number    = {1},
            pages     = {75--90},
            note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613451},
            urn       = {urn:nbn:de:101:1-2017080613451},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Due to the rapid advancements in the sensor technologies and IoT, we are witnessing a rapid growth in the use of sensors and relevant IoT applications. A very large number of sensors and IoT devices are in place in our surroundings which keep sensing dynamic contextual information. A true potential of the wide-spread of IoT devices can only be realized by designing and deploying a large number of smart IoT applications which can provide insights on the data collected from IoT devices and support decision making by converting raw sensor data into actionable knowledge. However, the process of getting value from sensor data streams and converting these raw sensor values into actionable knowledge requires extensive efforts from IoT application developers and domain experts. In this paper, our main aim is to propose a multi-layer cross domain reasoning framework, which can support application developers, end-users and domain experts to automatically understand relevant events and extract actionable knowledge with minimal efforts. Our framework reduces the efforts required for IoT applications development (i) by supporting automated application code generation and access mechanisms using IoTSuite, (ii) by leveraging the Machine-to-Machine Measurement (M3) framework to exploit semantic technologies and domain knowledge, and (iii) by using automated sensor discovery and complex event processing of relevant events (ACEIS Middleware) at the multiple data processing layers and different stages of the IoT application development life cycle. In essence, our framework supports the end-users and IoT application developers to design innovative IoT applications by reducing the programming efforts, by identifying relevant events and by suggesting potential actions based on complex event processing and reasoning for cross-domain IoT applications.}
        }
    
  48.  Open Access 

    The mf-index: A Citation-Based Multiple Factor Index to Evaluate and Compare the Output of Scientists

    Eric Oberesch, Sven Groppe

    Open Journal of Web Technologies (OJWT), 4(1), Pages 1-32, 2017, Downloads: 4472, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-2017070914565 | GNL-LP: 1136555501 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

    Abstract: Comparing the output of scientists as objectively as possible is an important factor for, e.g., the approval of research funds or the filling of open positions at universities. Numeric indices, which express the scientific output in the form of a concrete value, may not completely supersede an overall view of a researcher, but provide helpful indications for the assessment. This work introduces the most important citation-based indices, analyzes their advantages and disadvantages and provides an overview of the aspects considered by them. On this basis, we identify the criteria that an advanced index should fulfill, and develop a new index, the mf-index. The objective of the mf-index is to combine the benefits of the existing indices, while avoiding as far as possible their drawbacks, and to consider additional aspects. Finally, an evaluation based on data of real publications and citations compares the mf-index with existing indices and verifies that its advantages in theory can also be determined in practice.

    BibTex:

        @Article{OJWT_2017v4i1n01_Oberesch,
            title     = {The mf-index: A Citation-Based Multiple Factor Index to Evaluate and Compare the Output of Scientists},
            author    = {Eric Oberesch and
                         Sven Groppe},
            journal   = {Open Journal of Web Technologies (OJWT)},
            issn      = {2199-188X},
            year      = {2017},
            volume    = {4},
            number    = {1},
            pages     = {1--32},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017070914565},
            urn       = {urn:nbn:de:101:1-2017070914565},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Comparing the output of scientists as objectively as possible is an important factor for, e.g., the approval of research funds or the filling of open positions at universities. Numeric indices, which express the scientific output in the form of a concrete value, may not completely supersede an overall view of a researcher, but provide helpful indications for the assessment. This work introduces the most important citation-based indices, analyzes their advantages and disadvantages and provides an overview of the aspects considered by them. On this basis, we identify the criteria that an advanced index should fulfill, and develop a new index, the mf-index. The objective of the mf-index is to combine the benefits of the existing indices, while avoiding as far as possible their drawbacks, and to consider additional aspects. Finally, an evaluation based on data of real publications and citations compares the mf-index with existing indices and verifies that its advantages in theory can also be determined in practice.}
        }
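
    As context for the citation-based indices the paper surveys, the classic h-index (the largest h such that h of an author's papers have at least h citations each) can be sketched as follows; this is a generic illustration of one of the surveyed indices, not the mf-index itself, whose definition is given in the full paper:

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the paper at this rank still supports an h-index of `rank`
            else:
                break
        return h
    ```

    For example, an author whose papers have citation counts [10, 8, 5, 4, 3] has an h-index of 4: four papers have at least four citations each, but not five papers with five.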
    
  49.  Open Access 

    Statistical Machine Learning in Brain State Classification using EEG Data

    Yuezhe Li, Yuchou Chang, Hong Lin

    Open Journal of Big Data (OJBD), 1(2), Pages 19-33, 2015, Downloads: 10621, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194354 | GNL-LP: 113236051X | Meta-Data: tex xml rdf rss

    Abstract: In this article, we discuss how to use a variety of machine learning methods, e.g., tree bagging, random forest, boosting, support vector machines, and Gaussian mixture models, for building classifiers for electroencephalogram (EEG) data, which was collected during different brain states from different subjects. We also discuss how the size of the training data influences the misclassification rate, and how the number of subjects contributing to the training data affects it. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels, respectively. The classification results of each data set show that the more channels we use, the fewer misclassifications we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish the idle from the active state.

    BibTex:

        @Article{OJBD_2015v1i2n03_YuehzeLi,
            title     = {Statistical Machine Learning in Brain State Classification using EEG Data},
            author    = {Yuezhe Li and
                         Yuchou Chang and
                         Hong Lin},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {19--33},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194354},
            urn       = {urn:nbn:de:101:1-201705194354},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {In this article, we discuss how to use a variety of machine learning methods, e.g., tree bagging, random forest, boosting, support vector machines, and Gaussian mixture models, for building classifiers for electroencephalogram (EEG) data, which was collected during different brain states from different subjects. We also discuss how the size of the training data influences the misclassification rate, and how the number of subjects contributing to the training data affects it. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels, respectively. The classification results of each data set show that the more channels we use, the fewer misclassifications we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish the idle from the active state.}
        }
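
    The sample entropy feature highlighted in this abstract can be sketched as follows; this is a simplified, self-contained implementation (embedding dimension m, tolerance r given as a fraction of the signal's standard deviation, Chebyshev distance between templates), not the authors' code:

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Sample entropy SampEn(m, r) of a 1-D signal.

        A low value means the signal is regular (many length-(m+1) template
        matches among the length-m matches); a high value means irregularity.
        """
        x = np.asarray(x, dtype=float)
        n = len(x)
        tol = r * np.std(x)

        def count_matches(mm):
            # All overlapping templates of length mm.
            templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
            count = 0
            for i in range(len(templates)):
                # Chebyshev distance to all later templates (no self-matches).
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(d <= tol)
            return count

        b = count_matches(m)      # matches of length m
        a = count_matches(m + 1)  # matches of length m + 1
        return -np.log(a / b) if a > 0 and b > 0 else float("inf")
    ```

    A regular signal such as a sine wave yields a much lower sample entropy than white noise of the same length, which is what makes the feature useful for separating brain states.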
    
  50.  Open Access 

    Data Transfers in Hadoop: A Comparative Study

    Ujjal Marjit, Kumar Sharma, Puspendu Mandal

    Open Journal of Big Data (OJBD), 1(2), Pages 34-46, 2015, Downloads: 13344, Citations: 4

    Full-Text: pdf | URN: urn:nbn:de:101:1-201705194373 | GNL-LP: 1132360536 | Meta-Data: tex xml rdf rss

    Abstract: Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. The framework is used for storing data on large clusters of commodity hardware. Data input and output to and from Hadoop is an indispensable action for any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data are highlighted. Moreover, a state-of-the-art comparative study among the various tools is made. This study clarifies when to use one tool over another, with emphasis on data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions regarding Big Data transfer when dealing with cloud-based services.

    BibTex:

        @Article{OJBD_2015v1i2n04_UjjalMarjit,
            title     = {Data Transfers in Hadoop: A Comparative Study},
            author    = {Ujjal Marjit and
                         Kumar Sharma and
                         Puspendu Mandal},
            journal   = {Open Journal of Big Data (OJBD)},
            issn      = {2365-029X},
            year      = {2015},
            volume    = {1},
            number    = {2},
            pages     = {34--46},
            url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194373},
            urn       = {urn:nbn:de:101:1-201705194373},
            publisher = {RonPub},
            bibsource = {RonPub},
            abstract = {Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. The framework is used for storing data on large clusters of commodity hardware. Data input and output to and from Hadoop is an indispensable action for any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data are highlighted. Moreover, a state-of-the-art comparative study among the various tools is made. This study clarifies when to use one tool over another, with emphasis on data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions regarding Big Data transfer when dealing with cloud-based services.}
        }
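
    To make the tool landscape concrete, the sketch below assembles command lines for two widely used Hadoop transfer tools: Apache Sqoop, which bulk-imports relational tables into HDFS, and DistCp, which copies data between clusters as a MapReduce job. The JDBC URL, table name, and paths are illustrative placeholders, and actually running the commands requires a configured Hadoop cluster:

    ```python
    import subprocess

    # Hypothetical connection string; replace with your database's JDBC URL.
    MYSQL_URL = "jdbc:mysql://dbhost/sales"

    def sqoop_import(table, target_dir):
        """Command line to bulk-import a relational table into HDFS via Sqoop."""
        return ["sqoop", "import",
                "--connect", MYSQL_URL,
                "--table", table,
                "--target-dir", target_dir]

    def distcp(src, dst):
        """Command line for a cluster-to-cluster copy with Hadoop DistCp."""
        return ["hadoop", "distcp", src, dst]

    # On a cluster, the commands would be executed like this:
    # subprocess.run(sqoop_import("orders", "/user/etl/orders"), check=True)
    # subprocess.run(distcp("hdfs://nn1/data", "hdfs://nn2/backup"), check=True)
    ```

    Separating command construction from execution keeps the transfer logic testable without a live cluster, which is also how such tools are typically wrapped in ETL scripts.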