A Smarter Commerce Business Model based on Personal Context Information
Norha M. Villegas and Hausi A. Müller (University of Victoria, Canada)
nvillega@cs.uvic.ca

User interactions in on-line shopping are valuable sources of context information for enhancing shopping experiences. SmarterContext, a novel context manager that discovers meaningful context from user web interactions, has great potential to enable new business models that benefit buyers, retailers, and cloud infrastructure providers in the smarter commerce realm.

User-Controlled Privacy and Security for Personal Context Spheres in the Smart Internet
Juan C. Muñoz and Gabriel Tamura (Icesi University Cali, Valle del Cauca, Colombia);
Norha M. Villegas and Hausi A. Müller (University of Victoria, Canada)

A critical aspect of the Smart Internet’s success is the adoption of mechanisms that protect users’ sensitive information in web interactions. Surprise is our solution for giving users control over the privacy and security of their sensitive information as it is accessed, transported, and stored in personal context repositories through the SmarterContext infrastructure.

Smart Interactive Streaming Applications
Przemek Lach, Ron Desmarais, Pratik Jain, and Hausi A. Müller
(University of Victoria, Canada)
przemek@przemeklach.com

Today’s consumers expect a high degree of customization of streamed media, and interactive streaming is one innovative technology for delivering it. Such applications pose formidable engineering challenges. We showcase innovative smart applications that manage video streams using situational context. Our implementation is based on OpenFlow, HTML5, QR codes, video analysis, and mobile platforms.
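
To make the idea concrete, the following minimal Python sketch selects a stream variant from situational context; the rules, URLs, and context fields are illustrative assumptions, not the authors' OpenFlow/HTML5 implementation.

    # Hypothetical sketch: choose a stream variant from situational context.
    # Context fields, decision rules, and URLs are invented for illustration only.
    def select_stream(context: dict) -> str:
        if context.get("device") == "mobile" and context.get("bandwidth_kbps", 0) < 1500:
            return "https://example.org/stream/low.m3u8"
        if context.get("scanned_qr") == "lobby-display":
            return "https://example.org/stream/lobby-camera.m3u8"
        return "https://example.org/stream/hd.m3u8"

    print(select_stream({"device": "mobile", "bandwidth_kbps": 800}))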

Billboard — A Contextual Social Messaging System
Przemek Lach, Ron Desmarais, Pratik Jain, Hausi A. Müller
(University of Victoria, Canada)
ron.desmarais@gmail.com

Billboard is a public social messaging system. Users can view and publish information, based on their location, to a publicly viewable billboard. The billboards are contextual in that they display information about their local environment. This environmental information includes the users who are near, or have an interest in, the billboard, along with local details such as current events relevant to the geographical area and localized advertisements.

Gamifying Collaborative Decision Making
Mohammad Ali Moradian, Kelly A. Lyons, and Maaz Nasir
(University of Toronto, Canada);
Rock Leung
(SAP)
moradian.ali@gmail.com

Engaging people to participate fully in a collaborative decision-making activity in a software application can be challenging. People are very busy, juggling competing demands for their time. Gamification has been used in a variety of environments to incent and increase participation (Thom, 2012). In this project, we investigate how gamification can be used to incent and motivate people to participate in and contribute to collaborative decision-making activities.

Feedback Loops for Model-Based Adaptive DoS Attack Mitigation
Cornel Barna, Mark Shtern, Mike Smit, Vassilios Tzerpos, and Marin Litoiu
(York University, Canada)
msmit@cs.ualberta.ca

Denial of Service (DoS) attacks overwhelm online services, preventing legitimate users from accessing a service, often with impact on revenue or consumer trust. Approaches exist to filter network-level attacks, but application-level attacks are harder to detect at the firewall. Filtering at this level can be computationally expensive and difficult to scale, while still producing false positives that block legitimate users. We present a model-based adaptive architecture and algorithm for detecting DoS attacks at the web application level and mitigating them. Using a performance model to predict the impact of arriving requests, a decision engine adaptively generates rules for filtering traffic and sending suspicious traffic for further review, where the end user is given the opportunity to demonstrate they are a legitimate user. If no legitimate user responds to the challenge, the request is dropped. Experiments performed on a scalable implementation demonstrate effective mitigation of attacks launched using a real-world DoS attack tool.
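
The sketch below illustrates, in simplified Python, the kind of decision loop the abstract describes: a performance model supplies a predicted cost per request, and a decision engine allows, challenges, or drops traffic. The class names, thresholds, and single-number cost estimate are our own assumptions, not the authors' implementation.

    from dataclasses import dataclass

    @dataclass
    class Request:
        client_id: str
        predicted_cost: float          # estimated service demand from a performance model
        challenged: bool = False       # a challenge was already issued for this client
        answered_challenge: bool = False

    class DecisionEngine:
        def __init__(self, capacity_per_tick: float, suspicion_threshold: float):
            self.capacity = capacity_per_tick
            self.threshold = suspicion_threshold
            self.load = 0.0

        def handle(self, req: Request) -> str:
            """Return 'allow', 'challenge', or 'drop' for an arriving request."""
            overloaded = self.load + req.predicted_cost > self.capacity
            suspicious = req.predicted_cost > self.threshold
            if overloaded and suspicious and not req.answered_challenge:
                # Suspicious traffic is diverted for further review; a request whose
                # challenge was issued but never answered is dropped.
                return "drop" if req.challenged else "challenge"
            self.load += req.predicted_cost
            return "allow"

    engine = DecisionEngine(capacity_per_tick=10.0, suspicion_threshold=8.0)
    print(engine.handle(Request("10.0.0.7", predicted_cost=12.0)))   # -> 'challenge'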

STRATOS: A Cloud Broker
Przemyslaw Pawluk, Bradley Simmons, Mike Smit, and Marin Litoiu
(York University, Canada)
msmit@cs.ualberta.ca

This poster introduces a cloud broker service (STRATOS) that facilitates the deployment and runtime management of cloud application topologies using cloud elements/services sourced on the fly from multiple providers, based on requirements specified as higher-level objectives. Its implementation and use are evaluated in a set of experiments.
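
As a toy illustration of brokering against higher-level objectives, the Python sketch below scores candidate offerings by weighted objectives and picks the best fit; the provider data, metrics, and weights are invented, and a real broker would normalize metrics before combining them.

    # Hypothetical offerings from two providers for one topology element.
    offerings = [
        {"provider": "A", "type": "vm.small", "cost": 0.08, "latency_ms": 40},
        {"provider": "B", "type": "vm.small", "cost": 0.06, "latency_ms": 95},
    ]

    objectives = {"cost": 0.7, "latency_ms": 0.3}   # relative importance (assumed)

    def score(offering, objectives):
        # Lower is better for both metrics in this toy example.
        return sum(weight * offering[metric] for metric, weight in objectives.items())

    best = min(offerings, key=lambda o: score(o, objectives))
    print(best["provider"])   # the broker would acquire this offering at deployment time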

UML Modeling and Analysis of Power Consumption for Wireless Sensor Networks (WSNs)
John K. Jacoub, Ramiro Liscano, Jeremy Bradbury, and Jared Fisher
(University of Ontario Institute of Technology, Canada)
John.Khalil@uoit.ca

Wireless sensor network (WSN) systems are deployed to monitor specific phenomena. Designing WSNs is error-prone, and debugging them is very challenging due to the complex interactions of software components within a sensor node. Moreover, WSN systems have limited power sources, which makes minimizing power consumption during design a necessity. This poster presents a set of software patterns that can be used as a basis for the software design of a WSN. UML is used to capture the hardware and software components of a WSN system, and this in turn is used for power consumption analysis of the WSN during the early stages of the development cycle. The WSN modelling patterns are justified by applying them to two types of WSN systems: a typical multi-hop field deployment, and a non-typical WSN that integrates RFID with sensor nodes to support authenticated point-to-point communication with a sensor node.
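
A minimal sketch of the kind of early-stage power estimate such a model can drive is shown below; the component currents, duty cycles, and supply voltage are assumed values, not figures from the poster.

    # Assumed per-component electrical figures and duty cycles, the kind of data a
    # UML model of the hardware/software components could supply at design time.
    components = {
        # name: (active_current_mA, sleep_current_mA, duty_cycle)
        "mcu":    (8.0,  0.010, 0.05),
        "radio":  (19.5, 0.001, 0.02),
        "sensor": (1.2,  0.000, 0.01),
    }

    VOLTAGE = 3.0  # volts (assumed)

    def average_power_mw(components, voltage):
        total_ma = 0.0
        for active, sleep, duty in components.values():
            total_ma += duty * active + (1 - duty) * sleep
        return total_ma * voltage

    print(f"{average_power_mw(components, VOLTAGE):.2f} mW average")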

Framework for Distributed Policy-Based Management (DPBM) in Wireless Sensor Networks to Support Autonomic Behavior
Nidal Qwasmi and Ramiro Liscano
(University of Ontario Institute of Technology, Canada)
Nidal.Qwasmi@uoit.ca

Wireless sensor networks (WSNs) usually operate in heterogeneous environments, which makes a sensor node very difficult to detect, access, and manage. There is therefore a need for autonomic behavior to overcome these environmental challenges. A common way of implementing autonomic behavior in distributed systems is through the use of policies; however, conventional policy frameworks are generally too heavyweight to execute on a sensor node. The goal of our research is thus to create a framework for distributed policy-based management in WSNs. Our proposed framework is expected to extend WSN management functionality compared with conventional policy management systems such as Finger/Finger2; it conceals the complexity of administering policy operations from users by streamlining the processes; and it overcomes a flaw in existing frameworks regarding policy execution order in cases where multiple policies are required to ensure consistency and persistence.
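
The Python sketch below illustrates the general idea of a lightweight event-condition-action policy engine with a deterministic execution order; the policy set, priority rule, and node state are illustrative assumptions rather than the proposed framework.

    from typing import Callable

    class Policy:
        def __init__(self, name: str, priority: int,
                     condition: Callable[[dict], bool],
                     action: Callable[[dict], None]):
            self.name, self.priority = name, priority
            self.condition, self.action = condition, action

    def evaluate(policies, node_state):
        # Execute matching policies in a deterministic priority order so that
        # dependent policies fire consistently on every node.
        for policy in sorted(policies, key=lambda p: p.priority):
            if policy.condition(node_state):
                policy.action(node_state)

    policies = [
        Policy("report-low-battery", 1,
               lambda s: s["battery"] < 0.2,
               lambda s: print("node", s["id"], "battery low")),
        Policy("throttle-sampling", 2,
               lambda s: s["battery"] < 0.2,
               lambda s: s.update(sample_period_s=60)),
    ]

    evaluate(policies, {"id": 7, "battery": 0.15, "sample_period_s": 10})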

Advances in Mobile Health: Standardization, Security, and Semantic Analysis
Kamran Sartipi
(University of Ontario Institute of Technology, Canada)
Duane Bender
(Mohawk College, Canada)
Kamran.Sartipi@uoit.ca
duane.bender@mohawkcollege.ca

We present an overview of requirements for the new generation of mobile health technology, a fast-growing application domain with major impact on health and medical services for rural regions and homecare patients. This technology takes advantage of fast and sophisticated smart mobile devices, a variety of cheap wireless body sensors, more secure communication channels, easy access to large medical records, and robust decision-making algorithms. Recently, the roadblock in integrating medical information systems, caused by the complexity of the HL7 v3 standards, has begun to be removed through a new RESTful approach. We briefly present the current research problems in this field, including real-time security of information communication, agent-based and context-aware semantic analysis, and the communication of extra-large diagnostic images. Our current research agenda will provide an experimental cloud infrastructure for mobile and service communication, located at the Mohawk College MEDIC lab, in close collaboration with the Smart Software Systems Lab at UOIT.

Managing Long-Running DBMS Queries
Mastoureh Hassannezhad and Patrick Martin
(Queen's University, Canada)
mh@cs.queensu.ca

Long-running, complex queries, such as those found in Business Intelligence (BI) workloads, can have a negative impact on the performance of a database system. Query unpredictability can result from data skew, poorly written SQL, badly optimized plans, and even resource contention. Recognizing errant long-running queries and taking appropriate action can minimize their effects on the workload. We investigate the use of a Query Progress Indicator (PI) to help determine how best to manage currently executing queries.
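
A minimal sketch of how a progress-indicator estimate could drive such decisions appears below; the thresholds and the "throttle-or-cancel" action are assumptions for illustration only.

    # Hypothetical decision rule driven by a progress-indicator estimate.
    def manage_query(elapsed_s: float, progress: float, remaining_budget_s: float) -> str:
        """progress is the PI's estimate of work completed, in [0, 1]."""
        if progress <= 0.0:
            return "monitor"                       # no estimate available yet
        estimated_total = elapsed_s / progress
        estimated_remaining = estimated_total - elapsed_s
        if estimated_remaining > remaining_budget_s:
            # Unlikely to finish in time; candidate for throttling or cancellation
            # depending on its business importance.
            return "throttle-or-cancel"
        return "continue"

    print(manage_query(elapsed_s=300, progress=0.1, remaining_budget_s=600))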

A Framework for Autonomic Workload Management in DBMSs
Mingyi Zhang, Patrick Martin, and Wendy Powley
(Queen’s University, Canada);
Paul Bird
(IBM Canada)
myzhang@cs.queensu.ca

In today’s data server environments, multiple types of workloads may be mixed and present in a system simultaneously. Workloads may have different levels of business importance and unique performance objectives. An autonomic workload management system dynamically controls the flow of the workloads to help the database management system (DBMS) achieve those performance objectives. In this poster, we present a framework and a prototype implementation for autonomic workload management in DBMSs. The framework provides the ability to meet the performance objectives of workloads with multiple levels of business importance, and to protect database systems against performance failure in a mixed-workload data server environment. The prototype is implemented on top of IBM® DB2® Workload Manager. A set of experiments conducted with the prototype on DB2 databases illustrates the effectiveness of our approaches.
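
The sketch below shows, in toy form, one control rule in this spirit: when a high-importance class misses its response-time objective, admission capacity is shifted away from a less important class. The class names, objectives, and step size are invented and do not reflect the DB2 Workload Manager prototype.

    # Hypothetical workload classes with objectives, observed response times,
    # and admission (concurrency) limits.
    classes = {
        "high": {"objective_s": 1.0,  "observed_s": 1.8, "limit": 20},
        "low":  {"objective_s": 10.0, "observed_s": 4.0, "limit": 15},
    }

    def adjust(classes):
        high, low = classes["high"], classes["low"]
        if high["observed_s"] > high["objective_s"] and low["limit"] > 5:
            # Shift capacity from the less important class to the more important one.
            low["limit"] -= 5
            high["limit"] += 5
        return classes

    print(adjust(classes))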

Querying WSDL Repositories with Grok
Douglas Martin, James R. Cordy, and Thomas R. Dean
(Queen's University, Canada)
douglashmartin@gmail.com

In this work, we present an approach for querying a WSDL (Web Service Description Language) repository using Grok, an engine with its own unique language that makes it possible to query a factbase of binary relations. Using TXL, we extract a set of facts from a WSDL repository, and use Grok to ask common questions about the web services in the repository.
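
To illustrate the underlying idea of querying a factbase of binary relations, the Python sketch below composes two extracted relations; this is plain Python for illustration only and does not use Grok's actual syntax, and the facts are invented examples of what WSDL extraction might produce.

    # Binary relations as sets of tuples: which service defines which operation,
    # and which operation uses which message type.
    defines = {("OrderService", "placeOrder"), ("OrderService", "cancelOrder")}
    uses_type = {("placeOrder", "OrderType"), ("cancelOrder", "OrderIdType")}

    def compose(r, s):
        """Relational composition: (a, c) if (a, b) in r and (b, c) in s."""
        return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

    # "Which services depend on OrderType?"
    service_uses_type = compose(defines, uses_type)
    print({svc for (svc, t) in service_uses_type if t == "OrderType"})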

Clone Detection of JavaScript-based Malware
Saruhan A. Karademir
(Queen’s University, Canada);
Sylvain P. Leblanc
(Royal Military College of Canada, Canada);
Thomas R. Dean
(Queen’s University, Canada)
7sak@queensu.ca

JavaScript-borne malware is a primary vector of attack on the modern web. Signature-based detection mechanisms can provide a defense against these attacks; however, such systems are not particularly flexible, and small changes to the malware can hide it from these tools. Some malware is also obfuscated by in-source compression engines that extract the actual malicious code at execution time. Our research addresses this challenge by building a PDF-based JavaScript malware detection pipeline using the NiCad code clone detection tool. This clone-detection system provides increased flexibility by taking advantage of the inherent availability of JavaScript source code. Compression-based malware is dynamically detected and extracted for further analysis using the V8 JavaScript execution engine. We discuss the flexibility and performance questions that arise from using clone detection in lieu of traditional detection mechanisms.
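
As a rough illustration of the pipeline's comparison step, the sketch below normalizes JavaScript and computes a textual similarity against a known sample; this crude check is only a stand-in for NiCad's near-miss clone detection, and the PDF extraction and V8 unpacking steps are omitted. All samples and the threshold are invented.

    import re
    from difflib import SequenceMatcher

    def normalize(js_source: str) -> str:
        # Strip block and line comments, then collapse whitespace so superficial
        # edits to the malware matter less when comparing against known samples.
        js_source = re.sub(r"/\*.*?\*/", "", js_source, flags=re.S)
        js_source = re.sub(r"//[^\n]*", "", js_source)
        return re.sub(r"\s+", " ", js_source).strip()

    def similarity(candidate: str, known_sample: str) -> float:
        return SequenceMatcher(None, normalize(candidate), normalize(known_sample)).ratio()

    known  = "eval(unescape('%75%6e')); // payload dropper"
    sample = "eval( unescape('%75%6e') );  /* repacked variant */"
    print(similarity(sample, known) > 0.7)   # True: flag for further review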

Continuous Improvement of Regulations and Compliance by Complementing Business Intelligence Tools with Goal Analysis
Omar Badreddin
(University of Ottawa, Canada)
oobahy@gmail.com

The traditional governance model focuses entirely on compliance: processes and activities aim to raise compliance levels by ensuring that regulated entities strictly follow regulations. We propose to complement the traditional model with means that allow regulatory institutions to reason about the originating regulations as well. Business Intelligence tools provide excellent insights into compliance, but do a poorer job of providing insights about prescriptive regulations that do not follow a structure directly suitable for database systems. We propose the use of goal modeling to provide insights about the compliance levels of the regulations themselves. Effectively, we broaden the traditional governance model to include regulations and legislation.

Will My Patch Make It? And When?
Yujuan Jiang and Bram Adams
(École Polytechnique de Montréal, Canada)
yujuan.jiang@polymtl.ca

The Linux kernel is maintained by thousands of volunteers who actively submit patches in the hope that their new feature or bug fix will make it into the next kernel release. However, not every patch makes it, and of those that do, some require far more reviewing and testing than others. To help volunteers understand which patches are worthwhile to pursue, we build models of the probability that a given patch submission will be accepted and of the time it will take to get in. Our approach traces patches all the way back from their accepted version to the initial emails discussing them, and collects attributes from the emails and patches. We then build classification models to explore the relationship between the collected attributes and the outcome of the patch. We built and evaluated models on data from four years of kernel development.
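
A toy sketch of the classification step is shown below, using invented features and data rather than the attributes mined in the study; a companion regression model (not shown) would estimate the time to acceptance.

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical features per patch submission:
    # [lines changed, files touched, review emails in thread, prior accepted patches by author]
    X = [
        [12,   1, 3, 40],
        [950, 30, 1,  0],
        [45,   3, 8, 12],
        [300, 10, 2,  1],
    ]
    y = [1, 0, 1, 0]   # 1 = accepted into the kernel, 0 = not accepted

    model = DecisionTreeClassifier(random_state=0).fit(X, y)
    print(model.predict([[20, 2, 5, 25]]))   # predicted acceptance for a new patch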

Build System Migration in the Eclipse Ecosystem
Mathieu Bollen and Bram Adams
(École Polytechnique de Montréal, Canada)
mathieu.bollen@gmail.com

As the source code of a project evolves, the build system responsible for compiling it has to evolve as well, which can lead to a build system so complex that it takes too much time to maintain and starts to malfunction. As a seemingly "easy" way out, projects like KDE and the Linux kernel tend to migrate their build system towards more powerful build system technologies. However, such migrations involve great risks, challenges and costs, which are still largely undocumented. In order to uncover these risks, we are currently analyzing how sub-projects of the Eclipse project migrated recently toward Eclipse's new build system technology, Tycho. As a first step, we are comparing size and complexity measures before and after migration to learn whether or not the migration of the build system technology really improved the build system’s design.

Model Correctness Patterns - An Experience Report
Azzam Maraee, Mira Balaban, and Arnon Sturm
(Ben-Gurion University of the Negev, Israel)
sturm@bgu.ac.il

Models are the backbone of the emerging Model-Driven Engineering approach, whose major theme is the development of software via repeated model transformations. The quality of the models used in such a process affects not only the final result but also the development process itself. To achieve high-quality models, developers must be made aware of model design problems and be able to identify and correct them. In this work we examine the role of class-diagram correctness patterns as an instrument for raising awareness of modeling problems and for improving class-diagram modeling. To support this notion, we developed a catalog of correctness and quality design (anti-)patterns for class diagrams. The patterns in the catalog characterize problems, analyze their causes, and provide repair advice. In addition, we describe an experiment in which the anti-patterns were used to identify correctness and quality problems in class diagrams.

A Methodology for Integrating Security Policies within the Software Development Process
Jenny Abramov, Omer Anson, Michal Dahan, Peretz Shoval, and Arnon Sturm
(Ben-Gurion University of the Negev, Israel)
sturm@bgu.ac.il

Security in general, and database security in particular, is crucial for organizations. While functional requirements are defined in the early stages of the development process, non-functional requirements such as security tend to be neglected or dealt with only at the end of it. Various efforts have been made to address this problem; however, none of them provides a complete framework to guide, enforce, and verify the correct design of security policies, and eventually generate code from that design. In this work we present a methodology that addresses these gaps and assists developers in designing and implementing secure databases that comply with organizational security policies. The methodology is supported by a CASE tool. The use of the proposed methodology was evaluated in an empirical experiment and two case studies.