The DEBS Grand Challenge is an annual competition, started in 2010, in which both academics and professionals compete with the goal of building faster and more accurate distributed and event-based systems. Every year, the DEBS Grand Challenge participants have a chance to explore a new data set and a new problem, and can compare their results based on common evaluation criteria. The winners of the challenge are announced during the conference. The 2020 DEBS Grand Challenge focuses on Non-Intrusive Load Monitoring (NILM). The goal of the challenge is to detect when appliances contributing to an aggregated stream of voltage and current readings from a smart meter are switched on or off. NILM is leveraged in many contexts, ranging from monitoring of energy consumption to home automation. The 2020 Grand Challenge evaluation platform is based on the HOBBIT project.
Participants of the challenge compete for two awards: (1) the performance award and (2) the audience award. The winner of the performance award will be determined through the automated evaluation platform, according to the evaluation criteria specified below. The evaluation criteria measure the speed and correctness of submitted solutions. The winning team of the performance award will receive 1000 USD as prize money. The winner of the audience award will be determined among the finalists who present in the Grand Challenge session of the DEBS conference. In this session, the audience will be asked to vote for the solution with the most interesting concepts; the solution with the highest number of votes wins. The intention of the audience award is to highlight qualities of the solutions that are not tied to performance. Specifically, the audience and challenge participants are encouraged to consider aspects that go beyond raw performance.
There are two ways for teams to become finalists and get a presentation slot in the Grand Challenge session during the DEBS Conference: (1) up to two teams with the best performance (according to the final evaluation) will be nominated; (2) the Grand Challenge Program Committee will review submitted papers for each solution and nominate up to two teams with the most novel concepts. All submissions of sufficient quality that do not make it to the finals will get a chance to be presented at the DEBS conference as posters. The quality of the submissions will be determined based on the review process performed by the DEBS Grand Challenge Program Committee.
The data provided for the challenge consists of energy measurements from a smart meter. The schema of the input tuple is thus < i,v,c >, where the attributes i, v, and c represent the tuple sequence id, the voltage, and the current, respectively. Participants get access to a sample data set after they register in HotCRP; the links will be shared by the GC chairs upon registration. From the 14th of January, the full dataset is available to participants who register for the challenge via HotCRP (https://debs2020gc.hotcrp.com/).
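Only the < i,v,c > schema is specified above; the concrete wire format is not. As a minimal sketch, assuming the readings arrive as comma-separated text lines, one tuple could be parsed like this (the `Reading` type and `parse_reading` helper are illustrative, not part of the platform):

```python
from typing import NamedTuple

class Reading(NamedTuple):
    i: int    # tuple sequence id
    v: float  # voltage
    c: float  # current

def parse_reading(line: str) -> Reading:
    """Parse one smart-meter reading from a hypothetical CSV line."""
    i, v, c = line.strip().split(",")
    return Reading(int(i), float(v), float(c))
```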
The first query aims at detecting devices that are turned on or off in a stream of voltage and current readings from a smart meter, resembling the aggregated energy consumption in an office building. The processing steps are described in the following.
For each 1000th input tuple (i.e., each time window W1 is full), the query should produce a result. This result specifies whether an event has been detected within that window and, if so, the sequence number of the window W1 in which the event was detected. The schema of the output stream is thus < s,d,event_s >, where s is the sequence id of the window W1 input tuple to which this output tuple refers, d is a boolean attribute specifying whether an event was detected, and event_s is the window W1 sequence id of the detected event (if any).
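The windowing described above can be sketched as a tumbling count window of 1000 tuples. The following is an illustrative skeleton, not the official baseline: the `detect` callback stands in for the actual event-detection logic, which is the core of the challenge.

```python
from typing import Callable, Iterable, Iterator, NamedTuple, Optional

W1_SIZE = 1000  # input tuples per window W1

class Output(NamedTuple):
    s: int                  # W1 sequence id this result refers to
    d: bool                 # whether an event was detected
    event_s: Optional[int]  # W1 sequence id of the detected event, if any

def windowed_results(readings: Iterable,
                     detect: Callable[[list], Optional[int]]) -> Iterator[Output]:
    """Emit one Output tuple per full window W1 of 1000 input tuples.

    `detect` is a placeholder: it inspects a full window and returns the
    W1 sequence id of a detected event, or None if no event occurred.
    """
    buffer: list = []
    window_seq = 0
    for r in readings:
        buffer.append(r)
        if len(buffer) == W1_SIZE:           # window W1 is full
            event_s = detect(buffer)
            yield Output(window_seq, event_s is not None, event_s)
            buffer.clear()
            window_seq += 1
```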
The second query is a variation of Query 1. More concretely, this query is expected to process an input stream that can contain both late arrivals and missing tuples. Since the semantics of Query 2 are equal to those of Query 1, participants are expected to provide a solution that is able to trade off timeliness and accuracy of results. Upon reception of an input tuple, the proposed solution can produce an output (based only on the data observed so far) or postpone the output until late-arriving input tuples are received. In this second case, the solution can wait for possible late arrivals to provide an output that more accurately identifies the timestamp at which an event is detected (if any). To differentiate between missing tuples and late arrivals, note that late tuples are bound to 20 windows (20000 time units). The results produced by the submitted solutions will be compared with those produced by a baseline that observes all input data in order (i.e., with no missing tuples or late arrivals) and scored according to the process described in the remainder. Please note that each output tuple should be produced no more than once.
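The stated 20-window bound makes a watermark-style reorder buffer possible: once the highest sequence id seen is more than 20000 ahead of a gap, that tuple can no longer arrive late and must be missing. The sketch below illustrates this idea under those assumptions; it is one possible design, not the required approach.

```python
from typing import Iterable, Iterator, Tuple

# A late tuple arrives within 20 windows (20,000 sequence ids) of its slot.
LATENESS_BOUND = 20_000

def reorder(stream: Iterable[Tuple]) -> Iterator[Tuple]:
    """Yield tuples in sequence order, assuming tuple[0] is the sequence id.

    Ids still absent once the watermark passes them can no longer arrive
    late, so they are treated as missing and skipped.
    """
    pending: dict = {}   # sequence id -> buffered tuple
    next_id = 0          # next sequence id to emit
    max_seen = -1
    for t in stream:
        pending[t[0]] = t
        max_seen = max(max_seen, t[0])
        watermark = max_seen - LATENESS_BOUND
        # Emit in order; a gap at or below the watermark is a missing tuple.
        while next_id in pending or next_id <= watermark:
            if next_id in pending:
                yield pending.pop(next_id)
            next_id += 1
```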
Evaluation of Query 1 addresses two aspects: (1) correctness of results and (2) processing speed. The first is taken into account by comparing the results of a proposed solution with those of our baseline. Only solutions that produce correct results (i.e., that produce the same set of output tuples as our baseline, in the same order) are considered valid. The second aspect is captured with two measures: the total runtime (rank_0) and the latency (rank_1). The specifics of the ranking for processing speed are defined as follows. The total runtime (rank_0) is the time span between the sending of the first input tuple and the reception of the result for the last input tuple. The lower the total runtime, the higher the position in the ranking. The latency (rank_1) is measured as the average time span between retrieving an input tuple and providing the corresponding output tuple. The lower the latency, the higher the position in the ranking.
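For intuition, the two speed measures can be expressed as simple functions of timestamps. The function names and the timestamp representation below are illustrative assumptions; the platform measures these quantities itself.

```python
from typing import Iterable, Tuple

def total_runtime(first_sent: float, last_result_received: float) -> float:
    """rank_0 measure: span from sending the first input tuple to
    receiving the result for the last input tuple (seconds)."""
    return last_result_received - first_sent

def average_latency(pairs: Iterable[Tuple[float, float]]) -> float:
    """rank_1 measure: mean span between retrieving an input tuple
    and producing its corresponding output tuple.

    `pairs` holds (input_retrieved_at, output_produced_at) timestamps.
    """
    spans = [out_t - in_t for in_t, out_t in pairs]
    return sum(spans) / len(spans)
```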
Evaluation of Query 2 addresses two aspects: (1) the timeliness of the produced results (rank_3) and (2) their accuracy (rank_4).
The overall final rank is calculated as the sum of all rankings for both queries. The solution with the lowest overall final rank wins the performance award. In case of ties, the paper review score will be used as a tie-breaker.
For local evaluation and testing, you can download a Docker Compose-based setup here. Questions regarding the data set and the evaluation platform should be posted in the issue tracker of the aforementioned Git repository. A baseline implementation of both queries is made available via the provided Docker image.
Participation in the DEBS 2020 Grand Challenge consists of three steps: (1) registration, (2) iterative solution submission, and (3) paper submission. The first step is to pre-register your submission in the HotCRP Grand Challenge track, at the following link: https://debs2020gc.hotcrp.com/. Pre-registration in HotCRP is necessary to (1) state the intent of a team to participate in the Grand Challenge and establish a communication channel with the Grand Challenge organizers, and (2) get access to the challenge input data. At this step, it is sufficient to start a new submission, register the members of the team as authors, and submit an interim title for your work. Participants are not required to submit a PDF at this stage. Solutions to the challenge, once developed, must be submitted to the evaluation platform in order to be benchmarked in the challenge. The evaluation platform provides detailed feedback on performance and allows the solution to be updated in an iterative process. A solution can be continuously improved until the challenge closing date. Evaluation results of the last submitted solution will be used for the final performance ranking. The last step is to upload a short paper (minimum 2 pages, maximum 6 pages) describing the final solution to the HotCRP system. All papers will be reviewed by the DEBS Grand Challenge Program Committee to assess the merit and originality of submitted solutions. All solutions of sufficient quality will be presented during the poster session at the DEBS 2020 conference.
| Event | Date |
|---|---|
| Tutorial Proposal Submission | |
| Abstract Submission for Research Track | |
| Research and Industry Paper Submission | |
| Grand Challenge Solution Submission | |
| Author Notification Industry Track | May 3rd, 2020 |
| Author Notification Research Track | |
| Doctoral Symposium Submission | |
| Poster & Demo Submission | |
| Camera Ready for All Tracks | |
| Conference | July 13th – 17th, 2020 |