Challenge 2019

Sussex-Huawei Locomotion Challenge 2019

The Sussex-Huawei Locomotion Dataset will be used in an activity recognition challenge with results to be presented at the HASCA Workshop at UbiComp 2019.

This follows our very successful 2018 challenge, which saw the participation of 22 teams.

This year’s edition uses previously unreleased data. The goal of this machine learning/data science challenge is to recognize 8 modes of locomotion and transportation (activities) from the inertial sensor data of a smartphone, independently of where the phone is placed. More precisely, the goal is to recognize the user’s activity from the data of the Hand phone, while training the model on data from smartphones at other body positions. A small amount of validation data from the Hand phone is also provided.

The dataset used for this challenge comprises 59 days of training data, 3 days of validation data, and 20 days of test data.

The participants will have to develop an algorithm pipeline that will process the sensor data, create models and output the recognized activities.
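As a minimal illustrative sketch only (the feature set and classifier are our own assumptions, not part of the challenge), the frames → features → model → per-frame activity flow could look like this, here with synthetic frames and a nearest-centroid classifier:

```python
import numpy as np

def extract_features(frames):
    # frames: (n_frames, 500) raw samples; simple per-frame statistics
    return np.stack([frames.mean(axis=1), frames.std(axis=1)], axis=1)

def fit_centroids(features, labels):
    # one centroid per activity class
    classes = np.unique(labels)
    return classes, np.stack([features[labels == c].mean(axis=0) for c in classes])

def predict(features, classes, centroids):
    # assign each frame to the nearest class centroid
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# synthetic demo: two "activities" with clearly different signal energy
rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.1, size=(50, 500))   # low-variance frames
walk = rng.normal(0.0, 2.0, size=(50, 500))    # high-variance frames
X = extract_features(np.vstack([still, walk]))
y = np.array([1] * 50 + [8] * 50)              # two of the 8 class labels
classes, centroids = fit_centroids(X, y)
pred = predict(X, classes, centroids)
accuracy = (pred == y).mean()
```

A competitive entry would of course use richer features (spectral, magnitude-based) and a stronger classifier; the point here is only the overall pipeline structure.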

Prizes

  1. £800
  2. £400
  3. £200

*Note: Prizes may increase subject to additional sponsors.

Deadlines

  • Registration via email: as soon as possible, but not later than 07.06.2019 (extended from 01.06.2019)
  • Challenge duration: 15.05.2019 – 30.06.2019
  • Submission deadline: 30.06.2019
  • HASCA-SHL paper submission: 30.06.2019
  • HASCA-SHL review notification: 05.07.2019
  • HASCA-SHL camera ready submission: 08.07.2019
  • HASCA-SHL Workshop presentation at ISWC/UbiComp in London: 10.09.2019
  • Release of the ground-truth of the test data: 16.09.2019

Registration

Each team should send a registration email to shldataset.challenge@gmail.com as soon as possible, but not later than 07.06.2019 (extended from 01.06.2019), stating:

  • The name of the team
  • The names of the participants in the team
  • The organization/company (individual participants are also welcome)
  • The contact person and their email address

HASCA Workshop

To be part of the final ranking, participants will be required to submit a detailed paper to the HASCA workshop. The paper should contain a technical description of the processing pipeline, the algorithms, and the results achieved during the development/training phase. The paper submission deadline is 30.06.2019. Submissions must follow the HASCA format, with a length of 3 to 6 pages.

Submission of predictions on the test dataset

The participants should submit a plain-text predictions file (e.g. “teamName_predictions.txt”) containing their predictions for the testing dataset, corresponding to the sensor data in that dataset. The structure of the file should be the same as that of the label file in the training dataset; that is, the submitted file should contain a matrix of 55811 lines x 500 columns, matching the frames and samples of the testing dataset.

The predictions file should be named in the format “teamName_predictions.txt”. An example of a submission is available here.
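A sketch of producing such a file, assuming the same per-sample label layout as Label.txt (one class label in 1–8 repeated across the 500 columns of each frame). The demo uses 100 frames instead of the real 55811 to stay small, and the predictions themselves are random placeholders:

```python
import numpy as np

# Real test set is 55811 frames x 500 samples; small demo size used here.
N_FRAMES, FRAME_LEN = 100, 500

# Placeholder predictions: one class label (1..8) per frame, repeated
# across the 500 samples of that frame, mirroring the Label.txt layout.
frame_labels = np.random.default_rng(0).integers(1, 9, size=N_FRAMES)
matrix = np.repeat(frame_labels[:, None], FRAME_LEN, axis=1)
np.savetxt("teamName_predictions.txt", matrix, fmt="%d")

# sanity-check: the file parses back to the expected shape and label range
check = np.loadtxt("teamName_predictions.txt", dtype=int)
```

A real submission would replace the random labels with the model's per-frame outputs; the shape check is a cheap way to catch formatting mistakes before emailing the link.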

The participants’ predictions should be submitted online by sending an email to shldataset.challenge@gmail.com containing a link to the predictions file, hosted on a service such as Dropbox or Google Drive. Participants who cannot provide a link via a file-sharing service should contact the organizers at shldataset.challenge@gmail.com, who will provide an alternative way to send the data.

To be part of the final ranking, participants will be required to publish a detailed paper in the proceedings of the HASCA workshop. The paper submission deadline is 30.06.2019. All papers must be formatted in the “ACM SIGCHI Extended Abstracts” format (landscape). Submissions do not need to be anonymous.

Submission is electronic, via the Precision Conference system. The submission site is open at https://new.precisionconference.com/user/login (select SIGCHI / UbiComp 2019 / UbiComp 2019 Workshop – HASCA-SHL and press the Go button).

A single submission is allowed per team. The same person cannot be in multiple teams, except if that person is a supervisor. The number of supervisors is limited to 3 per team.

Dataset and format

The data is divided into three parts: train, validation and test, comprising 59 days, 3 days and 20 days of data respectively. The train, validation and test data were generated by segmenting the whole recording with a non-overlapping sliding window of 5 seconds.
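The 5-second non-overlapping segmentation can be sketched as follows (a generic illustration, not the organizers' code):

```python
import numpy as np

FS = 100          # sampling rate in Hz
WIN = 5 * FS      # 500 samples per non-overlapping 5-second frame

def segment(signal, win=WIN):
    # drop the incomplete tail and reshape into (n_frames, win)
    n = len(signal) // win
    return signal[: n * win].reshape(n, win)

stream = np.arange(1234.0)   # a fake continuous single-channel stream
frames = segment(stream)     # 1234 samples -> 2 full frames of 500
```

Each row of the result corresponds to one line of the released data files.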

The train data contains the raw sensor data from one user (user 1) and three phone locations (Bag, Hips, Torso), together with the activity labels (class labels). The train data contains three sub-directories (Bag, Hips, Torso), each with the following files:

  • Acc_*.txt (with * being x, y, or z): acceleration
  • Gra_*.txt: gravity
  • Gyr_*.txt: rate of turn
  • LAcc_*.txt: linear acceleration
  • Mag_*.txt: magnetic field
  • Ori_*.txt (with * being w, x, y, z): orientation of the device in quaternions
  • Pressure.txt: atmospheric pressure
  • Label.txt: activity classes.

Each file contains 196072 lines x 500 columns, corresponding to 196072 frames of 500 samples each (5 seconds at a sampling rate of 100 Hz). The frames of the train data are consecutive in time.
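Since Label.txt carries one label per sample (500 per line), a common preprocessing step is to collapse each line to a single per-frame label, e.g. by majority vote. A small sketch (our own helper, not part of the dataset tools):

```python
import numpy as np

def frame_labels(label_matrix):
    # label_matrix: (n_frames, 500) per-sample class labels, as in Label.txt;
    # collapse each row to one label by majority vote across its samples
    out = np.empty(label_matrix.shape[0], dtype=int)
    for i, row in enumerate(label_matrix):
        vals, counts = np.unique(row, return_counts=True)
        out[i] = vals[counts.argmax()]
    return out

# tiny synthetic stand-in for Label.txt (the real file is 196072 x 500)
demo = np.array([[1] * 500, [2] * 499 + [3]])
labels = frame_labels(demo)
```

Frames whose 500 samples span an activity transition contain mixed labels, which is why a vote (rather than taking the first sample) is the usual choice.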

The test data comprises only the data of the Hand phone (same file format as the train and validation data) and no class labels. This is the data on which the predictions must be made. Each file contains 55811 lines x 500 columns (5 seconds at a sampling rate of 100 Hz). In order to challenge the real-time performance of the classification, the frames are shuffled, i.e. two successive frames in the file are likely not consecutive in time.

The validation data contains four sub-directories (Bag, Hips, Torso and Hand), with the same files as the train dataset. Each file in the validation data contains 12177 lines x 500 columns, corresponding to 12177 frames of 500 samples each (5 seconds at a sampling rate of 100 Hz). The frames of the validation data are shuffled in the same way as the frames of the test data. The validation data is extracted from the already-released preview of the SHL dataset and includes some annotated data from the Hand phone, which can be used to train a phone-position-independent activity recognition model.

Download train data

Download validation data

Download test data

Ground truth of the test data (released on 16/09/2019)

Download the test labels, the permutations applied to the raw data of the validation and test set, and the recording days which were used to make up the train, validation and test sets.

Rules

Some of the main rules are listed below. The detailed rules are contained in the following document.

  • Eligibility
    • You do not work in or collaborate with the SHL project (http://www.shl-dataset.org/);
    • If you are not qualified to enter the contest but submit an entry anyway, the entry is voluntary: the organizers reserve the right to evaluate it for scientific purposes, but under no circumstances will such entries qualify for sponsored prizes.
  • Entry
    • Registration (see above): as soon as possible but not later than 07.06.2019 (extended from 01.06.2019).
    • Challenge: Participants will submit prediction results on test data.
    • Workshop paper: To be part of the final ranking, participants will be required to publish a detailed paper in the proceedings of the HASCA workshop (http://hasca2019.hasc.jp/); The dates will be set during the competition.
    • Submission: The participants’ predictions should be submitted online by sending an email to shldataset.challenge@gmail.com containing a link to the predictions file, hosted on a service such as Dropbox or Google Drive. Participants who cannot provide a link via a file-sharing service should contact the organizers at shldataset.challenge@gmail.com, who will provide an alternative way to send the data.
    • A single submission is allowed per team. The same person cannot be in multiple teams, except if that person is a supervisor. The number of supervisors is limited to 3 per team.

Contact

All inquiries should be directed to: shldataset.challenge@gmail.com

Organizers

  • Dr. Hristijan Gjoreski, University of Sussex (UK) & Ss. Cyril and Methodius University (MK)
  • Dr. Lin Wang, Queen Mary University of London (UK)
  • Dr. Daniel Roggen, University of Sussex (UK)
  • Dr. Kazuya Murao, Ritsumeikan University (JP)
  • Dr. Tsuyoshi Okita, Kyushu Institute of Technology (JP)
  • Mathias Ciliberto, University of Sussex (UK)
  • Dr. Paula Lago, Kyushu Institute of Technology (JP)




Competition Results

Winning teams:

  1. JSI-First: 78.42%
  2. Yonsei-MCML: 75.88%
  3. We_Can_Fly: 70.30%