Challenge 2023

Sussex-Huawei Locomotion Challenge 2023

The Sussex-Huawei Locomotion Dataset [1-2] will be used in an activity recognition challenge, with results to be presented at the HASCA Workshop at UbiComp 2023.

This fifth edition of the challenge follows on from our very successful 2018, 2019, 2020 and 2021 challenges, which saw the participation of more than 100 teams and 250 researchers [4-5]. The goal of this year's edition is to recognize 8 modes of locomotion and transportation (activities) in a user-independent manner based on motion and GPS sensor data. This is different from the 2018-2020 challenges, which aimed at transportation mode recognition from motion sensors, and the 2021 challenge, which aimed at recognition from GPS and radio sensors. This webpage points to the training, validation and testing datasets. The participants will have to develop an algorithm pipeline that processes the sensor data, creates models and outputs the recognized activities.

 

Prizes

  1. £800
  2. £400
  3. £200

*Note: Prizes may increase subject to additional sponsors.

Deadlines

  • Registration via email: as soon as possible, but not later than 30.04.2023
  • Challenge duration: 20.04.2023 – 30.06.2023
  • Submission deadline: 30.06.2023
  • HASCA-SHL paper submission: 10.07.2023
  • HASCA-SHL review notification: 21.07.2023
  • HASCA-SHL camera ready submission: 27.07.2023
  • UbiComp early-bird registration: 15.07.2023 
  • HASCA workshop: 8-12 October 2023, Cancún, Mexico.
  • Release of the ground-truth of the test data: TBD

Registration

Each team should send a registration email to shldataset.challenge@gmail.com as soon as possible, but not later than 30.04.2023, stating:

  • The name of the team
  • The names of the participants in the team
  • The organization/company (individuals are also encouraged)
  • The contact person with email address

HASCA Workshop

To be part of the final ranking, participants will be required to submit a detailed paper to the HASCA workshop. The paper should contain a technical description of the processing pipeline, the algorithms and the results achieved during the development/training phase. The submissions must follow the HASCA format, with a page limit between 3 and 6 pages.

Submission of predictions on the test dataset

The participants should submit a plain-text prediction file (e.g. “teamName_predictions.txt”) for the testing dataset, containing the time stamps and the predicted labels. Specifically, the submitted file should contain a matrix of size 46385816 lines x 2 columns, where the first column corresponds to the time stamps and the second column to the predictions; a minimal sketch of producing such a file is shown below. An example of a submission is available. The predictions should be submitted online by sending an email to shldataset.challenge@gmail.com containing a link to the prediction file, shared via a service such as Dropbox, Google Drive, etc. Participants who cannot provide a link via a file-sharing service should contact the organizers at shldataset.challenge@gmail.com, who will provide an alternative way to send the data.

To be part of the final ranking, participants will be required to publish a detailed paper in the proceedings of the HASCA workshop. The deadline for the paper submission is 10.07.2023 (see Deadlines above). All papers must be formatted using the ACM SIGCHI Master Article template with 2 columns. The template is available at TEMPLATES ISWC/UBICOMP2023. Submissions do not need to be anonymous. Submission is electronic, via the Precision Conference system. The submission site is open at https://new.precisionconference.com/submissions (select SIGCHI / UbiComp 2023 / UbiComp 2023 Workshop – HASCA-SHL and push the Go button).
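
As an illustration, here is a minimal sketch (in Python, assuming NumPy) of writing a prediction file in the required two-column format. The paths and the all-"Still" placeholder predictions are hypothetical; the test timestamps come from the Label_idx.txt file described under Dataset format below.

```python
import numpy as np

# Hypothetical path: adjust to where the test data lives.
timestamps = np.loadtxt("Testing/Label_idx.txt", dtype=np.int64)  # epoch time [ms]

# Placeholder predictions: label every sample as Still=1. A real pipeline
# would put its per-timestamp class predictions (1-8) here instead.
predictions = np.ones_like(timestamps)

# Two columns: epoch time [ms] and predicted label, one row per timestamp.
np.savetxt("teamName_predictions.txt",
           np.column_stack([timestamps, predictions]), fmt="%d")
```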


A single submission is allowed per team. The same person cannot be in multiple teams, except if that person is a supervisor. The number of supervisors is limited to 3 per team.

Dataset format

The data is divided into three parts: training, validation and testing. The data comprises 59 days of training data, 6 days of validation data and 28 days of test data. The training data was collected by User1. The validation data is a mixture of User2 and User3, as is the testing data.

Note: The motion sensors (accelerometer, gyroscope, magnetometer) are sampled synchronously at 100 Hz. The label file uses the same sampling time stamps as the motion sensors. The radio sensors (GPS and location) are sampled asynchronously, at a rate of roughly 1 Hz that varies over time for each sensor. Depending on the GPS satellite conditions, a sensor may receive no signal during a certain interval, in which case no data is recorded. The first column of each sensor file contains the epoch time in ms.
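
To make the timing concrete, below is a minimal loading sketch, assuming NumPy, the Training/Hand file layout described below, and that Location.txt rows have a fixed number of columns. It checks the ~10 ms spacing of the 100 Hz motion samples and maps each asynchronous location fix to a nearby motion sample by epoch time.

```python
import numpy as np

# Assumed paths, following the directory layout described below.
acc = np.loadtxt("Training/Hand/Acc.txt")       # N x 4: epoch time [ms], x, y, z
loc = np.loadtxt("Training/Hand/Location.txt")  # ~1 Hz, asynchronous; column 1 is epoch time [ms]

# Motion sensors are sampled at 100 Hz, so consecutive timestamps
# should be about 10 ms apart.
print(np.median(np.diff(acc[:, 0])))  # expect roughly 10.0

# Map each location fix to a nearby motion sample by timestamp
# (insertion index into the 100 Hz stream; refine to true nearest if needed).
idx = np.clip(np.searchsorted(acc[:, 0], loc[:, 0]), 0, len(acc) - 1)
```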

The training data contains the raw sensor data from one user (User1) and four phone locations (bag, hips, torso, hand). It also includes the activity labels (class labels). The training data contains four sub-directories (Bag, Hips, Torso, Hand), with the following files in each sub-directory.

Training/Hand
Filename Size Format (column: content)
Acc.txt 98052438 x 4 1: Epoch time [ms]; 2: Acc_x; 3: Acc_y; 4: Acc_z
Gyr.txt 98052438 x 4 1: Epoch time [ms]; 2: Gyr_x; 3: Gyr_y; 4: Gyr_z
Mag.txt 98052438 x 4 1: Epoch time [ms]; 2: Mag_x; 3: Mag_y; 4: Mag_z
Location.txt 1088500 x n_var 1: Epoch time [ms]; 2: Ignore; 3: Ignore; 4: Accuracy of this location [m]; 5: Latitude [degrees]; 6: Longitude [degrees]; 7: Altitude [m]
GPS.txt 1421690 x n_var 1: Epoch time [ms]; 2: Ignore; 3: Ignore; 4+: variable number of entries for GPS data (see note below)
Label.txt 98052438 x 2 1: Epoch time [ms]; 2: Label (Still=1, Walking=2, Run=3, Bike=4, Car=5, Bus=6, Train=7, Subway=8)

Note on GPS.txt: if no satellite is visible, the 4th column is 0. Otherwise, for each visible satellite 4 columns are added to the data file, containing in order: ID, SNR, Azimuth [degrees], Elevation [degrees]; an additional last column indicates the number of satellites. For example:

1489485950011 161777247369 10889909374 0
indicates no satellite visible;
1489485951014 162780045286 10889909374 7 12.0 56.0 32.0 1
indicates one satellite visible (satellite 7 with SNR=12, azimuth=56 and elevation=32);
1489485962025 173791715076 10889909374 7 15.0 56.0 32.0 30 12.0 82.0 70.0 2
indicates two satellites visible (satellites 7 and 30).
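
Because of the variable-length rows, GPS.txt cannot be read as a fixed-width matrix. Below is a minimal line-parsing sketch in Python; the function name and return structure are our own illustration, not part of any dataset tooling.

```python
def parse_gps_line(line):
    """Parse one GPS.txt row into (epoch_ms, list of satellite tuples)."""
    v = line.split()
    epoch_ms = int(v[0])              # column 1: epoch time [ms]
    sats = []
    if float(v[3]) != 0:              # column 4 is 0 when no satellite is visible
        body = v[3:-1]                # the last column is the satellite count
        for i in range(0, len(body), 4):   # 4 values per satellite
            sat_id, snr, azimuth, elevation = map(float, body[i:i + 4])
            sats.append((int(sat_id), snr, azimuth, elevation))
    return epoch_ms, sats

# The example rows from the table above:
print(parse_gps_line("1489485950011 161777247369 10889909374 0"))
# -> (1489485950011, [])
print(parse_gps_line("1489485962025 173791715076 10889909374 "
                     "7 15.0 56.0 32.0 30 12.0 82.0 70.0 2"))
# -> (1489485962025, [(7, 15.0, 56.0, 32.0), (30, 12.0, 82.0, 70.0)])
```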

 

Training/Hips
Filename Size Format
Acc.txt 98052438 x 4 The same as the Training/Hand data.
Gyr.txt 98052438 x 4 The same as the Training/Hand data.
Mag.txt 98052438 x 4 The same as the Training/Hand data.
Location.txt 911109 x n_var The same as the Training/Hand data.
GPS.txt 1322749 x n_var The same as the Training/Hand data.
Label.txt 98052438 x 2 The same as the Training/Hand data.

 

Training/Torso
Filename Size Format
Acc.txt 98052438 x 4 The same as the Training/Hand data.
Gyr.txt 98052438 x 4 The same as the Training/Hand data.
Mag.txt 98052438 x 4 The same as the Training/Hand data.
Location.txt 918868 x n_var The same as the Training/Hand data.
GPS.txt 1305061 x n_var The same as the Training/Hand data.
Label.txt 98052438 x 2 The same as the Training/Hand data.

 

Training/Bag
Filename Size Format
Acc.txt 98052438 x 4 The same as the Training/Hand data.
Gyr.txt 98052438 x 4 The same as the Training/Hand data.
Mag.txt 98052438 x 4 The same as the Training/Hand data.
Location.txt 940755 x n_var The same as the Training/Hand data.
GPS.txt 1187537 x n_var The same as the Training/Hand data.
Label.txt 98052438 x 2 The same as the Training/Hand data.

 

The validation data contains the raw sensor data from the other two users (a mixture of User2 and User3) and four phone locations (bag, hips, torso, hand), with the same files as the training dataset.

Validation/Hand
Filename Size Format
Acc.txt 14395941 x 4 The same as the Training/Hand data.
Gyr.txt 14395941 x 4 The same as the Training/Hand data.
Mag.txt 14395941 x 4 The same as the Training/Hand data.
Location.txt 138471 x n_var The same as the Training/Hand data.
GPS.txt 180377 x n_var The same as the Training/Hand data.
Label.txt 14395941 x 2 The same as the Training/Hand data.

 

Validation/Hips
Filename Size Format
Acc.txt 14395941 x 4 The same as the Training/Hand data.
Gyr.txt 14395941 x 4 The same as the Training/Hand data.
Mag.txt 14395941 x 4 The same as the Training/Hand data.
Location.txt 101524 x n_var The same as the Training/Hand data.
GPS.txt 157348 x n_var The same as the Training/Hand data.
Label.txt 14395941 x 2 The same as the Training/Hand data.

 

Validation/Torso
Filename Size Format
Acc.txt 14395941 x 4 The same as the Training/Hand data.
Gyr.txt 14395941 x 4 The same as the Training/Hand data.
Mag.txt 14395941 x 4 The same as the Training/Hand data.
Location.txt 108323 x n_var The same as the Training/Hand data.
GPS.txt 157851 x n_var The same as the Training/Hand data.
Label.txt 14395941 x 2 The same as the Training/Hand data.

 

Validation/Bag
Filename Size Format
Acc.txt 14395941 x 4 The same as the Training/Hand data.
Gyr.txt 14395941 x 4 The same as the Training/Hand data.
Mag.txt 14395941 x 4 The same as the Training/Hand data.
Location.txt 119251 x n_var The same as the Training/Hand data.
GPS.txt 163442 x n_var The same as the Training/Hand data.
Label.txt 14395941 x 2 The same as the Training/Hand data.

 

The testing data contains the raw sensor data from the other two users (a mixture of User2 and User3) and one phone location (unknown to the participants), with the same files as the training dataset but no class labels. The format of the data is shown in the table below.

Testing
Filename Size Format
Acc.txt 46385816 x n_var The same as the Training/Hand data.
Gyr.txt 46385816 x n_var The same as the Training/Hand data.
Mag.txt 46385816 x n_var The same as the Training/Hand data.
Location.txt 450932 x n_var The same as the Training/Hand data.
GPS.txt 600576 x n_var The same as the Training/Hand data.
Label_idx.txt 46385816 x 1 The timestamps for which to predict the transportation mode.
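
Before emailing the prediction file, it may be worth sanity-checking it against Label_idx.txt. A small sketch, assuming NumPy, the hypothetical file names used earlier, and that predictions must follow the order of Label_idx.txt:

```python
import numpy as np

idx = np.loadtxt("Testing/Label_idx.txt", dtype=np.int64)
pred = np.loadtxt("teamName_predictions.txt", dtype=np.int64)

assert pred.shape == (idx.shape[0], 2)              # 46385816 rows x 2 columns
assert np.array_equal(pred[:, 0], idx)              # timestamps match, same order
assert np.isin(pred[:, 1], np.arange(1, 9)).all()   # labels are in 1..8
```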

Downloads

Data

Submission example

This is an example of what the submitted classification result should look like: (download).

Ground truth of the test data (released on xx/xx/xxxx)

Download the ground-truth labels of the test set.


Rules

Some of the main rules are listed below. The detailed rules are contained in the following document.

  • Eligibility
    • You do not work in or collaborate with the SHL project (http://www.shl-dataset.org/);
    • If you are not qualified to enter the contest but submit an entry anyway, the entry is considered voluntary. The organizers reserve the right to evaluate it for scientific purposes, but under no circumstances will such entries qualify for sponsored prizes.
  • Entry
    • Registration (see above): as soon as possible but not later than 30.04.2023.
    • Challenge: Participants will submit prediction results on test data.
    • Workshop paper: To be part of the final ranking, participants will be required to publish a detailed paper in the proceedings of the HASCA workshop (http://hasca2023.hasc.jp/); The dates will be set during the competition.
    • Submission: The participants’ predictions should be submitted online by sending an email to shldataset.challenge@gmail.com containing a link to the prediction file, shared via a service such as Dropbox, Google Drive, etc. Participants who cannot provide a link via a file-sharing service should contact the organizers at shldataset.challenge@gmail.com, who will provide an alternative way to send the data.
    • A single submission is allowed per team. The same person cannot be in multiple teams, except if that person is a supervisor. The number of supervisors is limited to 3 per team.

Q&A

Contact

All inquiries should be directed to: shldataset.challenge@gmail.com

Organizers

  • Dr. Lin Wang, Queen Mary University of London (UK)
  • Dr. Daniel Roggen, University of Sussex (UK)
  • Dr. Hristijan Gjoreski, Ss. Cyril and Methodius University (MK)
  • Dr. Mathias Ciliberto, University of Sussex (UK)
  • Dr. Kazuya Murao, Ritsumeikan University (JP)
  • Dr. Tsuyoshi Okita, Kyushu Institute of Technology (JP)
  • Dr. Paula Lago, Concordia University in Montreal (CA)

References

[1] H. Gjoreski, M. Ciliberto, L. Wang, F.J.O. Morales, S. Mekki, S. Valentin, and D. Roggen, “The University of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices,” IEEE Access 6 (2018): 42592-42604. [DATASET INTRODUCTION]

[2] L. Wang, H. Gjoreski, M. Ciliberto, S. Mekki, S. Valentin, and D. Roggen, “Enabling reproducible research in sensor-based transportation mode recognition with the Sussex-Huawei dataset,” IEEE Access 7 (2019): 10870-10891. [GPS BASELINE + DATASET ANALYSIS ]

[3] L. Wang, H. Gjoreski, M. Ciliberto, S. Mekki, S. Valentin, and D. Roggen, “Benchmarking the SHL recognition challenge with classical and deep-learning pipelines,” in Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, pp. 1626-1635, 2018. [BASELINE FOR MOTION SENSORS]

[4] L. Wang, H. Gjoreski, M. Ciliberto, P. Lago, K. Murao, T. Okita, and D. Roggen, “Three-year review of the 2018–2020 SHL challenge on transportation and locomotion mode recognition from mobile sensors,” Frontiers in Computer Science, 3(713719): 1-24, Sep. 2021. [SHL 2018-2020 SUMMARY]

[5] L. Wang, M. Ciliberto, H. Gjoreski, P. Lago, K. Murao, T. Okita, and D. Roggen, “Locomotion and transportation mode recognition from GPS and radio signals: Summary of SHL challenge 2021,” Adjunct Proc. 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proc. 2021 ACM International Symposium on Wearable Computers (UbiComp’ 21), 412-422, Virtual Event, 2021. [SHL 2021 SUMMARY]

[6] S. Richoz, L. Wang, P. Birch, and D. Roggen, “Transportation mode recognition fusing wearable motion, sound and vision sensors,” IEEE Sensors Journal, 20(16): 9314-9328, Aug. 2020. [SENSOR FUSION]

 


Competition Results

Summary paper:

Summary of SHL Challenge 2023: Recognizing Locomotion and Transportation Mode from GPS and Motion Sensors  
Lin Wang, Hristijan Gjoreski, Mathias Ciliberto, Paula Lago, Kazuya Murao, Tsuyoshi Okita, Daniel Roggen.
[PDF]

 

Winning teams: 

  • HELP (95.99%)  
  • HYU-CSE (93.68%)  
  • Juliet (92.69%)