Overview

The proliferation of smart devices, mobile applications, and IoT systems in our daily lives has created significant opportunities for a more efficient, productive, healthy, and sustainable society centered on short- and long-term human needs. Translating those opportunities into reality has drawn increasing research attention from a number of disciplines. As human activity, behavior, and user experience become central to smart services and mobile applications, research is bringing a new focus to human-centered sensing, networking, and intelligent systems. The workshop also focuses on the development and practical application of novel sensing technologies within human-centric multi-device systems, incorporating both mobile and fixed infrastructure: mobile devices such as smartphones and wearables (smartwatches, earbuds, and rings), as well as stationary infrastructure such as cameras, WiFi, RFID readers, lighting systems, and base stations. The workshop explores the challenges and opportunities in a variety of sensing techniques, ranging from sensing with wireless signals to sensor-based technologies (temperature sensors, accelerometers, PPG, EEG, pH sensors) or combinations thereof, and discusses their roles in applications such as health monitoring, human behavior analytics, precise localization, and UI/UX for multi-device systems.

Keynotes

Wearable AI, WearTel, and Mersivity: Technology as a vessel to interface us to each other and to our surroundings/environment


Bio: Steve Mann, PhD (MIT '97), P.Eng. (Ontario), Fellow of the IEEE
  • Full Professor, University of Toronto
  • Visiting Full Professor, Stanford University, Department of Electrical Engineering, Room 216, 350 Serra Mall, Stanford, CA 94305
  • Winner of the 2025 IEEE Consumer Electronics Award
  • Chair of Silicon Valley Innovation & Entrepreneurship Forum (SVIEF)
  • Founding Member of IEEE Council on Extended Intelligence
  • Founder of the Water-Human-Computer Interface initiative: the new field of WaterHCI
  • Marquis Who's Who 2018 Albert Nelson Marquis Lifetime Achievement Award
  • Invented wearable computing in his childhood, brought this invention to MIT to found the MIT wearable computing project, and "persisted in his vision and ended up founding a new discipline." -- Nicholas Negroponte, MIT Media Lab Director, 1997.
  • Invented, designed, and built the world's first smartglasses (smart eyeglass) for computer vision.
  • Inventor of the world's first contact lens display, and the first implantable eye camera.
  • Invented, designed, and built the world's first smartwatch in 1998 (patent filed in 1999, featured on the cover of Linux Journal, July 2000), which he presented at IEEE ISSCC 2000, where he was named "The father of the wearable computer".
  • Inventor of HDR (High Dynamic Range) imaging, used in more than 2 billion smartphones.

Abstract

Wearable AI is an example of Mersivity. Mersivity regards technology as a vessel that interfaces us to each other and to our surroundings (environment). Indeed, the word "cyborg" (cybernetic organism) originates from the ancient Greek word "kybernetikos", referring to the skill of a helmsman. In the 1960s and 1970s, I created WearTel™️, a wearable telephone that I also envisioned as a wearable camera, wearable computer, etc., combining calculation, computation, and communication. At the time, computers were keeping us imprisoned in offices, whereas I envisioned computation as technology that becomes part of us wherever we want to go. Today we still have computers that imprison us: we are hunched over, looking at a screen, or immersed in a virtual reality that itself cannot be immersed, i.e., you can't go for a walk or a hike or a swim with it. I believe that if we're going to be surrounded by immersive technology, it should also be exmersive, i.e., immersible/submersible, or at the very least it should connect us with our surroundings -- it should be "mersive". This requirement gives rise to six fundamental signal flow paths, as outlined in the illustration below.

Mersivity illustration


Learning from Shitty Data: Dealing with Behavior Monitoring and Learning in Real-world Deployments


Bio: Pei Zhang is an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. He received his bachelor's degree from the California Institute of Technology in 2002 and his Ph.D. degree in Electrical Engineering from Princeton University in 2008. His early work ZebraNet is considered one of the seminal works in sensor networks, for which he received the SenSys Test-of-Time Award in 2017. His recent work focuses on cyber-physical systems that utilize the physical properties of vehicles and structures to discover surrounding physical information. His work combines machine-learning-based data models, physics-based models, and heuristic models to improve learning from small amounts of labeled sensor data. His work has been applied to the fields of medicine, farming, and smart retail, and is part of multiple startups. It has been featured in popular media including CNN, CBS, NBC, the Science Channel, the Discovery Channel, and Scientific American. In addition, he has received various best paper awards, the NSF CAREER Award (2012), the SenSys Test-of-Time Award (2017), and Google Faculty Awards (2013, 2016), and was a member of the Department of Defense Computer Science Studies Panel.

Abstract

This talk introduces physics-aided approaches to improve learning in cyber-physical systems (CPS). Learning has become a useful tool for data-rich problems, but its use in CPS has been limited by the need for a large amount of well-labeled data for each application and deployment. Obtaining such data is especially challenging, and often impossible, because of the many variables that can affect data distribution in CPS (e.g., weather, time, persons). This talk introduces combinational techniques that leverage physical models and hardware characteristics to enable learning in CPS with "small data". Specifically, we incorporate physical characteristics to guide learning and transfer data from other domains using this physical understanding. The talk illustrates these approaches through our deployment experiences in real-world settings in sports stadiums and farms.




Program

Workshop Day: November 4, 2024

  • 8:00 - 9:00

    Registration

  • 9:00 - 9:10

    Welcome

  • 9:10 - 10:10

    Keynote Speech

    • Title: Wearable AI, WearTel, and Mersivity: Technology as a vessel to interface us to each other and to our surroundings/environment
      Speaker: Steve Mann
  • 10:10 - 10:30

    Break

  • 10:30 - 11:30

    Session 1: Human-centered Wearable and Sensing Technologies

  • 15 min each, including Q&A
    • Extended Reality Waterball for Spinal Rehabilitation
      Steve Mann (MannLab Canada Inc.), Aydin Hosseingholizadeh (MannLab Canada Inc.), Nishant Kumar (MannLab Canada Inc.), Aoran Jiao (MannLab Canada Inc.), Calum Leaver-Preyra (MannLab Canada Inc.)
    • Enhancing Parkinson’s Disease Management through Automatic Personalized Assistive Systems
      Nasimuddin Ahmed (TCS Research), Aniruddha Sinha (TCS Research), Avik Ghose (TCS Research)
    • Taming the Variability of Soft Sensors
      Chien-Ti Hsiao (National Taiwan University), Tzu-Chin Ho (National Taiwan University), Hsuan-Ying Liu (National Taiwan University), Yan-Chi Lu (National Taiwan University), Kate Ching-Ju Lin (National Yang Ming Chiao Tung University), Ling-Jyh Chen (Academia Sinica, Taiwan), Polly Huang (National Taiwan University)
    • Integrating Traditional Japanese Nishijin Weaving Techniques with Modern Electronics
      Norihisa Segawa (Kyoto Sangyo University), Shuo Zhou (Kyoto Sangyo University), Masato Yazawa (Mathematical Assist Design Laboratory), Kaori Ueda (Kyoto Saga University of Arts)
  • 12:00 - 13:30

    Lunch

  • 13:30 - 14:30

    Keynote Speech

    • Title: Learning from Shitty Data: Dealing with Behavior Monitoring and Learning in Real-world Deployments
      Speaker: Pei Zhang
  • 14:30 - 15:15

    Session 2: Sensor Fusion and Embedded Systems

  • 15 min each, including Q&A
    • Underwater Ranging with a Single Smartphone
      Liu Yang (Zhejiang University), Zhi Wang (Zhejiang University)
    • Hierarchical Demand Based Resource Allocation for On-Device Inference
      Ashok Samraj Thangarajan (Nokia Bell Labs), Fahim Kawsar (Nokia Bell Labs), Alessandro Montanari (Nokia Bell Labs)
    • Energy Characterization of Tiny AI Accelerator-Equipped Microcontrollers
      Yushan Huang (Imperial College London), Taesik Gong (UNIST), SiYoung Jang (Nokia Bell Labs), Fahim Kawsar (Nokia Bell Labs & University of Glasgow), Chulhong Min (Nokia Bell Labs)
  • 15:15 - 15:30

    Closing Remarks and Best Paper Award announcement

Call for Papers

The HumanSys workshop is intended to bring together researchers, developers, and practitioners in related fields from academia, industry, and service providers to share ideas and experiences related to human-centered technologies and applications. Both visionary and work-in-progress papers are encouraged. To that end, papers are solicited from all related areas involving human-centered sensing, networking, and intelligent systems, including, but not limited to, the following topics:

  • Health, activity, gesture, and behavior monitoring and/or data analytics
  • Human-centered AI models
  • Hardware/software for human-centered applications, e.g., computer vision, VR/AR, wearables, mobile platforms, vibration/acoustics, radio-frequency sensing
  • Human-environment, human-sensor, human-AI interactions
  • Trustworthy AI and intelligent systems for user needs (e.g., privacy and security issues)
  • Human-centered design, e.g., user interface for mobile and embedded systems
  • Urban mobility and transportation systems
  • Location-based services and systems, e.g. localization, navigation, and tracking
  • Predictive control in human-intense mobile systems
  • Performance evaluation and deployment experience of human-centered systems
  • Communications and networking among multiple devices and sensors for human-centric applications
  • Data management and processing in multi-device sensing systems
  • Coordination and collaboration in multi-device systems
  • Embedded AI and tiny machine learning solutions in multi-device sensing systems
  • AI-driven sensing and computing in multi-device sensing systems
  • Techniques and algorithms for deriving insights from existing sensors
  • Novel UI, UX, and human-centric applications of multi-device sensing systems

Submission

Submitted papers must be unpublished and must not be currently under review for any other publication.

We will solicit papers in four categories:

  1. Full Papers (up to 6 pages including references) should report reasonably mature work in human-centered sensing, networking, or multi-device systems. These papers are expected to demonstrate concrete and reproducible results, even if the scale may be limited.
  2. Experience Papers (up to 4 pages including references) should present experiences with the implementation, deployment, and operation of novel sensing or networking technologies and systems for human-centered applications. Desirable papers are expected to include real data and descriptions of practical lessons learned.
  3. Short Papers (up to 2 pages including references) are encouraged to report novel and creative ideas that have yet to produce concrete research results but are at a stage where community feedback would be useful.
  4. SenSys Excerpts (up to 2 pages including references): excerpts of papers that have been presented at SenSys are welcome, to obtain feedback from the dedicated human-sensing community. These should be entitled "Excerpt of SENSYS PAPER TITLE".

All papers will be at most 6 single-spaced 8.5" x 11" pages with 10-pt font size in two-column format, including figures, tables, and references. All submissions must use the LaTeX (preferred) or Word styles found here. LaTeX submissions should use the acmart.cls template (sigconf option) with the 10-pt font. All accepted papers (regardless of category) will be included in the ACM Digital Library. All papers will be digitally available through the workshop website and the ACM SenSys 2024 Adjunct Proceedings. We will offer a "Best Paper" award, sponsored by Nokia Bell Labs, to one of the accepted papers.

Please submit your papers via this link - https://humansys24.hotcrp.com/

Important dates

  • Paper submission due: September 26th, 2024, 23:59 AOE (final extension; originally September 4th, 2024)
  • Workshop paper notification: October 2nd, 2024, 23:59 AOE
  • Camera-ready: October 7th, 2024, 23:59 AOE
  • Workshop date: November 4th, 2024 (full day)

Organization

General Chairs

  • Yiwen Dong (ywdong@stanford.edu, Stanford University, USA)
  • Zhi Wang (zjuwangzhi@zju.edu.cn, Zhejiang University, China)
  • Alessandro Montanari (alessandro.montanari@nokia-bell-labs.com, Nokia Bell Labs, UK)
  • Danny Hughes (danny.hughes@kuleuven.be, KU Leuven, Belgium)

Program Chairs

  • Yang Liu (yang.16.liu@nokia.com, Nokia Bell Labs, UK)
  • Ashok Samraj Thangarajan (ashok.thangarajan@nokia-bell-labs.com, Nokia Bell Labs, UK)
  • Sara Khalifa (sara.khalifa@qut.edu.au, Queensland University of Technology, Australia)
  • Jingping Nie (jn2551@columbia.edu, Columbia University, USA)

Publication Chair

  • Jingxiao Liu (jingxiao@mit.edu, Massachusetts Institute of Technology, USA)

Publicity and Social Media Chair

  • Harshvardhan Takawale (htakawal@umd.edu, University of Maryland College Park, USA)

Technical Program Committee

  • Khaldoon Al-Naimi (Nokia Bell Labs, UK)
  • Yang Liu (University of Cambridge, UK)
  • SiYoung Jang (Nokia Bell Labs, UK)
  • Marios Constantinides (CYENS Centre of Excellence, Cyprus)
  • Andrea Ferlini (Nokia Bell Labs, UK)
  • Dong Ma (Singapore Management University, Singapore)
  • Ananta Narayanan Balaji (Nokia Bell Labs, UK)
  • Ting Dang (The University of Melbourne, Australia)
  • Moid Sandhu (The University of Queensland, Australia)
  • Rajashekar Reddy Chinthalapani (National University of Singapore, Singapore)

Venue

HumanSys 2024 will be held as a joint workshop in conjunction with ACM SenSys and ACM BuildSys 2024 in Hangzhou, China.

For further information on accommodation, visas, and travel arrangements, please see the SenSys website at https://sensys.acm.org/2024/ and the BuildSys website at https://buildsys.acm.org/2024/

Registration

Please visit https://sensys.acm.org/2024/ and https://buildsys.acm.org/2024/ for more information.