The 6th IEEE Workshop on
Human-in-the-Loop Methods and Future of Work in BigData
(IEEE HMData 2022)

co-located with IEEE Bigdata 2022
Osaka, Japan, Dec. 17 or 20 (Planned)
(Online Workshop)


About IEEE HMData 2022


The HMData workshop, which originally started as the "Human-Machine Collaboration in BigData" workshop, investigates the opportunities and challenges of human-machine collaboration in work with big data, described by two terms: Human-in-the-Loop Methods and Future of Work. Human-in-the-Loop focuses on the employer's viewpoint, while Future of Work focuses more on the worker's viewpoint; in both, the division of labor between humans and machines is a key issue. This area is likely to be heavily AI-driven, and we invite papers covering the following aspects: (1) capturing human capabilities through intelligent models and adapting those models to changing perceptions, needs, and skills; (2) high-level tools that allow all stakeholders in the new ecosystem, including regulators for policies and AI workers, to specify their requirements; (3) system design and engineering of job platforms for the collection, storage, retrieval, and analysis of the data deluge about workers, jobs, and their activities; and (4) benchmarking and the development of appropriate metrics to measure system performance as well as human aspects, such as satisfaction, capital advancement, and equity.

We welcome interesting ideas and results on any relevant topic, but this workshop particularly encourages papers whose results have been, or will be, implemented as platforms, tools, and libraries. This year, we plan to have a thematic session on improving the interoperability of tools for Human-in-the-Loop Methods and Future of Work. We also solicit practitioner papers as well as research papers, in order to facilitate discussion between researchers who have solutions and practitioners who know the problems. All papers accepted for the workshop will be included in the Workshop Proceedings published by the IEEE Computer Society Press, made available at the Conference.

Journal publication of selected papers

After the conference, high-quality papers will be selected and recommended for possible publication in a special issue of Information Systems Frontiers (Springer).


This workshop covers a wide range of topics in human-machine collaboration in work with big data. Keywords include: crowdsourcing, collaborative recommendation, crowdsensing, workflow models for humans and machines, incentives, human-assisted big data analysis, big data-human interaction, human-machine collaboration in real-world applications (such as natural disaster response, education, and citizen science), and ELSI in Human-in-the-Loop systems and Future of Work. We expect submissions to address some of the following issues:
  1. capturing human characteristics and capabilities,
  2. stakeholder requirement specification,
  3. social processes around the human-in-the-loop systems,
  4. platforms and ecosystems,
  5. computation capabilities, and
  6. benchmarks and metrics for human-in-the-loop systems and Future of Work.


Keynote

Mitigating Biases in Crowdsourcing Data Collection
Ming Yin (Purdue University)

Abstract: Data has become the secret sauce for the rapid progress of artificial intelligence (AI). Over the past decade, crowdsourcing has become a prevalent paradigm for obtaining data from people to enhance machine intelligence. However, a growing line of literature shows that data collected through crowdsourcing efforts can have significant biases, which not only decrease the quality of the data but may also negatively impact the downstream algorithmic models built on these data. Many factors can contribute to biases in crowdsourced data, including the composition of the dataset on which annotations are solicited from the crowd (e.g., sampling bias of the dataset) and the cognitive and behavioral limitations the crowd is subject to when providing annotations (e.g., cognitive bias, affective bias, social bias). In this talk, I'll present some of our recent efforts in mitigating biases throughout the crowdsourcing data collection lifecycle.

Bio: Ming Yin is an Assistant Professor in the Department of Computer Science, Purdue University. Her research broadly connects to the fields of human-computer interaction, applied artificial intelligence and machine learning, computational social science, and behavioral sciences. She uses both experimental and computational approaches to examine how to better utilize the wisdom of the crowd to enhance machine intelligence (i.e., crowdsourcing and social computing), and how to better design intelligent systems that people can understand, trust, and engage with effectively (i.e., human-AI interaction). Prior to Purdue, she spent a year at Microsoft Research New York City as a postdoctoral researcher in the Computational Social Science group. She completed her Ph.D. in Computer Science at Harvard University and received her bachelor's degree from Tsinghua University, Beijing, China.



Important Dates

  • Oct 14 (Fri), 2022: Workshop paper submission deadline (extended)
    (Authors must submit the title and abstract by Oct 7 (Fri))
  • Nov 10 (Thu), 2022: Notification of paper acceptance to authors
  • Nov 27 (Sun), 2022: Camera-ready deadline for accepted papers (extended)
  • Dec 17-20 (Sat-Tue), 2022: Workshops


All submissions must be made electronically through the submission page (to be opened in August). Please prefix the Title of Paper field with your submission category, such as [Research Paper]. For example, to submit a project-in-progress paper titled "Crowd-centric Approach to Digital Archive Maintenance," enter "[Project-in-progress Paper] Crowd-centric Approach to Digital Archive Maintenance" in the Title of Paper field.

Submission Categories

  • Research Papers (*) (long presentation): They report significant and original results relevant to the scope of this workshop. We solicit innovative or thought-provoking work, which does not necessarily have to reach the level of completion. The expected length is 4 to 6 pages; the maximum length is 10 pages, though the paper's length should be commensurate with the size of its contribution.
  • Practitioner Papers (*) (long presentation): They present interesting problems that require human-in-the-loop solutions in a variety of application domains, or interesting results of applying existing human-in-the-loop solutions to those domains. The expected length is 4 to 6 pages; the maximum length is 10 pages, though the paper's length should be commensurate with the size of its contribution.
  • Project-in-progress papers (short presentation): They present the goals, challenges, and preliminary results of research or real-world projects in progress. The maximum length is 3 pages.
(*) Some of the papers submitted to the research or practitioner paper categories may be accepted as project-in-progress papers and allotted to short presentation slots.


Papers should be formatted according to the IEEE Computer Society Proceedings Manuscript Formatting Guidelines linked from the IEEE Bigdata 2022 CFP page.



Organizers

Senjuti Basu Roy (NJIT)
Alex Quinn (Purdue University)
Atsuyuki Morishima (University of Tsukuba)

Program Committee

  • Yukino Baba (The University of Tokyo)
  • Wolf-Tilo Balke (Technische Universitaet Braunschweig)
  • Ria Mae Borromeo (University of the Philippines Open University)
  • Francois Charoy (University of Lorraine, Inria, CNRS)
  • Marina Danilevsky (IBM Research - Almaden)
  • Ashraf Dewan (Curtin University)
  • Gianluca Demartini (University of Queensland)
  • David Gross Amblard (Rennes 1 University / IRISA Lab)
  • Itaru Kitahara (University of Tsukuba)
  • Vana Kalogeraki (Athens University of Economics and Business)
  • Masaki Matsubara (University of Tsukuba)
  • Shigeo Matsubara (Osaka University)
  • Satoshi Oyama (Hokkaido University)
  • Raghav Rao (University of Texas at San Antonio)
  • Yu Suzuki (Gifu University)
  • Keishi Tajima (Kyoto University)
  • Hisashi Toriya (Akita University)
  • Demetris Zeinalipour (University of Cyprus)