The COGNITO system is complex and involves novel customised hardware with multiple sensors, which deliver a large amount of data at high speed. The software system deals with streams of heterogeneous sensor data in real time. It monitors user activity in relation to the workspace, links this information to the underlying workflow patterns, and offers rendering capabilities in an augmented reality display. The system is mobile and wearable and is built upon generic, learning-based algorithms.
COGNITO building blocks
Five main building blocks of the COGNITO system have been identified:
- On-Body Sensor Network (BSN) and Head-Mounted Display (HMD): inertial measurement units (IMUs), cameras, and RGB-D cameras are combined in a sensor network worn on the body. A monocular see-through head-mounted display with integrated eye-tracking cameras provides the system feedback and user assistance information during workflow execution.
- Low-Level Sensor Processing: processes the measurements from the BSN in order to provide information about the operator's activity (upper body/limb motion, hand postures/positions) and the workspace (positions of relevant objects) in a global workspace coordinate frame.
- Workflow Recovery and Monitoring: processes the instantaneous information from the low-level sensor processing and provides an estimate of the current atomic action and a prediction of the next one in the considered workflow model. During the learning phase, it builds a statistical workflow model from the received data.
- Biomechanical Analysis: receives a sequence of postures and forces for the operator's upper body and hands and performs two biomechanical evaluations. An online global estimation is computed using the Rapid Upper Limb Assessment (RULA) ergonomic index. Assessing the RULA score online lets the worker correct their posture in real time whenever a movement entails significant risk of musculoskeletal disorders. An offline local estimation of muscle forces and articular loads for the hands and forearms is based on a detailed musculoskeletal model.
- User Interface: provides the means for editing recovered workflows and enriching them with descriptive information, as well as aiding the user during task execution with context-sensitive feedback using AR techniques.
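The low-level sensor processing expresses object and body positions in a global workspace coordinate frame. A minimal sketch of such a frame change, assuming each sensor has a calibrated pose (rotation R, translation t) relative to the workspace (the actual COGNITO calibration pipeline is not detailed here):

```python
import numpy as np

def to_global(p_sensor, R, t):
    """Express a point measured in a sensor frame in the global
    workspace frame: p_global = R @ p_sensor + t, where R (3x3)
    and t (3,) come from the sensor's calibrated pose.
    Illustrative only; values below are made up."""
    return R @ np.asarray(p_sensor) + t

# Example: sensor rotated 90 degrees about z, offset 1 m along x
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
print(to_global([0.5, 0.0, 0.2], R, t))  # -> [1.  0.5 0.2]
```

Fusing the BSN measurements into one such frame is what lets the workflow and biomechanics modules reason about hands, limbs, and objects jointly.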
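The workflow recovery block learns a statistical model of the task and predicts the next atomic action. As one simple illustration of the idea (COGNITO's actual statistical model is richer than this, and the action names are hypothetical), a first-order transition model can be learned from observed action sequences and queried for the most likely successor:

```python
from collections import defaultdict

class WorkflowModel:
    """Toy first-order transition model over atomic actions,
    learned from example sequences (illustrative sketch only)."""

    def __init__(self):
        # counts[current_action][next_action] = observed frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, sequences):
        """Accumulate transition counts from action sequences."""
        for seq in sequences:
            for cur, nxt in zip(seq, seq[1:]):
                self.counts[cur][nxt] += 1

    def predict_next(self, current):
        """Return the most frequently observed successor, or None."""
        successors = self.counts[current]
        if not successors:
            return None
        return max(successors, key=successors.get)

model = WorkflowModel()
model.learn([
    ["pick_screw", "position_screw", "tighten_screw"],
    ["pick_screw", "position_screw", "tighten_screw"],
    ["pick_screw", "inspect", "position_screw", "tighten_screw"],
])
print(model.predict_next("position_screw"))  # prints "tighten_screw"
```

During the learning phase the model is built from recorded executions; during monitoring, the predicted next action drives the assistance shown on the HMD.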
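The online ergonomic evaluation rates posture with the RULA index. The full method combines several body segments through lookup tables; as a hedged fragment, the published upper-arm score can be derived from the flexion angle alone (adjustments for shoulder raise, abduction, and the remaining segments are omitted here):

```python
def rula_upper_arm_score(flexion_deg: float) -> int:
    """Simplified RULA upper-arm score from the flexion angle in
    degrees (negative = extension). Thresholds follow the published
    RULA method; posture adjustments are omitted in this sketch."""
    if -20 <= flexion_deg <= 20:
        return 1  # near neutral
    if flexion_deg < -20 or flexion_deg <= 45:
        return 2  # extension beyond 20 deg, or flexion 20-45 deg
    if flexion_deg <= 90:
        return 3  # flexion 45-90 deg
    return 4      # flexion beyond 90 deg

print(rula_upper_arm_score(30))  # prints 2
```

Computing such scores continuously from the tracked upper-body posture is what allows the system to warn the worker in real time rather than only in a post-hoc report.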