CAM-STITCH: Trajectory Cavity Stitching Method for Stereo Vision Cameras in a Public Building

Anooshmita Das, Emil Stubbe Kolvig-Raun, Mikkel Baun Kjærgaard

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Sensor data from building systems can be transformed into meaningful information and used to build data-driven reference models of occupant behavior and actions with Machine Learning (ML) and Deep Learning (DL) techniques. Such models can support notable applications such as controlling heating, ventilation, and air conditioning (HVAC) parameters, automated lighting, safety and security, and efficient space utilization. In this paper, we deployed 3D Stereo Vision Cameras from Xovis in a public building to capture trajectory data for multiple occupants. When there are inconsistencies or gaps in the cameras' fields of view (FoV), continuous tracking of multiple occupants becomes challenging. If the inconsistencies persist, a single occupant is assigned multiple occupant IDs, which makes both the occupancy counts and the tracking measurements within the monitored area inaccurate. To overcome these inconsistencies in the FoV of the deployed cameras, we propose the CAM-STITCH algorithm, which enables multi-sensor stitching of occupant trajectories using a Long Short-Term Memory (LSTM) model, a variant of the Recurrent Neural Network. CAM-STITCH is evaluated by computing the Root Mean Squared Error (RMSE) between the measured and predicted position coordinates (x, y), yielding an average RMSE of 9.53 centimeters on trajectory 1 and 12.72 centimeters on trajectory 2 for multiple occupants. CAM-STITCH is also designed to handle dynamic occlusions affecting the 3D Stereo Vision Cameras. The proposed algorithm helps ensure that the data gathered from the building is reliable for accurate trajectory measurements and can further support intelligent building operations.
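
The abstract describes predicting occupant positions with an LSTM and scoring the result with RMSE between measured and predicted (x, y) coordinates. Below is a minimal sketch of that idea, not the authors' implementation: a small Keras LSTM is trained on sliding windows of a synthetic trajectory (the window size, layer sizes, and trajectory are illustrative assumptions), and an RMSE over the Euclidean position error is reported; the paper's exact architecture and RMSE formulation may differ.

# Minimal sketch (assumptions noted in comments), not the authors' CAM-STITCH code.
import numpy as np
import tensorflow as tf

WINDOW = 8  # assumed number of past (x, y) samples used to predict the next position

def make_windows(track, window=WINDOW):
    """Slice a trajectory of shape (T, 2) into (samples, window, 2) inputs
    and (samples, 2) next-position targets."""
    X, y = [], []
    for i in range(len(track) - window):
        X.append(track[i:i + window])
        y.append(track[i + window])
    return np.array(X, dtype=np.float32), np.array(y, dtype=np.float32)

# Synthetic stand-in for one occupant trajectory in centimeters, shape (T, 2).
t = np.linspace(0, 4 * np.pi, 400)
track = np.stack([100 * t, 150 * np.sin(t)], axis=1).astype(np.float32)
X, y = make_windows(track)

# Small LSTM regressor mapping a window of past positions to the next (x, y).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 2)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

# RMSE between measured and predicted coordinates; here computed over the
# Euclidean position error, which is one plausible formulation.
pred = model.predict(X, verbose=0)
rmse = np.sqrt(np.mean(np.sum((pred - y) ** 2, axis=1)))
print(f"RMSE: {rmse:.2f} cm")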
Original language: English
Title of host publication: Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things
Place of publication: New York
Publisher: Association for Computing Machinery
Publication date: 10 Nov 2019
Pages: 8-14
ISBN (Print): 9781450370134
DOI: 10.1145/3363347.3363358
Publication status: Published - 10 Nov 2019
Event: 17th ACM Conference on Embedded Networked Sensor Systems - New York, United States
Duration: 10 Nov 2019 - 13 Nov 2019

Conference

Conference: 17th ACM Conference on Embedded Networked Sensor Systems
Country: United States
City: New York
Period: 10/11/2019 - 13/11/2019

Keywords

  • Occupancy Presence
  • Sensing Modalities
  • Pattern Recognition
  • Trajectory Cavity Stitching
  • Deep Learning
  • LSTM

Cite this

Das, A., Kolvig-Raun, E. S., & Kjærgaard, M. B. (2019). CAM-STITCH: Trajectory Cavity Stitching Method for Stereo Vision Cameras in a Public Building. In Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things (pp. 8-14). New York: Association for Computing Machinery. https://doi.org/10.1145/3363347.3363358