Publications

Simple Camera-to-2D-LiDAR Calibration Method for General Use

Published in ISVC 2020, 2020

As systems that utilize computer vision move into the public domain, methods of calibration need to become easier to use. Though multi-plane LiDAR systems have proven useful for vehicles and large robotic platforms, many smaller platforms and low-cost solutions still require a 2D LiDAR combined with an RGB camera. Current methods of calibrating these sensors make assumptions about camera and laser placement and/or require complex calibration routines. In this paper we propose a new method of feature correspondence between the two sensors and an optimization method capable of using a calibration target with unknown lengths in its geometry. Our system is designed with an inexperienced layperson as the intended user, which has led us to remove as many assumptions about both the target and the laser as possible. We show that our system is capable of calibrating the two-sensor system from a single sample in configurations that other methods are unable to handle.
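To make the underlying problem concrete, here is a minimal sketch of the general camera-to-2D-LiDAR extrinsic calibration task the paper addresses: recover a rotation and translation from the LiDAR frame to the camera frame given corresponding features. The generic reprojection cost, the SciPy optimizer, and the function names below are illustrative assumptions, not the paper's correspondence scheme or its handling of targets with unknown dimensions.

```python
# Illustrative sketch only: generic extrinsic refinement by minimizing
# reprojection error, not the authors' single-sample algorithm.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def project(points_cam, K):
    """Pinhole projection of 3-D camera-frame points to pixel coordinates."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_residual(params, lidar_pts, pixel_pts, K):
    """params = [rx, ry, rz, tx, ty, tz] (axis-angle rotation + translation)."""
    rot = R.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam_pts = lidar_pts @ rot.T + t          # LiDAR frame -> camera frame
    return (project(cam_pts, K) - pixel_pts).ravel()

def calibrate(lidar_pts, pixel_pts, K, x0=np.zeros(6)):
    """lidar_pts: Nx3 scan points (z = 0 in the scan plane);
    pixel_pts: Nx2 corresponding image features; K: 3x3 camera intrinsics."""
    sol = least_squares(reprojection_residual, x0,
                        args=(lidar_pts, pixel_pts, K))
    return R.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```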

Recommended citation: Palmer A.H., Peterson C., Blankenburg J., Feil-Seifer D., Nicolescu M. (2020) Simple Camera-to-2D-LiDAR Calibration Method for General Use. In: Bebis G. et al. (eds) Advances in Visual Computing. ISVC 2020. Lecture Notes in Computer Science, vol 12510. Springer, Cham. https://doi.org/10.1007/978-3-030-64559-5_15 https://link.springer.com/chapter/10.1007/978-3-030-64559-5_15

Person Profiles and Sensor Calibration for Intent Recognition in Socially Aware Navigation

Published in ScholarWorks, 2020

Earlier work in the field of intent recognition used lasers and cameras to track people and extract physical information to train and test models of intention. With the progression of computational abilities, neural networks have made it possible to extract additional information that earlier work was not able to take advantage of, primarily person pose information. With this new information, intent recognition systems should be able to differentiate in finer detail between neighboring intents. We combine all the available information for both pose and movement descriptors to generate profiles of each person in the scene. This new pose information also makes it possible to track people in laser data more reliably. To do this tracking well, calibration between the sensors is critical, so we propose an iterative method for calibration that can produce a result not just for robots with sensors aligned toward the face of the target but also for sensor arrays that may observe the target from disparate directions. The calibration method we developed is one of only a few current methods that can solve the calibration of a laser range finder and an RGB camera with only a single sample and unknown target dimensions.
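As an illustration of the kind of per-person profile described above, combining pose-derived and movement-derived descriptors over a laser track, here is a minimal sketch. The field names, the feature set, and the finite-difference velocity descriptor are assumptions for illustration, not the thesis's actual schema.

```python
# Sketch of a per-person profile fusing pose keypoints with a laser track.
# Field names and descriptors are illustrative assumptions.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PersonProfile:
    person_id: int
    keypoints: np.ndarray = None                      # latest 2-D pose keypoints from a pose network
    positions: list = field(default_factory=list)     # world-frame positions from the laser tracker
    timestamps: list = field(default_factory=list)

    def update(self, position, stamp, keypoints):
        """Append a new laser observation and refresh the pose estimate."""
        self.positions.append(np.asarray(position))
        self.timestamps.append(stamp)
        self.keypoints = keypoints

    def velocity(self):
        """Finite-difference speed estimate used as a movement descriptor."""
        if len(self.positions) < 2:
            return np.zeros(2)
        dt = self.timestamps[-1] - self.timestamps[-2]
        return (self.positions[-1] - self.positions[-2]) / max(dt, 1e-6)
```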

Recommended citation: Andrew Palmer. (2020). "Person Profiles and Sensor Calibration for Intent Recognition in Socially Aware Navigation." ScholarWorks, UNR, Reno, Nevada, USA, May 2020. http://hdl.handle.net/11714/7401

Learning of Complex-Structured Tasks from Verbal Instruction

Published in Humanoids, 2019

This paper presents a novel approach to robot task learning from language-based instructions, which focuses on increasing the complexity of task representations that can be taught through verbal instruction. The major proposed contribution is the development of a framework for directly mapping a complex verbal instruction to an executable task representation from a single training experience. The method can handle the following types of complexities: 1) instructions that use conjunctions to convey complex execution constraints (such as alternative paths of execution, sequential or non-ordering constraints, as well as hierarchical representations) and 2) instructions that use prepositions and multiple adjectives to specify action/object parameters relevant for the task. Specific algorithms have been developed for handling conjunctions, adjectives, and prepositions, as well as for translating the parsed instructions into parameterized executable task representations. The paper describes validation experiments with a PR2 humanoid robot learning new tasks from verbal instruction, as well as an additional range of utterances that can be parsed into executable controllers by the proposed system.
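To illustrate the kind of hierarchical task representation with sequential, non-ordering, and alternative-path constraints that the abstract describes, here is a small sketch. The node types, the toy instruction, and the rendering function are assumptions for illustration, not the authors' grammar or control architecture.

```python
# Sketch of a hierarchical task tree in which conjunctions map to ordering nodes.
# Node kinds and the example parse are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskNode:
    kind: str                      # "THEN" (sequential), "AND" (non-ordering), "OR" (alternative), "ACTION"
    label: str = ""
    children: List["TaskNode"] = field(default_factory=list)

def behavior(node: TaskNode) -> str:
    """Render the tree as a nested expression an executive could walk."""
    if node.kind == "ACTION":
        return node.label
    inner = f" {node.kind} ".join(behavior(c) for c in node.children)
    return f"({inner})"

# "Pick up the red cup, then either place it on the tray or hand it to me."
task = TaskNode("THEN", children=[
    TaskNode("ACTION", "pick_up(red_cup)"),
    TaskNode("OR", children=[
        TaskNode("ACTION", "place_on(red_cup, tray)"),
        TaskNode("ACTION", "hand_to(red_cup, user)"),
    ]),
])
print(behavior(task))   # (pick_up(red_cup) THEN (place_on(red_cup, tray) OR hand_to(red_cup, user)))
```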

Recommended citation: Monica Nicolescu, Natalie Arnold, Janelle Blankenburg, David Feil-Seifer, Santosh Balajee Banisetty, Mircea Nicolescu, Andrew Palmer, Thor Monteverde. (2019). "Learning of Complex-Structured Tasks from Verbal Instruction." In the 2019 IEEE-RAS International Conference on Humanoid Robots (Humanoids), Toronto, Canada, October 2019. https://rrl.cse.unr.edu/en/pubs/?pub=83/

Perception of Social Intelligence in Robots Performing False-Belief Tasks

Published in RO-MAN, 2019

This study evaluated how a robot demonstrating a Theory of Mind (ToM) influenced human perception of social intelligence and animacy in a human-robot interaction. Data were gathered through an online survey in which participants watched a video depicting a NAO robot either failing or passing the Sally-Anne false-belief task. Participants (N = 60) were randomly assigned to either the Pass or Fail condition. A Perceived Social Intelligence Survey and the Perceived Intelligence and Animacy subsections of the Godspeed Questionnaire were used as measures. The Godspeed was given before viewing the task to measure participant expectations, and again afterward to test changes in opinion. Our findings show that robots demonstrating ToM significantly increase perceived social intelligence, while robots demonstrating ToM deficiencies are perceived as less socially intelligent.

Recommended citation: Stephanie Sturgeon, Andrew Palmer, Janelle Blankenburg, and David Feil-Seifer. (2019). "Perception of Social Intelligence in Robots Performing False-Belief Tasks." In the 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, October 2019. https://rrl.cse.unr.edu/media/documents/2019/Stephanie_REU_Perceived_Intelligence_and_Animacy_in_Robots_1.pdf