UNSW RoboCup@Home SPL Proposal Webpage

The UNSW Australia RoboCup@Home SPL team was established in 2016. This webpage accompanies our written proposal to compete in the RoboCup@Home SPL.

Contents

Team Members

Relevant Scientific Research and Contributions

Position Tracking and SLAM

Our research group has a long history of research in position tracking and SLAM, particularly for autonomous robots in the RoboCupRescue Robot competition. Most recently, our research has focused on implementing position tracking and SLAM algorithms on GPUs using full 3D information to produce correctly aligned and accurate 3D maps. Much of this work carries across to RoboCup@Home, since accurate 3D position tracking and mapping are essential for navigation and obstacle avoidance through the home. Combined with our work on spatial reasoning, this also assists in planning and model-based object recognition.
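
As a rough illustration of the position-tracking step, the sketch below performs a single least-squares rigid alignment of two 2D point sets, which is the core operation iterated inside ICP-style tracking. It is a minimal, CPU-only Python sketch; our published systems operate on 3D occupancy voxels and run on the GPU, and the data here is hypothetical.

    import numpy as np

    def align_rigid_2d(source, target):
        """One least-squares rigid alignment step (the core of an ICP iteration).
        source, target: (N, 2) arrays of corresponding points.
        Returns R, t such that R @ source[i] + t approximately equals target[i]."""
        src_centroid = source.mean(axis=0)
        tgt_centroid = target.mean(axis=0)
        src_c = source - src_centroid
        tgt_c = target - tgt_centroid
        # Cross-covariance and SVD give the optimal rotation (Kabsch method).
        H = src_c.T @ tgt_c
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_centroid - R @ src_centroid
        return R, t

    # Hypothetical example: a scan rotated by 10 degrees and shifted.
    theta = np.radians(10.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    scan = np.random.rand(100, 2)
    moved = scan @ R_true.T + np.array([0.5, -0.2])
    R_est, t_est = align_rigid_2d(scan, moved)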

Publications include:

  • A. Ratter, C. Sammut. Fused 2D/3D Position Tracking for Robust SLAM on Mobile Robots. International Conference on Intelligent Robots and Systems, 2015.
  • A. Ratter, C. Sammut. Local Map Based Graph SLAM with Hierarchical Loop Closure and Optimisation. Australasian Conference on Robotics and Automation, 2015.
  • A. Ratter and C. Sammut, GPU Accelerated Parallel Occupancy Voxel Based ICP for Position Tracking, 2013 Australasian Conference on Robotics and Automation, Sydney, Australia, pp. 394–403, 2013.
  • A. Ratter, C. Sammut, M. McGill. GPU Accelerated Graph SLAM and Occupancy Voxel Based ICP For Encoder-Free Mobile Robots. International Conference on Intelligent Robots and Systems, 2013
  • A. Milstein, M. McGill, T. Wiley, R. Salleh, and C. Sammut, A Method for Fast Encoder-Free Mapping in Unstructured Environments, Journal of Fields Robotics: Special Issue on Safety, Security, and Rescue Robotics, vol. 28, no. 6, pp. 817–831, 2011
  • A. Milstein, M. McGill, T. Wiley, R. Salleh, and C. Sammut, Occupancy voxel metric based iterative closest point for position tracking in 3D environments, IEEE International Conference on Robotics and Automation, pp. 4048 – 4053, 2011.
2D and 3D SLAM maps of the CSE Robotics Lab

Software Architectures for Autonomous Robots

Our research group was an early proponent of blackboard architectures for sharing data between a robot’s cognitive processes. A blackboard provides a global shared memory facility that acts both as a communications infrastructure and as a storage mechanism for the robot. The blackboard typically offers publisher-subscriber asynchronous communication as well as synchronous query-answer communication. The blackboard’s storage provides both short-term and long-term memory.
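
A minimal sketch of the blackboard idea is shown below: modules publish values under named keys, subscribers are notified asynchronously, and any module can query the stored value synchronously. The class and method names are hypothetical and greatly simplified relative to our actual architecture.

    from collections import defaultdict

    class Blackboard:
        """Toy blackboard: shared storage plus publish/subscribe and query."""

        def __init__(self):
            self._store = {}                       # doubles as short/long-term memory
            self._subscribers = defaultdict(list)  # key -> list of callbacks

        def publish(self, key, value):
            """Asynchronous-style update: store the value and notify subscribers."""
            self._store[key] = value
            for callback in self._subscribers[key]:
                callback(key, value)

        def subscribe(self, key, callback):
            self._subscribers[key].append(callback)

        def query(self, key, default=None):
            """Synchronous query of the shared memory."""
            return self._store.get(key, default)

    # Hypothetical usage: a vision module posts an object pose and a planner reacts.
    bb = Blackboard()
    bb.subscribe("cup_pose", lambda k, v: print("planner sees", k, "=", v))
    bb.publish("cup_pose", (1.2, 0.4, 0.9))
    print(bb.query("cup_pose"))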

More recently our group has developed a novel meta-model for formalising cognitive hierarchies. At its most basic level a cognitive hierarchy consists of a set of nodes connected together in a hierarchical graph. Every node in the hierarchy is involved in world-modelling and behaviour generation and represents a particular abstraction of the world, with the lowest-level node acting as a proxy for the (actual) external world.

Our formalisation is described as a meta-model as it provides the formal structure for a node and hierarchy but does not commit to any particular representation for node internals. For example, it requires that a node consists of a set of possible beliefs but does not restrict the form of those beliefs. In this way the formalisation provides the flexibility to instantiate and integrate nodes of arbitrary structures, such as symbolic as well as sub-symbolic reasoning mechanisms. Furthermore, the formal nature of the model provides the basis from which to formally prove properties of the system as a whole.
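
The sketch below illustrates the shape of this meta-model without committing to any particular belief representation: each node keeps a set of beliefs, updates them from the node below (world modelling) and passes commands back down (behaviour generation). All class, function and parameter names are hypothetical.

    class Node:
        """One node of a cognitive hierarchy: beliefs plus two functions."""

        def __init__(self, name, update_beliefs, generate_action):
            self.name = name
            self.beliefs = set()                    # representation deliberately unconstrained
            self.update_beliefs = update_beliefs    # (beliefs, percept) -> new beliefs
            self.generate_action = generate_action  # (beliefs, command) -> command for node below

    class Hierarchy:
        """Nodes ordered from lowest (proxy for the external world) to highest."""

        def __init__(self, nodes):
            self.nodes = nodes                      # nodes[0] is the lowest-level node

        def tick(self, raw_percept, goal):
            # World modelling: percepts flow upwards, each node abstracting the one below.
            percept = raw_percept
            for node in self.nodes:
                node.beliefs = node.update_beliefs(node.beliefs, percept)
                percept = node.beliefs
            # Behaviour generation: commands flow back down the hierarchy.
            command = goal
            for node in reversed(self.nodes):
                command = node.generate_action(node.beliefs, command)
            return command                          # issued to the external world by the lowest node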

Publications include:

  • B. Hengst, C. Keith, M. Pagnucco, D. Rajaratnam, P. Robinson, C. Sammut, and M. Thielscher, A framework for integrating symbolic and sub-symbolic representations. 25th International Joint Conference on Artificial Intelligence IJCAI-16. New York, New York, USA, 2016.
  • D. Rajaratnam, B. Hengst, M. Pagnucco, C. Sammut and M. Thielscher. Composability in Cognitive Hierarchies. Proceedings of the Australasian Joint Conference on Artificial Intelligence, 2016.
  • C. Sammut, The child machine vs the world brain. Informatica, vol. 37 pp. 3–8, 2013.
  • A. Haber and C. Sammut, A Cognitive Architecture for Autonomous Robots, Advances in Cognitive Systems, vol. 2, pp. 257–275, 2012.
  • A. Haber, and C. Sammut, Towards a cognitive architecture for extended robot autonomy. Advances in Cognitive Systems, Palo Alto, CA, 2012.
  • C. Sammut, When do robots have to think? Advances in Cognitive Systems vol. 1 pp. 73–81, 2012.
The Mala architecture and an example cognitive hierarchy
Demonstration of instantiating our cognitive hierarchy to solve a blocksworld puzzle

Learning Robot Behaviours

A major focus of our group is developing machine learning techniques for robotics. Manually programming new robot behaviours is very challenging. A robot that can learn its own behaviours can adapt better and more quickly to new situations and tasks, without relying on domain experts and programmers. Such learning is especially beneficial for robots in home environments, as end users could teach their robots new skills rather than wait for teams of experts to design and implement new software.

We have developed several robot learning systems using learning from demonstration, explanation-based learning and reinforcement learning. Often these are combined, for example, using observations of another agent to create an abstract description of a behaviour and then using that description to guide trial-and-error learning to refine the behaviour. This hybrid paradigm has been applied to locomotion for bipedal robots and rescue robots. It has also been used to learn how to use simple objects as tools, for example, learning that a hook-shaped object can be used to retrieve another object that is out of reach of the robot’s hand.
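
The sketch below illustrates the general pattern of this hybrid paradigm: a demonstration seeds an initial action preference, and a simple Q-learning loop then refines the behaviour by trial and error in a toy one-dimensional task. The environment, states, actions and rewards are all hypothetical and far simpler than the robotic tasks described above.

    import random
    from collections import defaultdict

    ACTIONS = [-1, +1]          # toy task: move left/right along a line to reach position 5
    GOAL = 5

    def step(state, action):
        next_state = max(0, min(10, state + action))
        reward = 1.0 if next_state == GOAL else -0.1
        return next_state, reward, next_state == GOAL

    # 1. Learning from demonstration: bias Q-values towards the demonstrated actions.
    Q = defaultdict(float)
    demonstration = [(0, +1), (1, +1), (2, +1), (3, +1), (4, +1)]   # hypothetical trace
    for state, action in demonstration:
        Q[(state, action)] += 1.0

    # 2. Reinforcement learning: refine the behaviour by trial and error.
    alpha, gamma, epsilon = 0.5, 0.9, 0.1
    for episode in range(200):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state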

Publications include:

  • T. Wiley, C. Sammut, B. Hengst, and I. Bratko, A Planning and Learning Hierarchy using Qualitative Reasoning for the On-Line Acquisition of Robotic Behaviors. Advances in Cognitive Systems. 4, pp. 93-112, 2016.
  • T. Wiley, C. Sammut, and I. Bratko. Qualitative Planning with Quantitative Constraints for Online Learning of Robotic Behaviours. Proceedings of the 28th AAAI Conference on Artificial Intelligence. Quebec City, Canada, pp. 2578-2584, 2014.
  • S. Brown, and C. Sammut, A Relational Approach to Tool-use Learning in Robots. Inductive Logic Programming, pp. 1–15. Springer Berlin Heidelberg, 2013.
  • C. Sammut, R. K.-M. Sheh, A. Haber, and H. Wicaksono, The Robot Engineer, Proceedings of the 25th International Conference on Inductive Logic Programming, 2015.
  • R. K.-M. Sheh, B. Hengst, and C. Sammut, Behavioural Cloning for Driving Robots over Rough Terrain, 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 732–737, 2011.
  • S. Brown and C. Sammut, Learning tool use in robots, Advances in Cognitive Systems: Papers from the AAAI Fall Symposium, Menlo Park, CA, USA, pp. 58–65, 2011.
Our hierarchy for learning new robot behaviours
Tool-use learning demonstration
Demonstration of a robot learning to climb a low step, a high step and a staircase

General Game-Playing Robots

Our research group has particular expertise in the area of General Game-Playing (GGP) and we have been active in extending this notion to the robotics domain. GGP aims to create AI systems that can understand and learn to play new games without human intervention. Extending GGP to the robotics domain requires moving beyond a virtual, abstract environment to game playing within an embodied physical environment.

This work is relevant to RoboCup@Home because many domestic robotic tasks have game-like properties, requiring the robot to reason about the goals of other agents and to adapt to unexpected changes in the environment. For example, a domestic robot tasked with fetching an item has to consider the possibility that the item may not be where it expects, or that the human operator may change location after issuing the request. Viewing such a task as a game provides a framework for developing more natural and intuitive robot behaviours.
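
As a toy illustration of the execution-monitoring view, the loop below checks each observation against what the current plan assumed and replans when they disagree, for example when the requested item has been moved. The plan format, predicates and planner stub are simplified, hypothetical stand-ins for the GGP-based machinery described in the publications below.

    def make_plan(goal, beliefs):
        """Hypothetical planner stub: go to the item's believed location, then pick it up."""
        return [("goto", beliefs["item_location"]), ("pickup", "item")]

    def execute_with_monitoring(goal, beliefs, observe, act):
        plan = make_plan(goal, beliefs)
        while plan:
            observation = observe()                         # e.g. {"item_location": "table2"}
            if observation != {k: beliefs[k] for k in observation}:
                beliefs.update(observation)                 # unexpected change: revise beliefs
                plan = make_plan(goal, beliefs)             # ... and replan
                continue
            act(plan.pop(0))                                # expectation met: execute next step

    # Hypothetical run: the item moves from table1 to table2 after the first action.
    observations = iter([{"item_location": "table1"}, {"item_location": "table2"},
                         {"item_location": "table2"}, {"item_location": "table2"}])
    execute_with_monitoring("fetch item",
                            beliefs={"item_location": "table1"},
                            observe=lambda: next(observations),
                            act=lambda a: print("executing", a))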

Publications include:

  • D. Rajaratnam and M. Thielscher. Execution Monitoring as Meta-Games for General Game-Playing Robots. Proceedings of the 24th International Joint Conference on Artificial Intelligence, 2015.
  • D. Rajaratnam and M. Thielscher. Towards General Game-Playing Robots: Models, Architectures and Game Controller. Proceedings of Australasian Joint Conference on Artificial Intelligence, 2013.
An architecture for a general game-playing robot, integrating the robot (ROS) components with the GGP reasoner

Human-Robot Interaction

A fundamental aspect of social robots, such as those designed to be deployed in homes, is the interaction between a human and a robot. This interaction includes how humans and robots communicate with each other through speech, sound, music, gestures, body movements, proximity, facial expressions, body language and touch. Getting this design right is crucial, as a poorly designed interaction will discourage people from using the robot, rendering it useless. Our research studies methods to improve human-robot interaction, with a focus on using touch or gestures and on reading human emotions.

A large body of our research in human-robot interaction is conducted at the Creative Robotics Lab at UNSW, founded in 2013. This lab is run by A/Prof. Mari Velonaki, who was voted one of the top 25 women in robotics that you “need to know about” by RoboHub.org. Mari also started the first centre for social robotics at the Australian Centre for Field Robotics in 2006.

Publications include:

  • A. Ball, D. Rye, D. Silvera-Tawil, and M. Velonaki, Group vs. individual comfort when a robot approaches, in A. Tapus, E. André, J-C. Martin, F. Ferland & M. Ammi (eds.), Social Robotics, Springer, pp. 41-50, 2015.
  • D. Silvera-Tawil, D. Rye, M. Soleimani, and M. Velonaki, Electrical Impedance Tomography for Artificial Sensitive Robotic Skin: A Review, IEEE Sensors Journal, vol. 15, no. 4, pp. 2001-2016, 2015.
  • D. Silvera-Tawil, D. Rye, and M. Velonaki, Artificial Skin and Tactile Sensing for Socially Interactive Robots: A Review, Robotics and Autonomous Systems, vol. 63, no. 3, pp. 230-243, 2015.
  • D. Silvera-Tawil, D. Rye, and M. Velonaki, Human-robot Interaction with Humanoid Diamandini using an Open Experimentation Method, Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, 2015.
  • M. Velonaki, R. Thapliya, D. Rye, D. Silvera-Tawil, and K. Watanabe, Social HRI: Overcoming Barriers through Appearance, Behaviour and Context-Based Design, Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, 2015.
  • K. S. Haring, D. Silvera-Tawil, K. Watanabe, M. Velonaki, and Y. Matsumoto, Touching an android robot: Would you do it and how? Proceedings of the International Conference on Control, Automation and Robotics, 2015.
  • M. Velonaki, Human-robot interaction in prepared environments: Introducing an element of surprise by reassigning identities in familiar objects, in N. Lee (ed.), Digital Da Vinci: Computers in the Arts and Sciences, Springer, pp. 21–64, 2014.
  • D. Silvera Tawil, D. Rye and M. Velonaki, Interpretation of social touch on an artificial arm covered with an EIT-based sensitive skin, International Journal of Social Robotics, vol. 6, no 4, pp. 489-505, 2014.
Fish-Bird interactive wheelchair display
Diamandini robot for observing human-robot interactions, typically in museums
Geminoid interactive conversational robot
Diamandini on display at the Victoria and Albert Museum in London in 2013


National Facility for Human-Robot Interaction Research

As part of our research into human-robot interaction, we are building a new National Facility for Human-Robot Interaction Research. It will be a state-of-the-art facility for non-intrusive, real-time measurement of the properties that are linked to human affect and intent. The facility is led by the Creative Robotics Lab at UNSW, bringing together roboticists, media artists and designers, computer scientists, psychologists and medical researchers from UNSW, the University of Sydney, the University of Technology Sydney and St. Vincent's Hospital.

The facility will open in the first half of 2017.

Autonomous Adaptation and Trust

Trust becomes a critical issue for humans working with robots, especially when the robots can autonomously learn and adapt to new situations. The behaviour of these kinds of machines cannot be formally verified in advance. We propose to study the change in trust for a mixed-initiative task under varying degrees of transparency of the adaptation process.

The two main research contributions are:

  1. The design and development of a robotic cognitive architecture that includes the ability for the robot to adapt autonomously to a change in the task environment. We instantiate the architecture using a Baxter robot for participation in a mixed-initiative task where the environment changes, requiring the robot to adapt on the job.
  2. Modelling and evaluating the evolving human-robot trust relationship as the robot learns on the job.

Conversational Agents

Our AI research group also conducted early work on software for the smart home, with a particular focus on conversational agents. Through this work we developed Framescript, a rule-based system for processing speech as well as other multi-modal inputs.
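
The sketch below shows the flavour of a single rule-based dialogue step: patterns with named slots are matched against an utterance (from speech or another modality) and the first matching rule fires an action. The syntax is ordinary Python rather than actual Framescript, and the rules and device names are hypothetical.

    import re

    # Each rule pairs a pattern (with named slots) with an action over the matched slots.
    RULES = [
        (re.compile(r"turn (?P<state>on|off) the (?P<device>\w+)", re.I),
         lambda m: f"setting {m['device']} -> {m['state']}"),
        (re.compile(r"what is the (?P<sensor>temperature|time)", re.I),
         lambda m: f"querying {m['sensor']}"),
    ]

    def process(utterance):
        """Return the action fired by the first matching rule, or a fallback."""
        for pattern, action in RULES:
            match = pattern.search(utterance)
            if match:
                return action(match.groupdict())
        return "sorry, I did not understand that"

    print(process("Please turn on the lights"))   # setting lights -> on
    print(process("What is the temperature?"))    # querying temperature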

Our research resulted in two patent applications:

  • C. Sammut, M. W. Kadous, and M. McGill, A system and method for processing multi-modal input, Australian Patent 2006903132 (lapsed), Smart Internet Technology CRC Pty Ltd, 2006.
  • C. Sammut and M. W. Kadous, A system, apparatuses, method and a computer program for facilitating sharing of electronic information, Australian Patent 2005901373 (lapsed), Smart Internet Technology CRC Pty Ltd, 2005.

Publications include:

  • M. W. Kadous, and C. Sammut, Inca: A mobile conversational agent, Proceedings of the 8th Pacific Rim International Conference on Artificial Intelligence, pp. 644-653, Auckland, New Zealand, 2004.
INCA: an early conversational agent developed for the Smart Internet Technology Cooperative Research Centre
Demonstration of a conversational agent controlling devices in a smart home (Smart Internet Technology Cooperative Research Centre)


Robot Control and Reasoning

Our group has a strong background in formal logic based AI and particularly in the area of reasoning about knowledge and action. The expertise in this area extends from research into abstract representation formalisms for actions and knowledge, through to pragmatic issues of implementing reasoning for the purposes of controlling robot behaviour.

Action formalisms such as the situation calculus and the fluent calculus provide a mechanism to represent and reason about dynamic domains. This makes them well-suited for robotics, where they can be used for planning robot actions as well as execution monitoring at a high level of abstraction. In this context it is important to have an explicit representation of the agent’s knowledge and belief, which typically reflect only an incomplete fragment of the real world and may even be incorrect. In particular, it is important for the robot to be aware of what it does not know or is uncertain about, and how new knowledge can be obtained to fill these gaps. For example, finding out what items are on a table may require the robot to inspect the table from different positions. Epistemic reasoning is also particularly important in a domestic environment, where a robot has to interact with human agents and therefore has to reason not only about its own knowledge but also about the knowledge of other agents.

Our group has been actively involved in efforts to bring these theories of action and knowledge into practice. For instance, we have contributed to the development of ROSoClingo, an adaptation of a high-performance Answer Set Programming (ASP) reasoner for use in the ROS open-source robot framework, and used ASP to implement an epistemic variant of the situation calculus. We are also currently working on a reasoning system for first-order epistemic reasoning.
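
The fragment below gives the flavour of ASP-based task reasoning using the clingo Python API (assuming the clingo package is installed). ROSoClingo wraps a reasoner of this kind behind a ROS action interface; the program and predicate names here are purely illustrative and not taken from the actual system.

    import clingo

    # A tiny, illustrative delivery problem: visit every room that has a request.
    PROGRAM = """
    room(office1; office2; kitchen).
    request(office1).  request(kitchen).
    time(1..3).

    % Choose exactly one room to visit at each time step.
    1 { visit(R, T) : room(R) } 1 :- time(T).

    % Every requested room must eventually be visited.
    visited(R) :- visit(R, T).
    :- request(R), not visited(R).

    #show visit/2.
    """

    def on_model(model):
        print("plan:", [str(atom) for atom in model.symbols(shown=True)])

    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=on_model)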

Publications include:

  • C. Schwering, T. Niemüller, G. Lakemeyer, N. Abdo, W. Burgard. Sensor Fusion in the Epistemic Situation Calculus. Journal of Experimental & Theoretical Artificial Intelligence, 2016.
  • C. Schwering, G. Lakemeyer. Decidable Reasoning in a First-Order Logic of Limited Conditional Belief. Proceedings of the Twenty-Second European Conference on Artificial Intelligence, 2016.
  • B. Andres, D. Rajaratnam, O. Sabuncu, and T. Schaub. Integrating ASP into ROS for Reasoning in Robots.  International Conference on Logic Programming and Nonmonotonic Reasoning, 2015.
  • C. Schwering, G. Lakemeyer, M. Pagnucco. Belief Revision and Progression of Knowledge Bases in the Epistemic Situation Calculus. Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
  • C. Schwering, G. Lakemeyer. Projection in the Epistemic Situation Calculus with Belief Conditionals. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
  • D. Rajaratnam, H. J. Levesque, M. Pagnucco, M. Thielscher. Forgetting in Action. International Conference on Principles of Knowledge Representation and Reasoning, 2014.
  • M. Pagnucco, D. Rajaratnam, H. Strass, and M. Thielscher. Implementing Belief Change in the Situation Calculus and an Application. International Conference on Logic Programming and Nonmonotonic Reasoning, 2013.
  • S. Shapiro, M. Pagnucco, Y. Lespérance, and H. J. Levesque. Iterated belief change in the situation calculus. Artificial Intelligence vol. 175, no.1, pp. 165-192, 2011.
  • M. Pagnucco, D. Rajaratnam, H. Strass, and M. Thielscher. How to Plan When Being Deliberately Misled.  Automated Action Planning for Autonomous Mobile Robots, Papers from the AAAI Workshop, 2011.
Demonstration of using ROSoClingo for planning tasks with a robot in an office environment


Victim and Object Identification in Urban Search and Rescue

As part of our research in autonomous robots for urban search and rescue we have developed techniques and autonomous software for recognising:

  • Trapped or injured victims of the disaster, and
  • 3D features that are typical of urban disaster sites, such as walls, doors, staircases, rubble, pipes and cavities, from both dense and sparse 3D point clouds.

Similar object recognition and identification is a crucial capability for robots in home environments, where the robot must identify not just small objects but also large items of furniture.
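
To make the plane-based approach concrete, the sketch below fits a single dominant plane to a point cloud with a basic RANSAC loop. Our published approach additionally extracts relational features between such planes and classifies object categories with relational learning, none of which is shown here; the function names and data are hypothetical.

    import numpy as np

    def fit_plane_ransac(points, n_iters=200, threshold=0.02, rng=None):
        """Fit one plane (unit normal n and offset d with n.p + d = 0) to an (N, 3) cloud."""
        rng = rng if rng is not None else np.random.default_rng(0)
        best_inliers, best_model = None, None
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue                              # degenerate (collinear) sample
            normal /= norm
            d = -normal @ sample[0]
            inliers = np.abs(points @ normal + d) < threshold
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (normal, d)
        return best_model, best_inliers

    # Hypothetical cloud: a noisy horizontal plane plus scattered outliers.
    rng = np.random.default_rng(1)
    plane = np.column_stack([rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500),
                             rng.normal(0, 0.005, 500)])
    outliers = rng.uniform(-1, 1, (100, 3))
    model, inliers = fit_plane_ransac(np.vstack([plane, outliers]))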

Publications include:

  • R. Farid and C. Sammut. Plane-based object categorisation using relational learning. Machine Learning, vol. 94, pp. 3–23, 2014.
  • T. Wiley, M. McGill, A. Milstein, R. Salleh, and C. Sammut, Spatial Correlation of Multi-sensor Features for Autonomous Victim Identification, RoboCup 2011: Robot Soccer World Cup XV, vol. 7416, Springer-Verlag Berlin Heidelberg, pp. 538–549, 2012.
  • M. McGill, R. Salleh, T. Wiley, A. Ratter, R. Farid, and C. Sammut, Virtual Reconstruction Using an Autonomous Robot, Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation, Sydney, Australia, 2012.
Examples of plane-based object identification from dense 3D point clouds


Discovering Hidden Properties of Objects

Sometimes active perception is required to find properties of objects that are not directly observable. For example, knowing the centre of mass of an object may be important for grasping, but its location may not be obvious from the object’s shape. However, some experiments may help to determine this and other hidden properties. To avoid performing unnecessary experiments, the robot’s visual system is used to create an internal model in a physics engine, where “thought experiments” are performed to determine which experiments in the real world will yield the highest information gain. This approach has been used to discover properties such as centre of mass and uneven friction in sets of wheels, and to predict simple behaviours of other robots.
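
The sketch below shows the experiment-selection step in isolation: each candidate experiment is simulated against a set of hypotheses about the hidden property (here, a coarse centre-of-mass location), and the robot performs the experiment whose predicted outcomes most reduce the entropy of its belief. The hypotheses, experiments and one-line simulator are hypothetical stand-ins for the physics-engine models used in the publications below.

    import math
    from collections import defaultdict

    def entropy(belief):
        return -sum(p * math.log2(p) for p in belief.values() if p > 0)

    def expected_information_gain(belief, experiment, simulate):
        """Expected entropy reduction from running `experiment`.
        simulate(hypothesis, experiment) predicts a discrete outcome, standing in
        for a run of the internal physics-engine model."""
        prior = entropy(belief)
        outcome_prob, grouped = defaultdict(float), defaultdict(dict)
        for hypothesis, p in belief.items():
            outcome = simulate(hypothesis, experiment)
            outcome_prob[outcome] += p
            grouped[outcome][hypothesis] = p
        expected_posterior = sum(
            p_o * entropy({h: p / p_o for h, p in grouped[outcome].items()})
            for outcome, p_o in outcome_prob.items())
        return prior - expected_posterior

    def choose_experiment(belief, experiments, simulate):
        return max(experiments, key=lambda e: expected_information_gain(belief, e, simulate))

    # Hypothetical: the centre of mass is at the left, middle or right of a block, and
    # the simulator predicts whether pushing at a given point tips the block over.
    def simulate(com, experiment):
        if experiment == "lift_straight_up":
            return "lifts"                 # uninformative: same outcome for every hypothesis
        return "stable" if com in experiment else "tips"

    belief = {"left": 1/3, "middle": 1/3, "right": 1/3}
    experiments = ["push_left", "push_middle", "push_right", "lift_straight_up"]
    print(choose_experiment(belief, experiments, simulate))   # prefers an informative push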

Publications include:

  • O. Sushkov and C. Sammut. Active robot learning of object properties. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
  • O. Sushkov and C. Sammut. Feature segmentation for object recognition using robot manipulation. Australasian Conference on Robotics and Automation, 2011.

Tactile Sensing

Dextrous manipulation requires effective tactile sensing. Our group has developed machine learning methods for an artificial finger to acquire the ability to distinguish textures and to recognise the precursors to slippage. This enables a manipulator to hold an object lightly, without slipping. The inspiration for the finger came through RoboCup contacts with the Asada laboratory in Osaka.  We reproduced the hardware and developed new machine learning methods for recognising patterns in multivariate time-series data.
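
The sketch below mimics the majority-voting idea at a toy scale: a classifier labels each short window of a tactile signal independently, and the texture label for the whole contact is the most common window label. The window features, the k-nearest-neighbour classifier (assuming scikit-learn is installed) and the synthetic data are hypothetical simplifications of the features and classifiers used in the publications below.

    import numpy as np
    from collections import Counter
    from sklearn.neighbors import KNeighborsClassifier

    def window_features(signal, window=64):
        """Split a 1-D tactile signal into windows and compute simple per-window features."""
        n = len(signal) // window
        windows = signal[:n * window].reshape(n, window)
        return np.column_stack([windows.std(axis=1),
                                np.abs(np.diff(windows, axis=1)).mean(axis=1)])

    def classify_by_majority(clf, signal):
        votes = clf.predict(window_features(signal))
        return Counter(votes).most_common(1)[0][0]

    # Hypothetical training data: two "textures" with different vibration levels.
    rng = np.random.default_rng(0)
    smooth, rough = rng.normal(0, 0.1, 4096), rng.normal(0, 0.5, 4096)
    X = np.vstack([window_features(smooth), window_features(rough)])
    y = ["smooth"] * 64 + ["rough"] * 64
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

    print(classify_by_majority(clf, rng.normal(0, 0.45, 2048)))   # expect "rough"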

Publications include:

  • N. Jamali and C. Sammut. Slip prediction using Hidden Markov Models: Multidimensional sensor data to symbolic temporal pattern learning. IEEE International Conference on Robotics and Automation, 2012.
  • N. Jamali and C. Sammut. Majority voting: Material classification by tactile sensing using surface texture. IEEE Transactions on Robotics, vol. 27, no. 3, pp.508–521, 2011.

Previous Participation in RoboCup

UNSW has a long and accomplished history in RoboCup. Our history of participation is:

  • 4-Legged League (soccer): 1999 - 2006
  • Standard Platform League (soccer): 2008 - 2016
  • Real Rescue Robots League: 2005 - 2011, 2013

Our teams have seen great success in these competitions. Highlights of our results include:

  • 4-Legged League (soccer):
    • 1st Place: 2000, 2001, 2003
    • 2nd Place: 1999, 2002, 2006
    • 3rd Place: 2005
  • Standard Platform League (soccer):
    • 1st Place: 2014, 2015
    • 2nd Place: 2010
    • 3rd Place: 2011
  • Real Rescue Robots League:
    • Best-in-Class Autonomy: 2009, 2010, 2011
    • Special award for human-machine interaction: 2009
    • Best-in-Class Mobility: 2010
    • 2nd Place Autonomy: 2006
    • 3rd Place (overall): 2005

Current and previous members of our teams have also volunteered their time for RoboCup committees:

  • Claude Sammut:
    • RoboCup Federation Board of Trustees
    • RoboCup 2019 General Chair
  • Brad Hall:
    • RoboCupSoccer Standard Platform League Executive
    • RoboCupSoccer Standard Platform League Organising Committee
  • Raymond Sheh:
    • RoboCupRescue Real-Robots Executive
    • Real Rescue Robots League Committee
  • Sean Harris:
    • RoboCupSoccer Standard Platform League Organising Committee
  • Maurice Pagnucco:
    • RoboCup Australia Regional Representative