Machine intelligence for visual classification and detection in
human-robot collaboration tasks and medical applications.

Projects

Convolutional Neural Networks

I am currently developing convolutional neural networks for machine intelligence, with a focus on visual classification and detection for human-robot collaboration and medical applications.
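
By way of illustration, a minimal convolutional classifier in PyTorch might look like the following. This is a sketch only, not the actual networks I am developing; the layer sizes and the 64x64 input assumption are arbitrary.

import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    # Minimal convolutional classifier: two conv blocks followed by a linear head.
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3-channel RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Example: classify a batch of four 64x64 RGB images.
model = SmallConvNet(num_classes=10)
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 10])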

Computationally Inexpensive Inverse Kinematics

Existing inverse kinematics (IK) algorithms typically trade computational complexity against how natural the resulting motion appears: natural-looking motion tends to come from computationally expensive algorithms, while cheaper algorithms produce less natural motion. Getting more natural motion for less computation is rarely, if ever, on offer.

A newer approach, however, promises the best of both worlds: a computationally inexpensive IK solver that also produces organic, natural-looking results. FABRIK (Forward And Backward Reaching Inverse Kinematics; Aristidou & Lasenby, 2009) is an IK solver aimed at animation and robotics (although, to my knowledge, not yet applied to physical robots). Presented here is my algorithm based on FABRIK, developed to work simultaneously in simulation and on physical robots.
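
As a minimal sketch of the core FABRIK iteration (illustrative only, not my full algorithm), assuming a serial chain of point joints with fixed segment lengths and no joint-angle constraints:

import numpy as np

def fabrik(joints, target, tolerance=1e-3, max_iterations=100):
    # joints: (n, 3) joint positions; joints[0] is the fixed base.
    # target: (3,) desired position for the end effector joints[-1].
    p = np.array(joints, dtype=float)
    target = np.asarray(target, dtype=float)
    lengths = np.linalg.norm(np.diff(p, axis=0), axis=1)  # fixed segment lengths
    base = p[0].copy()

    # Target out of reach: straighten the chain towards it and stop.
    if np.linalg.norm(target - base) > lengths.sum():
        for i in range(len(lengths)):
            direction = (target - p[i]) / np.linalg.norm(target - p[i])
            p[i + 1] = p[i] + lengths[i] * direction
        return p

    for _ in range(max_iterations):
        # Forward reaching: pin the end effector to the target, work back towards the base.
        p[-1] = target
        for i in range(len(p) - 2, -1, -1):
            direction = (p[i] - p[i + 1]) / np.linalg.norm(p[i] - p[i + 1])
            p[i] = p[i + 1] + lengths[i] * direction
        # Backward reaching: re-anchor the base, work forward to the end effector.
        p[0] = base
        for i in range(len(p) - 1):
            direction = (p[i + 1] - p[i]) / np.linalg.norm(p[i + 1] - p[i])
            p[i + 1] = p[i] + lengths[i] * direction
        if np.linalg.norm(p[-1] - target) < tolerance:
            break
    return p

# Example: a three-segment arm with unit-length links reaching for a nearby point.
arm = [[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]]
print(fabrik(arm, [1.5, 1.5, 0.0]))

Each iteration runs two passes over the chain, so the cost per iteration is linear in the number of joints, which is what makes the approach computationally inexpensive.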


Semi-Autonomous Multirobot Multiuser System

Compared to single robots, multirobot systems tend to be more fault tolerant and robust because control is distributed and there is no single point of failure. Multirobot systems can also cover a larger area, and individual units tend to be mechanically simpler and less expensive to replace.

The goal of this work is to develop a semi-autonomous multirobot system with single- and multi-user access. A dynamic and flexible structure will allow one or more robot groups to be assembled in real time, and machine intelligence will provide varying levels of autonomous behaviour among the individuals in a group. Intuitive multirobot interaction and control interfaces will give one or more humans a more natural way to control and participate in the system, with cooperative participation handled through simple drop-in and drop-out capabilities for shared, dynamic experiences.
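
As a rough illustration of the drop-in/drop-out idea, a robot group that robots and users can join and leave at any time could be modelled as below. The class and method names are hypothetical, not the actual system design.

class RobotGroup:
    # Hypothetical sketch of a dynamically assembled robot group with
    # drop-in/drop-out participation; not the actual system design.

    def __init__(self, name):
        self.name = name
        self.robots = set()   # robot identifiers currently in the group
        self.users = set()    # users currently participating

    def add_robot(self, robot_id):
        self.robots.add(robot_id)

    def remove_robot(self, robot_id):
        self.robots.discard(robot_id)   # the group keeps working if a unit is lost

    def drop_in(self, user_id):
        self.users.add(user_id)         # a user joins the shared session

    def drop_out(self, user_id):
        self.users.discard(user_id)     # ...and can leave again at any time

# Example: assemble a group, share it between two users, lose one robot.
group = RobotGroup("survey-team")
for robot in ("r1", "r2", "r3"):
    group.add_robot(robot)
group.drop_in("alice")
group.drop_in("bob")
group.remove_robot("r2")
print(group.robots, group.users)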




Intuitive Human-Robot Interaction and Control Interfaces

Robot interaction and control should be intuitive and easy to use, and should know when to stay out of the way. This is why I am developing intuitive human-robot interaction and control interfaces: to provide more natural ways of controlling and participating in multirobot systems.

I have already developed prototype hardware and software on which to expand and improve. The initial prototype is a couple of years old, and I am in the process of resurrecting the hardware and software so they can be demonstrated here again.

About Me

I am a self-motivated and versatile software engineer with experience across multiple industries and languages, and a proven ability to learn new technologies quickly. I specialise in healthcare/medical, backend interfacing and integration, data visualisation and simulation, machine learning and neural networks, robotics, and GIS. My industry experience includes U.S. Healthcare and Government Services, U.S. Financial Services, and U.K. Defence and Government Services.

In addition to my professional career, I am currently developing convolutional neural networks for machine intelligence, with a focus on visual classification and detection for human-robot collaboration and medical applications.

For more information please visit my LinkedIn page.

Contact Me

I am actively seeking remote software development work, preferably contract, though I will also consider full- or part-time roles. I have remote work experience and the professional, dependable, and self-motivated work ethic such roles require.

Please connect with me via LinkedIn or email stephen@augmentedrobotics.com.