Interop Summit 2017 at Google and BMVA technical meeting: Analysis and Processing of RGBD Data

Earlier this year I attended the Interop Summit and BMVA technical meeting: Analysis and Processing of RGBD Data.

Interop was hosted by DeepMind Health at Google’s London HQ. The discussions and lectures, attended by clinicians and technical experts, revolved around interoperability between different healthcare solutions; data governance, where the patient is in charge of their data; and using AI to develop new intelligent healthcare solutions. This is one of the main aims of DeepMind’s Streams app: using big data and AI to improve health, enhance care and reduce costs (NHS Five Year Forward View). However, there are hurdles to overcome before AI can be truly realised within healthcare providers.

Dr Andrew Gilbert hosted a BMVA conference on ‘Analysis and Processing of RGBD Data’. It was interesting to see how other researchers had tackled pose estimation; I particularly enjoyed the demonstrations of algorithms that combined discriminative and generative models to estimate joint positions. There was also a demonstration of the Structure depth sensor performing SLAM in real time (30+ FPS) on an iPad.

Turning algorithms in scientific papers into code

Here’s a useful read on implementing algorithms from scientific papers. Some of the key points:

  • Be critical in finding the right paper, to ensure it’s solid academic work: follow the chain of citations to see if other independent researchers have verified the algorithm; check whether it is published in a respected journal (generally the better algorithms are, as they come under more scrutiny); and look for a statistical analysis of the algorithm’s performance.
  • Look for existing implementations or libraries that can speed up development.
  • Prototype in a high-level language first and optimise later.
  • Know the definitions – “(i) avoid assumptions about words, and whenever in doubt look up the word in the context of the domain the publication was written, and (ii) write a glossary on a piece of paper of all the concepts and vocabulary specific to the publication that you did not know before.”

How to implement an algorithm from a scientific paper

Forum for AI Research Students (FAIRS 15) workshop at UoC

Last month I attended the Forum for AI Research Students (FAIRS) at Peterhouse College, Cambridge. The workshop encourages discussion of your research and of the problems you’ve encountered as a researcher. There was also an informative talk from Professor Max Bramer on what to expect in your viva, from an external examiner’s perspective.

It was great to attend and useful for networking, feedback and discussion of my research. It was also nice to bump into my final-year project supervisor, Professor Lars Nolle, whose help on my autonomous drone project was invaluable!

(more info) http://www.bcs-sgai.org/fairs2015/html/further_info.html
Peterhouse College

Nick Bostrom: What happens when our computers get smarter than we are?

Here’s a solid talk from Nick Bostrom on how negligence in the design of super intelligent AI could potentially have adverse effects on humanity.

From the talk I particularly like this sentence: “If you create a powerful optimization process to maximise objective x, you better make sure your definition of x incorporates everything you care about.” When setting an intelligent AI loose on a problem, it would need to take into account all the variables that align with the values of humanity.

This paperclip maximizer thought experiment from Nick helps illustrate the dangers of complacency, albeit in a humorous manner.
http://wiki.lesswrong.com/wiki/Paperclip_maximizer

EDIT:

I should point out that Nick’s talk and this post aren’t about deterring us from the goal of creating superintelligence, but about highlighting the threats we would likely face if we implement intelligent AI without sufficient consideration for our safety.

Start of PhD in machine learning and UoM SpiNNaker workshop

It has been a while since I last posted. I have started a PhD in machine learning at Nottingham Trent University, with the specific goal of classifying actions for stroke rehabilitation. I will do a more in-depth post on this once I have finished coding my model.

A few months ago I attended the SpiNNaker workshop at the University of Manchester. They have developed an impressive parallel architecture capable of simulating large numbers of neurons, while maintaining the flexibility to develop your own neuron activation functions and learning dynamics. Simulation software can currently be developed in Python or C. Another interesting note is that SpiNNaker is used by the Human Brain Project.
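
As a rough illustration of the kind of neuron dynamics the platform lets you define, here is a minimal leaky integrate-and-fire update loop. To be clear, this is not SpiNNaker code (on the hardware you would write against their Python or C APIs); it is just a sketch of the model class being simulated, with textbook parameter values rather than SpiNNaker defaults.

#include <cstdio>
#include <vector>

int main()
{
    // Illustrative parameters: time step, membrane time constant, and
    // resting/threshold/reset potentials (typical textbook values).
    const double dt = 1.0, tau = 20.0;                           // ms
    const double vRest = -65.0, vThresh = -50.0, vReset = -70.0; // mV
    const double input = 20.0;                                   // constant drive, mV

    std::vector<double> v(1000, vRest);                          // 1,000 neurons

    for (int t = 0; t < 100; ++t)                                // 100 ms of simulation
    {
        for (std::size_t i = 0; i < v.size(); ++i)
        {
            // Leak towards the resting potential, plus the input drive.
            v[i] += (dt / tau) * (-(v[i] - vRest) + input);
            if (v[i] >= vThresh)
            {
                v[i] = vReset;                                   // spike, then reset
                if (i == 0)
                    std::printf("neuron 0 spiked at t = %d ms\n", t);
            }
        }
    }
    return 0;
}

SpiNNaker’s contribution is running updates like this for very large populations in parallel and in real time, with spikes routed between chips as packets.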

They also mentioned that they are looking at releasing a cloud-based platform for running your simulations on their huge array of networked boards (see image).


SpiNNaker cabinet – 48 chips per board, 24 boards per rack, 5 racks per cabinet

http://apt.cs.manchester.ac.uk/projects/SpiNNaker/project/
https://www.humanbrainproject.eu/en_GB/neuromorphic-computing-platform1

Machine Learning – Microsoft’s object classification system (Project Adam)

So Microsoft have claimed to have produced an object classification system with double the accuracy of other state-of-the-art systems, using 30 times fewer machines (although it may have taken longer to train). I presume they are basing this on Google and Stanford’s attempt, published here: http://arxiv.org/pdf/1112.6209.pdf. Google’s implementation achieved “15.8% accuracy in recognizing 22,000 object categories from ImageNet”, which means Microsoft’s attempt should achieve at least 31.6%, as they have used the same dataset. We’ll have to wait for the publication of their paper, which is in review, to see the real results and find out whether it is more than just hyperbole.

Full article here:

http://research.microsoft.com/en-us/news/features/dnnvision-071414.aspx

A Star Search version 1

After a few days I’ve got my A Star pathfinding algorithm working. This search method is guaranteed to find the optimal path to the goal node, provided the heuristic never overestimates the true cost to the goal. The cost of each node is calculated as follows:

f = g + h, where f is the total cost of the node, g is the cost from the start node to the current node, and h is the estimated cost from the current node to the goal node.

Nodes with the lowest f cost are expanded first.

The h value is calculated from the distance between two hexagons. To achieve this, the coordinate system offsets the y axis while keeping the x axis straight across. The following function then returns the distance to the goal hexagon, with each hexagon travelled adding 1 to the distance. Intuitively, when dx and dy share a sign a diagonal step covers both axes at once, so the distance is the larger of the two; when the signs differ, each axis has to be travelled separately.

#include <cstdlib>   // std::abs
#include <algorithm> // std::max

// Sign of x: -1, 0 or 1.
int Sign(int x) { return (x > 0) - (x < 0); }

int HexDistance(const Hex& hexA, const Hex& hexB) // Hex: the project's tile type
{
    int dx = hexB.posX - hexA.posX;
    int dy = hexB.posY - hexA.posY;
    if (Sign(dx) != Sign(dy))
        return std::abs(dx) + std::abs(dy);
    return std::max(std::abs(dx), std::abs(dy));
}
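
For completeness, here is a rough sketch of the A Star main loop itself, showing where f = g + h fits in. This is not the exact code from my implementation: nodes are plain ints, each step costs 1, and Neighbours() and Heuristic() are assumed helpers (the latter being the hex distance above).

#include <functional>
#include <queue>
#include <unordered_map>
#include <vector>

std::vector<int> Neighbours(int node); // assumed: adjacent walkable hexes
int Heuristic(int node, int goal);     // assumed: hex distance to the goal

bool AStar(int start, int goal, std::unordered_map<int, int>& cameFrom)
{
    std::unordered_map<int, int> g;    // best known cost from the start node
    using Entry = std::pair<int, int>; // (f cost, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

    g[start] = 0;
    open.push({Heuristic(start, goal), start});

    while (!open.empty())
    {
        int current = open.top().second;
        open.pop();
        if (current == goal)
            return true;               // walk cameFrom backwards to recover the path

        for (int next : Neighbours(current))
        {
            int tentative = g[current] + 1;      // each hexagon travelled costs 1
            if (!g.count(next) || tentative < g[next])
            {
                g[next] = tentative;             // found a cheaper route to next
                cameFrom[next] = current;
                open.push({tentative + Heuristic(next, goal), next});
            }
        }
    }
    return false;                      // no route to the goal exists
}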

Key:
S=start node
F=finish(goal) node
Black=wall
Blue=evaluated node
Green=node on optimal path


ARDrone Framework Release

The download link at the bottom contains the code for an ARDrone framework that works on Windows. The SDK has been modified to support multiple drones and autonomous navigation, and a Qt GUI is provided. Some other libraries are required; details can be found at https://projects.ardrone.org/

Currently the video stream only works on the AR.Drone 1.0; the AR.Drone 2.0 needs different decoders to render it. FFmpeg (http://www.ffmpeg.org/) contains the required decoders.

http://www.filefactory.com/file/1ab7of28mdp3/n/ARDrone_zip

Multiple Drones

I’ve been working on getting multiple drones flying from one computer for the last few days and finally managed it. To achieve this with one wifi card, I changed the way the drones communicate with the computer by creating an ad hoc network. In this type of network communication goes directly between the devices (nodes), so there is no central modem. To set the drones up for an ad hoc network I followed this tutorial (link at the bottom of the page). Once the ad hoc network was working, I went about modifying the 1.8 SDK to send and receive packets to/from multiple IPs (drones). This setup allows multiple drones to be controlled autonomously or manually from one computer.
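
To give a flavour of the SDK change, here is a hedged sketch of the core idea: one UDP socket, with each drone’s AT commands addressed to its own IP (the AR.Drone listens for AT commands on UDP port 5556). The IPs and the hard-coded take-off command are illustrative only, and POSIX sockets are used for brevity (the Windows framework uses Winsock); the real modification threads per-drone state through the whole SDK.

#include <cstdio>
#include <string>
#include <vector>
#include <netinet/in.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>

int main()
{
    // Example static IPs assigned to each drone on the ad hoc network.
    std::vector<std::string> droneIps = {"192.168.1.10", "192.168.1.11"};

    int sock = socket(AF_INET, SOCK_DGRAM, 0); // one UDP socket for all drones
    int seq = 1;                               // AT command sequence number

    for (const std::string& ip : droneIps)
    {
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5556);           // AR.Drone AT command port
        inet_pton(AF_INET, ip.c_str(), &addr.sin_addr);

        // AT*REF with the take-off bit set (value from the AR.Drone SDK docs).
        // In the real framework each drone keeps its own sequence counter.
        char cmd[64];
        int len = std::snprintf(cmd, sizeof(cmd), "AT*REF=%d,290718208\r", seq);
        sendto(sock, cmd, len, 0, (sockaddr*)&addr, sizeof(addr));
    }

    close(sock);
    return 0;
}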

Here’s the result:

drone ad hoc tutorial:
http://drones.johnback.us/blog/2013/02/03/programming-multiple-parrot-a-dot-r-drones-on-one-network-with-node-dot-js/

Drone Update

So after a few flight tests the drone has decided to break. Fortunately the university has ordered another two drones (2.0 versions), which will allow me to attempt to get multiple drones connected to one computer. I will need to rewrite some code to remove static functions and loop the code that initialises the wifi connection. Due to the design of the AR.Drone, each drone acts as a wireless access point, and therefore multiple wifi adapters are required to connect to multiple drones.