That’s no carrot!

Billions of pounds are being lost at self-checkouts by the use of this subtle trick — but is there a clever way that computer vision and machine learning can help to stop it? We’ve created a prototype using AWS Panorama to try to come up with an answer.

NCR Corporation model of self-service checkouts and fast-lane at a Sainsbury’s store. (Creative Commons)

Did you know that 1 in 4 people have admitted to taking an item from a self-checkout without paying for it? Here’s one trick: you put a more expensive product on the scales and choose “carrots”. Because something is on the scales, the self-checkout naively believes that whatever you put down (wine? A PlayStation?) is a carrot, lets you move it to the bagging area, and you walk away with the more expensive product for the price of carrots. Score! Also, theft.

Because it is so easy and so many people are doing it, it’s costing retailers an arm and a leg (which, if weighed by the kilo, would probably be more expensive than carrots!). Retailers don’t know exactly how much they are losing here, because the damage is hidden, lumped together on the balance sheet with a bunch of other invisible leakages. But it is big: according to one estimate, over £3 billion was lost to self-checkout fraud in the UK in 2017.

Thinking about this, we wondered whether we could build a solution to the problem — something we called the “carrot / no-carrot” system.

It uses cameras and the latest computer vision and artificial intelligence to identify whether people are actually putting carrots (or broccoli, or oranges) on the scales. If they are not, we very gently inform the customer that our humble system was not able to see the carrots they claimed were there, and suggest they ask for help or start again. How simple is that! With one small flick of our wrists and a bit of hard work, we do away with a big source of money loss for our retail clients.
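The decision step described above can be sketched in a few lines. This is a simplified illustration, not our production code: the label set, the confidence threshold, and the `check_claim` function are all hypothetical names we made up for the example, and in the real system the predictions come from a neural network running on the Panorama device.

```python
# Hypothetical sketch of the "carrot / no-carrot" decision step.
# All names and thresholds here are illustrative, not the real system.

PRODUCE_LABELS = {"carrot", "broccoli", "orange"}
CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off; would be tuned per deployment


def check_claim(claimed_item: str, predictions: dict) -> str:
    """Compare the item the customer selected on the touchscreen
    against what the camera-side classifier believes is on the scales.

    `predictions` maps class labels to classifier confidence scores.
    Returns "allow" or "ask_for_help".
    """
    # Only produce claims are checked in this sketch.
    if claimed_item not in PRODUCE_LABELS:
        return "allow"

    # Take the classifier's most confident guess.
    label, score = max(predictions.items(), key=lambda kv: kv[1])

    if label == claimed_item and score >= CONFIDENCE_THRESHOLD:
        return "allow"
    # The camera disagrees: gently suggest assistance or a re-scan.
    return "ask_for_help"


# e.g. a PlayStation rung up as carrots:
# check_claim("carrot", {"carrot": 0.05, "unknown_object": 0.92})
```

The point of keeping the check this simple is that it sits alongside the existing checkout flow: a failed check never blocks the transaction outright, it only escalates to a human.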

A schematic of how our system works.

In order to get there we had to solve a bunch of problems in software engineering, computer vision, remote development, and machine learning, but in the end it only took two of us two weeks to build this prototype. It helped that we had access to the preview edition of the AWS Panorama developer kit: a hardened minicomputer with the latest neural acceleration from NVIDIA, designed for computer vision applications and set up to connect directly to the AWS cloud. It made life very easy for us, and we built our demo around this cute little machine. But it is also because we love doing this kind of thing so much that we skipped meetings we really should have attended in order to get it done.

Here’s a screenshot of the user interface showing the prototype doing its thing.

This is just a starting point, but it is the lowest-hanging fruit (because carrots are fruits, QED) and allows us to start introducing barriers to theft one by one, even product by product, without affecting the checkout flow that customers are already used to. We also finally get to measure, at a much higher level of granularity, just how money is being lost at self-checkout.

You can see it in action here:
