DeepRay AI technology repairs distorted and obscured images in real time

DeepRay’s ability to see clearly in difficult, unpredictable situations could help transform numerous machine vision and imaging applications, from autonomous driving to more accurate medical imaging for healthcare professionals.

While machine vision systems have progressed rapidly, their performance can deteriorate when a view is obscured by rain, smoke, dirt or other obstructions, with serious implications for real-world applications.

DeepRay learns what real-world scenes and objects look like and also how they appear with various image distortions applied. When presented with a distorted image it has never seen before, the technology can form a real-time judgement of the ‘true’ scene behind the distortion. Having this “mind’s eye” ability means that DeepRay will outperform humans and existing machine vision approaches in reconstructing clear images under difficult conditions.
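To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch, not Cambridge Consultants’ code) of how a trained restoration network of this general kind might be applied to a single distorted video frame. The model, its weights and the frame format are assumptions for illustration only.

```python
# A minimal sketch only, assuming a restoration network trained elsewhere;
# this is NOT the DeepRay implementation. `model` is any image-to-image
# network, `frame` a distorted video frame as a (C, H, W) tensor in [0, 1].
import torch

def restore_frame(model: torch.nn.Module, frame: torch.Tensor) -> torch.Tensor:
    """Estimate the clean scene behind a single distorted frame."""
    model.eval()
    with torch.no_grad():                       # inference only, no gradients
        restored = model(frame.unsqueeze(0))    # add a batch dimension
    return restored.squeeze(0).clamp(0.0, 1.0)  # back to (C, H, W), valid range
```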

Tim Ensor, Commercial Director for Artificial Intelligence at Cambridge Consultants, commented: “This is the first time that a new technology has enabled machines to interpret real-world scenes the way humans can – and DeepRay can potentially outperform the human eye. This takes us into a new era of image sensing and will give flight to applications in many industries, including automotive, agritech and healthcare.

"The ability to construct a clear view of the world from live video, in the presence of continually changing distortion such as rain, mist or smoke, is transformational. We’re excited to be at the leading edge of developments in AI. DeepRay shows us making the leap from the art of the possible, to delivering breakthrough innovation with significant impact on our client’s businesses.”

DeepRay is the latest technology to emerge from Cambridge Consultants’ Digital Greenhouse, an experimental environment where data scientists and engineers explore and cultivate cutting-edge deep learning techniques.

DeepRay, which will be unveiled at next year's Consumer Electronics Show (CES) in Las Vegas, uses unique extensions of the Generative Adversarial Network (GAN) architecture.

Training requires six neural networks to compete against each other in teams, inventing difficult scenes and attempting to remove distortion. Effective end-to-end training of this many networks together has only become possible in the last two years, but it is yielding radical new capabilities.
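For readers who want a feel for the underlying mechanism, the sketch below shows a stripped-down adversarial training step in PyTorch. It pairs just two hypothetical networks, a `restorer` that removes distortion and a `critic` that judges whether an image looks like a genuine clean scene; DeepRay’s six-network team arrangement is not public, so this only illustrates the general GAN principle the article describes. All names, loss weights and data are illustrative assumptions.

```python
# Illustrative GAN-style training step only; not the DeepRay architecture.
# `restorer` maps distorted images to clean estimates, `critic` scores realism.
import torch
import torch.nn.functional as F

def train_step(restorer, critic, opt_r, opt_c, clean, distorted):
    # --- Critic update: tell genuine clean images apart from restored ones ---
    with torch.no_grad():
        fake = restorer(distorted)              # restored images, detached
    real_logits = critic(clean)
    fake_logits = critic(fake)
    loss_c = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()

    # --- Restorer update: fool the critic while staying close to the target ---
    fake = restorer(distorted)
    fake_logits = critic(fake)
    adv_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    recon_loss = F.l1_loss(fake, clean)         # pixel-level fidelity term
    loss_r = adv_loss + 100.0 * recon_loss      # weighting chosen arbitrarily here
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()
    return loss_c.item(), loss_r.item()
```

In a fuller system, additional networks could take on the role of inventing ever harder distortions for the restorer to undo, which is one plausible reading of the “teams” dynamic described above.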