
BlinkAI: Imaging AI for autonomy, robotics, sensing

Updated: Sep 10, 2019



BO ZHU: All right. Hi, everyone. I'm Bo from BlinkAI, and we're here to change the way that cameras see the world. As we all know, cameras are being embedded and deployed everywhere, onto a variety of devices and vehicles across multiple growing markets.

But one of the universal problems is that almost all of these cameras perform poorly in low-light conditions. That's because camera sensors are getting smaller and smaller, both to fit into smaller packages and to reduce cost. Ultimately, that means less light, and therefore less information, reaches the sensor, resulting in noisy, poor-quality images, especially in the dark environments where it really matters.
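To make the physics concrete, here is a small illustrative simulation (not from the talk): photon arrival is Poisson-distributed, so a pixel's signal-to-noise ratio scales roughly with the square root of the photons it collects, and a smaller sensor that collects less light is inherently noisier.

```python
# Illustrative sketch (not BlinkAI code): photon shot noise explains why
# smaller sensors produce noisier images. Photon arrival follows a Poisson
# distribution, so SNR scales with the square root of the photon count:
# halving the light collected per pixel cuts SNR by roughly 1.4x.
import numpy as np

rng = np.random.default_rng(0)

def snr_for_photon_count(mean_photons, n_pixels=100_000):
    """Simulate a flat gray patch and report its signal-to-noise ratio."""
    counts = rng.poisson(mean_photons, size=n_pixels)  # shot noise
    return counts.mean() / counts.std()

for photons in (1000, 100, 10):  # bright scene down to a dim scene
    print(f"{photons:5d} photons/pixel -> SNR ~ {snr_for_photon_count(photons):.1f}")
# SNR drops roughly as sqrt(photons): ~31.6, ~10.0, ~3.2
```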



This matters because these poor-quality images are increasingly driving critical computer vision systems with real-life implications. Here's a frame from the onboard video of the Uber self-driving car crash that happened last year. One of the main reasons for the crash was simply that the cameras failed to see that there was a pedestrian in the dark, causing the object-detection system to fail.

So, on the imaging side, what can we do about this? Well, we don't have that many options. One way is to increase the sensor size: use larger, more expensive sensors and lenses to capture more light and more information. Or you can do more with the information you already have, which is what we do as humans through an amazing process called perceptual learning, whereby our brains constantly retrain themselves to best see and interpret the raw neural signals coming in. This is part of the reason biological vision is so efficient.

Our technology, AUTOMAP, recapitulates this process with artificial neural networks, dramatically improving the imaging performance of any digital imaging sensor by three- to five-fold. We published the fundamental aspects of this work in Nature last year, and since then it has received significant attention from the scientific community and the media.
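The talk doesn't disclose BlinkAI's actual architecture; as a hedged sketch, the core idea of learned image enhancement can be illustrated with a small convolutional network trained on pairs of noisy low-light frames and clean references, so the network learns how to interpret raw sensor signals, analogous to perceptual learning:

```python
# Minimal PyTorch sketch of a learned low-light enhancement network.
# This is NOT BlinkAI's architecture (not disclosed in the talk); it only
# illustrates the idea of training a network to map noisy low-light raw
# frames to clean images instead of relying on a hand-tuned ISP pipeline.
import torch
import torch.nn as nn

class LowLightEnhancer(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, noisy_raw):
        # Predict a residual correction; the skip connection preserves
        # scene content so the network focuses on noise and lost detail.
        return noisy_raw + self.body(noisy_raw)

model = LowLightEnhancer().eval()
with torch.no_grad():
    frame = torch.rand(1, 1, 128, 128)   # stand-in for a raw sensor frame
    enhanced = model(frame)              # same shape, enhanced output
```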

And here's a quick demonstration. This is the default JPEG that comes out of a Samsung S9+. From the same exact raw data, this is the type of image we can achieve. In contrast, approaches like Google Night Sight, which you might have heard of, solve this problem by capturing multiple frames over a period of two seconds or so to produce a single output image. You can't use that for video.

But BlinkAI's deep-learning solution works on every individual frame at real-time inference speed, so we're able to do low-light video enhancement. Here's a quick example, left and right: on the left, the traditional ISP algorithm; on the right, what we can do.
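Because each frame is enhanced independently, the model can slot into an ordinary video loop with no multi-frame capture delay, unlike burst methods such as Night Sight. A hedged sketch, reusing the illustrative LowLightEnhancer above and OpenCV for I/O (the clip name is hypothetical):

```python
# Hedged sketch of per-frame, real-time video enhancement. Reuses the
# illustrative LowLightEnhancer defined above; a real deployment would
# run an optimized model on the device's GPU/NPU.
import cv2
import torch

model = LowLightEnhancer().eval()
cap = cv2.VideoCapture("low_light_clip.mp4")  # hypothetical input video

with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        x = torch.from_numpy(gray).float().div_(255.0)[None, None]  # 1x1xHxW
        y = model(x).clamp_(0, 1).mul_(255).byte()[0, 0].numpy()
        cv2.imshow("enhanced", y)
        if cv2.waitKey(1) == 27:  # Esc to stop
            break

cap.release()
cv2.destroyAllWindows()
```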

And if we overlay an object detection system on top, we see that we do far better in terms of object detection performance as well. Here's an example of single-frame HDR: very much like the Uber self-driving car scene, you can't see the person on the right side, but with single-frame HDR we're able to get rid of these problems.
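The talk doesn't name the detector used in the overlay demo; as a hedged sketch, an unmodified off-the-shelf detector (torchvision's pretrained Faster R-CNN stands in here) can consume the enhanced frames directly, which is the point of the downstream-compatibility claim below:

```python
# Hedged sketch: enhanced frames feed an unmodified, off-the-shelf
# detector. Torchvision's pretrained Faster R-CNN is an illustrative
# stand-in showing where enhancement sits in the pipeline (before
# detection, with no retraining of the detector).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(pretrained=True).eval()

with torch.no_grad():
    # Stand-in for an enhanced RGB frame: 3xHxW float tensor in [0, 1].
    enhanced_frame = torch.rand(3, 480, 640)
    preds = detector([enhanced_frame])[0]  # dict of boxes, labels, scores

# Pedestrians invisible in the raw low-light frame should now score
# above the detection threshold.
keep = preds["scores"] > 0.5
print(preds["boxes"][keep], preds["labels"][keep])
```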

[MUSIC PLAYING]

So, in conclusion: we have a proprietary machine-learning platform that maximally extracts imaging data in low-signal environments. It's compatible with upstream imaging hardware, so you don't need to change your sensors, and compatible with downstream perception, so you don't need to change your algorithms.

It's low-cost computation instead of expensive lenses and sensors. And finally, this is an important problem in multiple markets, so if it's of interest to you or your organization, please come see me afterward. It's a software solution that can be deployed very easily on all sorts of hardware platforms. Thank you very much.
