

REI GOFFER: Morning, everyone. My name is Rei Goffer. I'm one of the co-founders of ClimaCell. Originally from Israel. Spent 10 years in the Israeli Air Force flying F-16s. Moved here for school, and started ClimaCell about three years ago while studying here in this building.

So today, we're really the global leaders in what we call micro weather, which is high-resolution weather forecasting. We've raised $77 million from investors from everywhere, from Japan to the US to Israel. You can see some of the names here on the board. And really, we serve the most weather-sensitive companies and organizations in the world today who are looking for innovation. We're about 100 employees in three offices across the US and Israel.

So weather forecasts still aren't really great. I guess you all can identify with that. And so the question is why that's the case. It really starts with the fact that we don't have enough data on what's going on in the atmosphere at any point in time. The image on the left shows New York City, one of the wealthiest cities in the world, with only three high-quality weather stations in the entire city.

But as I'm sure any of you have experienced, there are many, many microclimates within such a large city, and having just three sensors to capture all of that is definitely not enough. So that's one part of the problem. And if you think about other cities in the world that are less wealthy, obviously there is even far less data than that.

The other part of the problem is that the forecasting models we run are very, very coarse. The reason is that they're run by governments, and governments cannot really home in on every individual or every independent business; they have to serve everyone. And just because of that limitation, the resolution and the accuracy of these forecasts aren't really great. This is the global model run by the US at a resolution of 28 kilometers. Not great.

And the result of that-- you know, I'm showing the two extremes here. On one hand, we have thousands of people every year still dying from floods, which is something we know how to prevent and how to forecast. We just don't do it for 5 billion people around the globe. Exactly what was spoken about here before.

The other extreme is that the most high-end usage of weather forecasts, for example flying drones, is really unsolved today. This is a quote from Amazon. When they started looking at how they'll actually operate overseas, they said, well, the weather problem is not solved. We can't do it. MIT Lincoln Laboratory actually did extensive work on that, and they found that this is a huge gap today.

So how are we solving it? We basically look at everything in the world and say, this is a weather sensor for us. And we translate data from mobile devices, wireless signals, satellites, airplanes, and IoT devices into weather information.
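To make that concrete, here's a minimal sketch of one well-known technique in this space: estimating rain rate from the extra attenuation that rain causes on a commercial microwave link, using the standard power-law relation A = k * R^alpha * L. The coefficients and link parameters below are illustrative assumptions, not ClimaCell's actual pipeline.

```python
# Sketch: inferring rain rate from microwave-link attenuation.
# Uses the standard power-law relation A = k * R**alpha * L, where
# A is path attenuation (dB), R is rain rate (mm/h), L is link length (km).
# The coefficients here are illustrative assumptions.

def rain_rate_from_attenuation(attenuation_db: float,
                               link_length_km: float,
                               k: float = 0.1,
                               alpha: float = 1.0) -> float:
    """Invert A = k * R**alpha * L to recover rain rate R in mm/h."""
    if attenuation_db <= 0:
        return 0.0  # no excess attenuation means no rain detected
    specific_attenuation = attenuation_db / link_length_km  # dB/km
    return (specific_attenuation / k) ** (1.0 / alpha)

# Example: a 5 km cellular backhaul link losing 3 dB beyond its dry baseline.
print(f"{rain_rate_from_attenuation(3.0, 5.0):.1f} mm/h")
```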

[MUSIC PLAYING]

Here is an example from India: existing weather stations on the left, and our sensors on the right. You can see how many more we have. We have products all across the globe, from consumers to businesses. And here are some of the partners we have today, ranging from government and aviation to large energy companies.

If you're interested in working with us, please reach out.


BO ZHU: All right. Hi, everyone. I'm Bo from BlinkAI, and we're here to change the way that cameras see the world. As we all know, cameras are being embedded and deployed everywhere, onto a variety of devices and vehicles, in multiple growing markets.

But one universal problem is that almost all of these cameras perform poorly in low-light conditions. That's because camera sensors are getting smaller and smaller, to fit into smaller packages and to reduce cost. Ultimately, that means less light and less information comes into the sensor, resulting in poor-quality, noisy images, especially in the dark environments where it really matters.

And these poor-quality images are increasingly feeding critical computer vision systems with real-life implications. Here's a frame from the onboard video of the Uber self-driving car crash that happened last year. One of the main reasons for it was simply that the cameras failed to see that there was a pedestrian in the dark, causing the object-detection system to fail.

So, on the imaging side, what can we do about this? Well, we don't have that many options. One way is to increase the sensor size: larger, more expensive sensors and lenses to capture more light and more information. Or you can do more with the information you already have, which is what we do as humans through an amazing process called perceptual learning, whereby our brains constantly retrain themselves in how best to see and interpret the raw neural signals coming in. This is part of the reason why biological vision is so efficient.

So our technology, AUTOMAP, recapitulates this process with artificial neural networks, dramatically improving the imaging performance of any digital imaging sensor by three- to five-fold. We published the fundamental aspects of this work in Nature last year, and since then it has received significant attention from the scientific community and the media.

And here's just a quick demonstration. This is the default JPEG that comes out of a Samsung S9+. With the exact same raw data, this is the type of image we can achieve. Contrast that with something like Google Night Sight, which you might have heard of, which solves this problem by taking multiple frames over a period of about two seconds just to produce one output image. You can't use that for video.

But at BlinkAI, our deep-learning solution is able to work on every individual frame at real-time inference speed, and therefore we're able to do low-light video enhancement. Here's a quick example of that, left and right: on the left, the traditional ISP algorithm; on the right, what we can do.
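To illustrate the single-frame approach, here's a minimal PyTorch sketch of a denoiser that maps one noisy frame to a cleaner one; the architecture and sizes are generic assumptions, not BlinkAI's proprietary network. Because it needs only one frame per inference, this style of model can in principle run on live video, unlike multi-frame burst methods.

```python
# Sketch: per-frame low-light enhancement with a small denoising CNN.
# A generic illustration of the single-frame idea, not BlinkAI's network.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        # Predict the noise residual and subtract it (residual learning),
        # which tends to train more stably for denoising tasks.
        return noisy - self.net(noisy)

# One frame at a time, so this is usable for video streams.
model = TinyDenoiser().eval()
frame = torch.rand(1, 3, 720, 1280)  # a single noisy low-light frame
with torch.no_grad():
    clean = model(frame)
print(clean.shape)  # torch.Size([1, 3, 720, 1280])
```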

And now, if we just overlay an object-detection system, we see that we do far better in terms of detection performance as well. And here's an example of single-frame HDR: very similar to the Uber self-driving car scene, you can't see the person on the right side, but with our single-frame HDR we're able to get rid of these problems.

[MUSIC PLAYING]

So just, in conclusion, we have a proprietary machine-learning platform to maximally extract imaging data in low-signal environments. It's compatible with upstream imaging hardware, so you don't need to change your sensors, and compatible with downstream perception, so you don't need to change your algorithms.

Low-cost computation instead of expensive lenses and sensors. And finally, this is a really important problem in multiple markets, so if this is of interest to you or your organization, please come see me afterward. It's a software solution that can be deployed very easily on all sorts of hardware platforms. So thank you very much.


DAISY ZHUO: At Interpretable AI, we deliver state-of-the-art analytics solutions that are not black boxes. My name is Daisy Zhuo. I'm a co-founding partner, together with MIT Professor Dimitris Bertsimas and Dr. Jack Dunn.

Well, we all know about the great promise of self-driving cars. But the unfortunate fact is fatal accidents do happen. You just saw one from the Uber crash last year. So when that happens, how do we know who's at fault? Who should be held accountable? Can society really tolerate not understanding that?

Another example is in admissions. Harvard, just down the road, is involved in a lawsuit related to its admissions process. If a student is denied admission to the school, is it sufficient to just tell them and their mom and dad that an algorithm made that decision, without any explanation or proof that the method is bias-free? Interpretability matters. And for that reason, many regulations, including GDPR, are now mandating that AI systems be held accountable the same way that human decision makers are.

Interpretable AI addresses this issue head-on. We build interpretable solutions to your hard problems. Our solutions leverage proprietary software modules, each based on years of AI research at MIT.

With these modules as the building blocks, we build solutions that achieve state-of-the-art performance but remain completely understandable and fully transparent to everyone in your organization. The modules also cover the entire spectrum of the data analytics process, from data cleaning to prediction to decision making.
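As a concrete illustration of the contrast with black boxes, here's a minimal sketch using an ordinary scikit-learn decision tree as a stand-in for Interpretable AI's proprietary Optimal Trees modules; the dataset is just an example. The point is that the fitted model prints as explicit if-then rules that anyone in the organization can audit.

```python
# Sketch: an interpretable classifier whose decision logic can be read
# directly as rules. A plain CART tree stands in here for Interpretable
# AI's proprietary Optimal Trees; the dataset is only an illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a black box, the whole model prints as auditable if-then rules.
print(export_text(tree, feature_names=list(X.columns)))
```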

The technology has very broad applications, with current customers in many industries. Just to name a few: we have worked with major hospitals, including Massachusetts General, to build a very accurate surgical risk predictor that doctors there are using every day. We're working with a cybersecurity company for which we delivered a highly accurate, real-time malware detection algorithm that lets them understand the nature of the latest attacks.

For major car manufacturers, we help identify machine failure modes, make much better predictions, and make their predictive maintenance systems much more effective. There are many other applications in retail, banking, insurance, and pharmaceuticals, just to name a few. We're looking for additional partners to build and improve trust in your data-driven decision making. Please come talk to us at lunch. Thank you.


RYAN DAVIS: I'm Ryan, one of the co-founders of SAIL. We are a company out of the Media Lab, as well as CSAIL, and we've developed a data privacy platform to help your companies get access to the big data and AI solutions you need. We're very excited, because we're solving a very fundamental problem in business: getting access to new information to make better business decisions.

Now, getting access to this information and being able to use this data should be a simple process. But anyone who's tried to do this understands that getting access to data can be 90% of the process. You have to go through a long line of people-- IT, legal, management, and budgets-- in order to get access to the data you need. And there should be an easier way.

The problem is, nobody trusts data sharing. But working with our professors and advisors at the Media Lab, as well as at CSAIL, we found a better way to put, basically, a protective layer around data and code to protect it during use. This new way of protecting data is revolutionary. We're able to protect information so that your company can access the data it needs without owning it, or take your data and create new revenue from it without sharing it. And this is pretty radical.
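One classic building block for this kind of "use without sharing" is secure aggregation, sketched below as a toy illustration: each pair of parties shares a random mask that cancels in the sum, so only the aggregate is ever revealed. This is an assumption about the general technique area, not SAIL's actual protocol, which may rely on enclaves, encryption, or other mechanisms.

```python
# Sketch: pairwise-masked secure aggregation, one classic way to compute
# a joint result "without sharing" the underlying data. Each pair of
# parties agrees on a random mask; one adds it, the other subtracts it,
# so the masks cancel in the sum and no individual value is revealed.
# A toy illustration, not SAIL's actual protocol.
import random

def secure_sum(private_values: list[float]) -> float:
    n = len(private_values)
    masked = list(private_values)
    for i in range(n):
        for j in range(i + 1, n):
            mask = random.uniform(-1e6, 1e6)  # shared secret for pair (i, j)
            masked[i] += mask
            masked[j] -= mask
    # Each party publishes only its masked value; the masks cancel.
    return sum(masked)

hospital_counts = [120.0, 87.0, 203.0]  # each value stays private
print(round(secure_sum(hospital_counts), 6))  # 410.0 (up to float error)
```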

In fact, when our advisors go out and tell industries and experts about this, the reaction speaks for itself. The CTO of the Department of Health and Human Services said, holy shit. Holy shit. This has huge implications for healthcare. And we couldn't agree more.

We're working across healthcare right now with drug companies to basically digitize their preclinical licensing and drug discovery processes. We're also helping these kinds of companies access large databases of private information to discover new biomarkers and make new discoveries. And we're using our technology as a secure layer, otherwise referred to as a secure docker, to deploy algorithms out to customer networks while protecting privacy and security.

We're most excited right now because we want to talk to you, and we want to hear how you've tried to share and collaborate with data and algorithms. We're building out a privacy consortium across companies in pharma, healthcare, and insurance, as well as analytics and consulting companies, to bring people together to develop new business and make new discoveries, but without having to share any data.

So please come find us online at SecureAILabs.com, and we look forward to hearing from you. Thank you.


NATHAN WILSON: So artificial intelligence is reaching the point now where it can solve not just perceptual problems, like helping us see better and helping us understand speech, but also cognitive problems, like supporting our people's decisions in the field and helping us surface the right information at the right time.

And around MIT, people have been codifying enough of the blueprint that we can start to put this into systems and use it; in the past decade, it's really started to come out. So we at Nara Logics are encapsulating this into a system we call Synaptic Intelligence, and we're deploying it into large organizations in production.

Everybody here in this room who manages their organization and drives digital transformation knows that this actually exacerbates the problem of too much information. It's a growing problem: as we digitize, there's more information, and meanwhile, human bandwidth is shrinking. So we want to flip that problem on its head and make it so that when we get more data, it helps us route the information to the right person, to deal with the bandwidth problem.

And so we see systems where, when we power things for our customers, it helps them put the right products in front of their customers at the right time, and also empowers their employees so they can find what they need at the right time. We do this with our answers AI, plugging into your systems to make this possible. As a specific example, Rebecca talked about the implementation at Procter & Gamble. This helped them deploy in a couple of weeks. We sat in the middle; they had their interface.

An app, Olay Skin Advisor (you can download it right now), takes a picture of a person's face and, from those factors, tells them which P&G product they should use. And it does this in real time, with explainability. A similar problem, which doesn't sound the same but is really the same problem: for intelligence, we have systems in production right now running in the agencies that help them figure out, across the world's news coming in every day, which information should be surfaced to which analyst at which time.

We do this, again, by just plugging into your data. We take your data coming in; we can augment it if you want. We sit on top of other AI systems generating data, and we put our cube in the middle, in front of your interface, which then lets you show things the way you want and surface the right thing to the right person at the right time. We have proven, better results across industries. We have the explainability. And it's really easy to plug into systems, whether in the cloud or on-prem.
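As a toy illustration of recommendations that come with their reasons, in the spirit of (but not identical to) Nara Logics' synaptic approach, here's a minimal content-based recommender whose output includes the matched attributes that explain each ranking. The items and attributes are invented for illustration.

```python
# Sketch: a content-based recommender that returns its reasons along with
# its ranking, illustrating explainable recommendations. Items and
# attributes are invented; this is not Nara Logics' engine.
def recommend(user_profile: set[str],
              catalog: dict[str, set[str]]) -> list[tuple[str, int, set[str]]]:
    scored = []
    for item, attributes in catalog.items():
        shared = user_profile & attributes  # the explanation: matched traits
        scored.append((item, len(shared), shared))
    # Rank by number of matched attributes, best first.
    return sorted(scored, key=lambda t: t[1], reverse=True)

profile = {"dry skin", "age 40+", "fragrance-free"}
catalog = {
    "Moisturizer A": {"dry skin", "fragrance-free"},
    "Serum B": {"oily skin", "age 40+"},
    "Cream C": {"dry skin", "age 40+", "fragrance-free"},
}
for item, score, why in recommend(profile, catalog):
    print(f"{item}: score={score}, because of {sorted(why)}")
```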

So if anyone out there is interested in digital transformation, or wants to support the flip side of using data to make better decisions, and you're doing a build strategy and really want to do it right (you want to break down the data silos, of course), we're interested in finding you if you're interested in moving beyond pilots to actual production, because we know how to guide it there. And we'd love to work with people who have that forward vision.

If you're in any of the industries where we've found fertile soil for this, please feel free to reach out to us. And definitely come see us at the table at lunch. Thank you.


DAN STURTEVANT: Hi. My name is Dan Sturtevant. I'm at Silver Thread. And I just hit the button wrong. Let's see. Back. Back. OK.

So if you are in a large software enterprise, many of the projects that you have are at risk. Right? Some are on time. Some are delayed. Some fail. And all of the extra time that you spend on these software projects is money and resources wasted.

We have been working with customers who find that our technology is capable of helping them do things faster so that they can get more done. The way we do this: over the past 15 years, out of MIT and Harvard, we've developed technology to assess the technical health of software systems, both from a code-quality standpoint, which is what people traditionally look at, and also by using graph theory to reason about the architectural health of the system.

Is it modular? Does it have tight APIs? Good layering? Reuse? And so on. Motherhood and apple pie of software architecture.
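To give a flavor of the graph-theoretic side, here's a minimal sketch on an invented dependency graph, not Silver Thread's actual tooling: it computes cyclic groups and propagation cost, two metrics from the design-structure-matrix research tradition that this line of MIT/Harvard work builds on.

```python
# Sketch: graph-theoretic architecture health metrics on a toy dependency
# graph. The graph itself is invented for illustration.
import networkx as nx

deps = nx.DiGraph([
    ("ui.py", "api.py"), ("api.py", "core.py"), ("core.py", "db.py"),
    ("db.py", "core.py"),      # a dependency cycle: core <-> db
    ("util.py", "core.py"),
])

# Cyclic groups: strongly connected components with more than one file.
cycles = [c for c in nx.strongly_connected_components(deps) if len(c) > 1]
print("cyclic groups:", cycles)

# Propagation cost: fraction of (src, dst) pairs where a change can
# propagate from dst back to src through some chain of dependencies
# (each file also counts as reaching itself).
n = deps.number_of_nodes()
reachable = sum(len(nx.descendants(deps, f)) + 1 for f in deps.nodes)
print(f"propagation cost: {reachable / (n * n):.0%}")
```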

We've also done a lot of studies showing that there's a strong economic impact of that architectural health. In this system, where architectural health was degraded, a developer was only able to produce 8,000 lines of code, or 8 features, per year, and was spending 69% of their time fixing bugs. In this healthy system that we studied, that same developer was able to produce 20 features in a year and only spent 20% of their time fixing bugs. So there are clear economic and risk-related differences. We've done a lot of studies like this over the course of that time, and now we're commercializing this technology.

We've been working with the United States Air Force, for example, and they said we gave them a greater-than-20x ROI on the activities we did with them. We were able to assess their portfolio of software applications: 100 major systems in the United States Air Force. Some of them were healthy, and for the healthy ones, we could actually reward those teams for doing well.

Some were challenged. For the ones that were challenged, we used our technology to help them fix those systems, so that they could improve and reinforce the architectural boundaries in their systems, and actually measure the ROI of the fixes they made. So they can measure the economics before and after, and know that they got, say, $10 million back for every million dollars they invested in improvements to the system.

And then finally, there have been several programs where we assessed the health of those systems and realized that refactoring was no longer feasible, or that the economics of it weren't worthwhile. So they recapitalized several programs because of that.

But imagine you're a software executive with a portfolio of 100 or 1,000 systems out there. If you could make these decisions more effectively, there's a lot of money to be saved. And for the systems where refactoring is a rational option, being able to build the business case for it is critically important, because that's very hard to do.

In conclusion, if you'd be interested in working with us, we have some ways of doing pilots. First of all, we would love to do diagnostics on some of your systems, to assess their architectural health and benchmark it. We have the tooling to help your engineers fix these systems. And we'd also like to run diagnostics across your entire portfolio to help you target where you want to attack first. So thank you.

