MIT is developing smarter maps

Google Street View is already something of a revelation. The ability to see 360-degree images of millions of locations around the world is something we couldn't have imagined a generation ago.

And yet, being able to look up images via address is just the beginning. Researchers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) have taken things to a whole new level with a “deep learning” project that aims to add impressively nuanced layers of context and detail to maps.

The algorithm can predict where the nearest McDonald's is based on an image from Google Street View.

For their project, the MIT researchers selected eight cities around the world (Boston, Chicago, Hong Kong, London, Los Angeles, New York, Paris, and San Francisco). They split the cities into hundreds of thousands of individual locations and used Google Street View to compile the views facing north, south, east, and west from each location. In all, they ended up with 8 million Google Street View images, which they fed into a deep learning algorithm.
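
As a rough sketch of what that data-collection step might look like, the snippet below pulls the four cardinal views for a single sampled location using Google's Street View Static API. The endpoint and parameters are from Google's public API documentation, but the image size, field of view, and the `fetch_cardinal_views` helper are our own illustrative choices, not details from the MIT project.

```python
import requests

# Sketch of the data-collection step: for each sampled location,
# download the four cardinal Street View images. Image size, field
# of view, and the helper name are illustrative, not MIT's setup.

STREET_VIEW_URL = "https://maps.googleapis.com/maps/api/streetview"
API_KEY = "YOUR_API_KEY"  # placeholder; requires a real Google API key

def fetch_cardinal_views(lat, lng):
    """Return the north/east/south/west images for one location."""
    images = {}
    for name, heading in [("north", 0), ("east", 90), ("south", 180), ("west", 270)]:
        params = {
            "size": "640x640",           # image dimensions in pixels
            "location": f"{lat},{lng}",
            "heading": heading,          # compass bearing of the camera
            "fov": 90,                   # four 90-degree views cover 360 degrees
            "key": API_KEY,
        }
        resp = requests.get(STREET_VIEW_URL, params=params)
        resp.raise_for_status()
        images[name] = resp.content     # raw JPEG bytes
    return images

# Example: one location in Boston, one of the eight study cities.
views = fetch_cardinal_views(42.3601, -71.0589)
```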

The researchers put the algorithm to two primary uses. First, they plotted the locations of McDonald's restaurants in all eight cities and used them as reference points. They then pitted human users against the algorithm to see whether humans or technology were better at guessing proximity to (and direction toward) the nearest McDonald's based on visual cues in the images.

Luke Dormehl of Fast Company's Co.Exist recently wrote a feature about the algorithm. Dormehl reports: "While humans proved better at navigating to their nearest McDonald's in the fewest possible steps, the algorithm consistently outperformed people when being shown two photos and answering which scene takes you closer to a Big Mac."
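
To make that two-photo comparison concrete, here is a toy sketch of the task. The `predict_distance` regressor stands in for MIT's learned model, which isn't public in this form; here it is just a placeholder linear scorer over raw pixels, and `extract_features`, `closer_photo`, and the random weights are all hypothetical.

```python
import numpy as np

# Toy sketch of the two-photo task: estimate each photo's distance to
# the nearest McDonald's, then pick whichever estimate is smaller.
# The "model" below is a placeholder, not MIT's actual network.

def extract_features(image):
    # Stand-in featurizer: a real system would use learned CNN
    # features; here we just flatten the pixel values.
    return np.asarray(image, dtype=float).ravel()

def predict_distance(features, weights):
    # Stand-in regressor: maps features to an estimated distance
    # (in meters) to the nearest McDonald's.
    return float(features @ weights)

def closer_photo(image_a, image_b, weights):
    """Return 'A' or 'B' for whichever photo looks closer to a McDonald's."""
    d_a = predict_distance(extract_features(image_a), weights)
    d_b = predict_distance(extract_features(image_b), weights)
    return "A" if d_a < d_b else "B"

# Usage with synthetic 8x8 "images" and random weights.
rng = np.random.default_rng(0)
img_a, img_b = rng.random((8, 8)), rng.random((8, 8))
print(closer_photo(img_a, img_b, weights=rng.random(64)))
```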

The algorithm uses cues such as the prevalence of taxicabs to guess that an image comes from a densely populated area, and cues such as a visible ocean to guess that it comes from the outskirts of a city.

While the idea of using visual cues to detect proximity to a McDonald’s is interesting, we think the algorithm’s second function is much more compelling. The MIT researchers layered their image-centric maps with “aggregated crime data” in order to construct crime density maps.
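
The coverage we've seen doesn't spell out how the researchers aggregated the crime data, but the basic idea of a crime density layer can be sketched in a few lines: bin incident coordinates into a grid and normalize. Everything below (the coordinates, the grid resolution, and the `crime_density_grid` helper) is invented for illustration.

```python
import numpy as np

# Sketch of a crime density layer: bin incident coordinates into a
# 2D grid of counts, then normalize. The coordinates, grid size, and
# helper name are illustrative, not the MIT team's method.

def crime_density_grid(lats, lngs, bins=100):
    """Bin incident coordinates into a grid; return normalized density."""
    counts, lat_edges, lng_edges = np.histogram2d(lats, lngs, bins=bins)
    density = counts / counts.sum()  # fraction of all incidents per cell
    return density, lat_edges, lng_edges

# Example with synthetic incidents clustered around one hotspot.
rng = np.random.default_rng(1)
lats = 42.36 + 0.01 * rng.standard_normal(5000)
lngs = -71.06 + 0.01 * rng.standard_normal(5000)
density, _, _ = crime_density_grid(lats, lngs)
print(density.max())  # peak cell sits at the synthetic hotspot
```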

To us, these crime density maps are the real story. They're part of the movement toward "context-aware" maps, and the list of potential applications is practically endless.

Case in point: Traveling to an unfamiliar city on business and need to grab a late-night bite to eat? Use a context-aware map to find not just a restaurant, but a restaurant in a safe area.

Shopping for a new home for you and your young children? Make sure that both the home—and the assigned local elementary school—are in safe areas.

Plotting a bike route to work? Find one with low traffic, low crime, and a good coffee shop along the way.

The MIT algorithm is smart enough to apply its knowledge to maps from other cities. (For example, proximity to an ocean suggests similar things in both Los Angeles and Atlantic City.) Aditya Khosla, a fourth-year computer science PhD student who worked on the MIT project, is aware of the extensive possibilities. He says that they simply need more data to improve accuracy and expand applications.
