More than just pictures
When you think of imagery and Google Maps, you probably think of the Street View cars and trekkers that collect billions of images from all around the world. Today, we’ve captured more than 10 million miles of Street View imagery–a distance that could circle the globe more than 400 times!
Or your thoughts may jump to Google Earth, our platform that lets you browse more than 36 million square miles of high-definition satellite images from various providers–covering the areas where more than 98% of the world's population lives–to see the world from above. While these stunning photos show us parts of the world we may never get a chance to visit, they also help Google Maps accurately model a world that is changing each day.
How we collect imagery: cars, trekkers, flocks of sheep and laser beams
Gathering imagery is no small task. It can take anywhere from days to weeks, and requires a fleet of Street View cars, each equipped with nine cameras that capture high-definition imagery from every vantage point possible. These cameras are athermal, meaning that they’re designed to handle extreme temperatures without changing focus so they can function in a range of environments–from Death Valley during the peak of the summer to the snowy mountains of Nepal in the winter. Each Street View car includes its own photo processing center and lidar sensors that use laser beams to accurately measure distance.
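The distance measurement that lidar performs rests on a simple time-of-flight principle: a laser pulse travels to a surface and back, and the round-trip time tells you how far away the surface is. Here's a minimal sketch of that calculation (illustrative only–not Google's implementation):

```python
# Time-of-flight distance estimation, the principle lidar relies on.
# A laser pulse travels out to a surface and reflects back; halving the
# round-trip travel time (times the speed of light) gives the distance.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a surface given the laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds traveled to a surface
# about 10 meters away and back.
print(round(distance_from_round_trip(66.7e-9), 2))
```

Real lidar units fire many thousands of such pulses per second, sweeping the laser to build a dense 3D point cloud of the surroundings.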
How we process imagery: a vintage technique made new
Once we’ve collected photos, we use a technique called photogrammetry to align and stitch the individual photos together into a single, unified set of images. These images show us critically important details about an area–things like roads, lane markings, buildings and rivers, along with the precise distance between each of these objects. All of this information is gathered without ever needing to set foot in the location itself.
Photogrammetry is not new. While it originated in the early 1900s, Google’s approach is unique in that it utilizes billions of images, similar to putting a giant jigsaw puzzle together that spans the entire globe. By refining our photogrammetry technique over the last 10 years, we’re now able to align imagery from multiple sources–Street View, aerial, and satellite imagery, along with authoritative datasets–with accuracy down to the meter.
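At the heart of that jigsaw-puzzle alignment is a simple idea: find features that appear in two overlapping images, then solve for the transformation that best lines them up. The toy sketch below estimates just a least-squares 2D translation between matched points (real photogrammetry pipelines solve for full camera poses and 3D structure; the coordinates here are made up for illustration):

```python
# Toy version of the alignment step in photogrammetry: given feature
# points matched between two overlapping images, find the translation
# that best maps one image's points onto the other's (least squares).

def estimate_translation(points_a, points_b):
    """Least-squares 2D translation mapping points_a onto points_b."""
    n = len(points_a)
    dx = sum(bx - ax for (ax, _), (bx, _) in zip(points_a, points_b)) / n
    dy = sum(by - ay for (_, ay), (_, by) in zip(points_a, points_b)) / n
    return dx, dy

# Matched features from two overlapping photos (illustrative coordinates;
# image_b is shifted roughly 100 pixels to the right, with a little noise).
image_a = [(10.0, 20.0), (30.0, 40.0), (50.0, 25.0)]
image_b = [(110.2, 19.8), (129.9, 40.1), (150.1, 25.0)]
print(estimate_translation(image_a, image_b))
```

Averaging over many matched points is what lets noisy individual measurements cancel out–the same reason combining billions of images across sources can reach meter-level accuracy.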
How Google Maps uses imagery: (hint: it’s everywhere)
Photos are great, but how are they useful for someone using Google Maps? Well, imagery is woven into every product that Maps provides.
Live View, for example, is a tool that uses augmented reality to show you which way to walk, with large arrows and directions overlaid on top of walking navigation. For Live View to work, Google Maps needs to know two things: where your phone is located, and how that location is oriented relative to the rest of your surroundings. Live View requires orientation precision down to just a few degrees, which simply isn’t possible using traditional tools like GPS signals. Being off by a few degrees is fine when you’re driving, but that same discrepancy can point you in the entirely wrong direction when you’re traveling on foot!
This is where imagery comes in. To determine your location as precisely as possible, Live View uses a new technology invented at Google called global localization, which matches tens of billions of Street View images against what your phone’s camera sees to pinpoint where you are and which way you should go–all in under half a second!
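The matching idea can be sketched as a nearest-neighbor search: summarize what the camera sees as a compact descriptor, then find the closest descriptor in a database built from previously collected imagery. Everything below is a toy illustration under that assumption–the descriptors, place names, and tiny database are invented, and the production system is vastly more sophisticated:

```python
# Toy sketch of visual localization as nearest-neighbor matching:
# compare a descriptor of the phone's camera view against descriptors
# of previously mapped locations and return the closest match.
import math

def nearest_match(query, database):
    """Return the database key whose descriptor is closest to the query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda key: dist(query, database[key]))

# Hypothetical descriptors for a few mapped street corners.
street_view_db = {
    "5th & Main": [0.9, 0.1, 0.3],
    "Oak & 2nd":  [0.2, 0.8, 0.5],
    "Pine & 3rd": [0.4, 0.4, 0.9],
}
phone_view = [0.85, 0.15, 0.35]  # what the camera "sees" right now
print(nearest_match(phone_view, street_view_db))  # prints "5th & Main"
```

Doing this against tens of billions of real images in under half a second is what makes the problem hard–and why specialized indexing and retrieval techniques are needed rather than the brute-force scan shown here.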
What’s next
The idea of Street View started as a side project more than 12 years ago as part of a lofty goal to map the entire world. Since then, Street View combined with satellite and aerial imagery has become the foundation of our entire mapmaking process and the reason why we can build useful products that people turn to every single day. Mapmaking is never done–and we’re constantly working to build new tools and techniques to make imagery collection faster, more accurate and safer for everyone.
Join us for our next deep dive in the series to learn more about how we work to create a more useful, up-to-date map.