The first photo uploaded to the internet was a shot of a totally obscure band. Back in 1992, the inventor of what was then called the World Wide Web posted a pixelated shot of Les Horribles Cernettes, an all-female comedy band associated with the CERN laboratory in Geneva.

And so it began.

Since then, we’ve seen the invention of smartphones and the emergence of social networks. The internet is now packed with vacation snaps, eye-rolling gifs, photos of people’s kids, memes of cute dogs, over-posed selfies and more photos of people’s kids.

Deloitte estimated that more than 3.5 million photos would be shared every minute in 2016, or about 5 billion images a day. That’s a heck of a lot of information—way too much for human eyes to sift through. And only a fraction of it is tagged, captioned or hashtagged in a way that would allow traditional technology to extract useful information.

But there is a way to mine all that visual data: deep learning.


A form of artificial intelligence (AI) that uses algorithms to mimic some of the functionality of the human brain, deep learning enables software to “see” images and recognize what’s in them. Deep learning relies on artificial neural networks—mathematical functions layered together in a special way—that get smarter as they process more information.

To train a deep learning system, experts feed it large numbers of labeled images (say, pictures of cats and dogs). The network picks up on different features of the images, such as colors, curves and textures, and tries to identify the animal in each one. As it gets feedback on its answers, the network tweaks its internal rules for recognizing a cat or a dog. Over time, it gets better and better. Or, rather, it learns.
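That feedback loop can be boiled down to a toy sketch. The single "neuron" below learns to separate cats from dogs using two invented features (ear pointiness and snout length); real deep learning stacks millions of these units, but the tweak-on-feedback cycle is the same idea.

```python
import math

# Toy labeled data: [ear_pointiness, snout_length] -> 1 = cat, 0 = dog.
# Features and values are invented purely for illustration.
DATA = [([0.9, 0.2], 1), ([0.8, 0.3], 1), ([0.7, 0.1], 1),
        ([0.2, 0.9], 0), ([0.3, 0.8], 0), ([0.1, 0.7], 0)]

def predict(weights, bias, features):
    """Single 'neuron': weighted sum squashed to a score between 0 and 1."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

def train(epochs=200, lr=0.5):
    """Repeatedly guess, get feedback, and tweak the internal rules."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in DATA:
            error = predict(weights, bias, features) - label  # feedback
            for i, x in enumerate(features):
                weights[i] -= lr * error * x  # nudge each rule
            bias -= lr * error
    return weights, bias

weights, bias = train()
# After training, cat-like inputs score near 1 and dog-like inputs near 0.
print(predict(weights, bias, [0.85, 0.15]))  # high score: cat
print(predict(weights, bias, [0.15, 0.85]))  # low score: dog
```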

So what does that mean for business? Maybe not much when it comes to cats and dogs. But the visual data landscape is changing the way marketers serve image ads effectively, measure the value of sports sponsorships, pinpoint the right social media influencers for a brand, and more. Here are just four ways deep learning is positively impacting marketing in an increasingly visually driven world.

1. Factoring in Context

AI has been widely used for audience segmentation in advertising, but it is still a relative newcomer to contextualization. Deep learning can interpret the images that accompany content, then serve ads that relate directly to the keywords extracted from those images. Brands get to target an audience that has already shown interest in a related topic, while creating a polished and pleasant experience for the viewer.

Say someone is reading an article about the Chevy Bolt on a news site. AI technology can extract the make, model, color and brand of the car; combining that with information gleaned from surrounding text—ensuring that it’s a positive story, for instance—it can offer up a targeted ad for the Hyundai Ioniq. Now that’s driving results.
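A bare-bones version of that matching step might look like the sketch below. The ad inventory, keyword sets and sentiment labels are all hypothetical; a production system would use a vision model's actual tag output rather than hand-written sets.

```python
# Hypothetical ad inventory: each ad campaign targets a set of keywords.
ADS = {
    "hyundai-ioniq": {"electric", "car", "sedan"},
    "running-shoes": {"marathon", "running", "fitness"},
}

def pick_ad(image_tags, sentiment):
    """Serve the ad whose keywords best overlap the tags extracted from
    the page's image, but only alongside positive or neutral stories."""
    if sentiment == "negative":
        return None  # brand safety: skip negative contexts entirely
    best, best_overlap = None, 0
    for ad, keywords in ADS.items():
        overlap = len(keywords & image_tags)
        if overlap > best_overlap:
            best, best_overlap = ad, overlap
    return best

# Tags a vision model might extract from a photo of a Chevy Bolt.
print(pick_ad({"chevrolet", "bolt", "electric", "car"}, "positive"))
# -> hyundai-ioniq
```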

2. Smarter Sports Sponsorship Measurement

Television used to be the only game in town for professional sports. Now fan accounts on YouTube, Twitter, Facebook and other platforms share clips of games globally, in multiple languages, with audiences of every age.

These platforms are increasingly powerful—and difficult to keep tabs on, especially since much of the posting is taking place outside of official accounts.

Deep learning rises to the challenge. GumGum’s computer vision technology can process sports videos posted across major social media channels, deriving a complete and accurate picture of the value of sports sponsorships by looking beyond the size and quality of the audience. The engine not only locates logos but also tracks their location, size, visibility, share of voice and on-screen time, and calculates a specific dollar value accordingly.
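In spirit, that valuation boils down to weighting every second of logo exposure by how prominent the exposure was. The sketch below is not GumGum's actual formula; the detection records, field names and dollar rate are invented to show the shape of the calculation.

```python
# Illustrative records a logo-detection engine might emit per clip:
# (logo, seconds_on_screen, fraction_of_frame, clarity_score)
detections = [
    ("acme-bank", 12.0, 0.08, 0.9),
    ("acme-bank",  5.0, 0.03, 0.6),
    ("fizz-cola", 20.0, 0.15, 0.8),
]

BASE_RATE_PER_SECOND = 2.0  # hypothetical media value, USD

def sponsorship_value(records, sponsor):
    """Weight each second of exposure by how large and how clearly
    the sponsor's logo appeared on screen."""
    return sum(secs * size * clarity * BASE_RATE_PER_SECOND
               for logo, secs, size, clarity in records
               if logo == sponsor)

print(round(sponsorship_value(detections, "acme-bank"), 2))  # -> 1.91
```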

Last year, for instance, we analyzed the value of social media for sponsors of the US Open tennis semifinals and finals. We found that Instagram accounted for more than a third of that value, and non-owned accounts contributed nearly half of the total value. With other measuring systems, that insight would be out of bounds.

3. Capturing the Whole Social Picture

What’s a hashtag? Don’t ask the millions of social media users who post images without them, making it nearly impossible for brands to find their products in the tsunami of pictures.

Visual computing helps brands tap the value in those images, extracting data that may not appear anywhere in the post text. Our engines can process all of those images, scanning them to identify logos and products (a photo of a tall Pumpkin Spice Latte in a tweet that never mentions Starbucks, for instance) and then overlaying data about demographics, sentiment and affinity.

With that intelligence, brands can identify their top social media influencers, track the competition and identify viral content as soon as it begins to get traction. Suddenly, social strategies are smarter, better targeted and more cost-effective.
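One concrete use of that intelligence is influencer ranking. The sketch below assumes hypothetical post records where a vision engine has already detected brands in each image; it then ranks authors by total engagement on posts featuring the brand, hashtag or no hashtag.

```python
# Hypothetical post records: (author, brands_detected_in_image, engagement).
posts = [
    ("@latte_lover", {"starbucks"}, 1200),
    ("@coffee_dad",  {"starbucks"},  300),
    ("@latte_lover", {"starbucks"},  800),
    ("@tea_time",    set(),         5000),
]

def top_influencers(posts, brand, n=2):
    """Rank authors by total engagement on posts where the brand's
    logo or product was detected in the image itself."""
    totals = {}
    for author, brands, engagement in posts:
        if brand in brands:
            totals[author] = totals.get(author, 0) + engagement
    return sorted(totals, key=totals.get, reverse=True)[:n]

print(top_influencers(posts, "starbucks"))
# -> ['@latte_lover', '@coffee_dad']
```

Note that @tea_time, despite the biggest raw engagement, never appears: without a brand detection in the image, those numbers are noise for this brand.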

4. Maintaining Brand Safety

The difference between a nude photo and a bikini shot might just be a few square inches of fabric—but when it comes to brand safety, that can make a world of difference.

In recent years the online risks to advertisers have soared. Sadly, the internet is awash with racist memes, violent videos, depictions of drugs and alcohol, and sexually explicit images. It would be impossibly complex and time-consuming for human beings to check the appropriateness of every image a brand might be advertising against. But deep learning is sophisticated enough to distinguish between a swimsuit shot and an overly explicit photo, making sure everyone’s rear end is covered.
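The final gatekeeping step is simple once a classifier has scored an image: compare each risk score against a per-category threshold. The category names, scores and limits below are invented for illustration, not taken from any real safety product.

```python
# Hypothetical per-category risk limits; a classifier emits scores in [0, 1],
# where 0 means clearly safe and 1 means clearly unsafe.
THRESHOLDS = {"nudity": 0.8, "violence": 0.7, "drugs": 0.7}

def brand_safe(scores):
    """Allow ad placement only if every risk category stays under
    its threshold; any single breach blocks the page."""
    return all(scores.get(category, 0.0) < limit
               for category, limit in THRESHOLDS.items())

print(brand_safe({"nudity": 0.2, "violence": 0.1}))  # True: bikini shot
print(brand_safe({"nudity": 0.95}))                  # False: explicit image
```

Tuning those thresholds is where the business judgment lives: a family brand might lower every limit, while an energy-drink brand might tolerate more on-screen intensity.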

Conclusion

Visual data is a newcomer to the marketing toolbox, but it will play an increasingly crucial role in data science as we find more ways to put it to work, and as the internet fills with ever more images.

Of course, visual computing systems are only helpful if marketers use them wisely. Combining artificial intelligence with real know-how: that’s genius.

Vaibhav Puranik

Vaibhav Puranik has been working in the field of Big Data for more than 8 years. At GumGum, he oversees the Data Engineering and Data Science teams, among other responsibilities. Previously, he worked at various LA internet companies and at NASA's Johnson Space Center in Houston. Vaibhav has a master's degree in Computer Science.