Image recognition
Face identification and auto-tagging are only scratching the surface of Facebook’s machine learning capabilities. They’ve been using a dataset of 3.5 billion hashtagged Instagram photos to train their software to identify what’s in a photo, whether it’s a beach (#beachlife!) or a cat (#lolcats). This isn’t just for kicks: beyond tagging and categorizing your photos, the same technology can provide keywords that describe images to the visually impaired and check for inappropriate or offensive content (even if it does get a bit overzealous sometimes). They’re even building a tool to estimate human poses, which could become a powerful way to guess at user mood and behavior, and that could get a bit creepy. But we’ve gotten used to a lot.
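If you’re curious what that kind of image classification looks like in code, here’s a minimal sketch using a pretrained model from the open-source torchvision library. It’s a generic illustration, not Facebook’s Instagram-scale pipeline, and the photo filename is just a placeholder.

```python
# A minimal sketch of image classification, not Facebook's actual system:
# run a photo through a pretrained convolutional network and print the
# most likely classes. Downloads the pretrained weights on first run.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

image = Image.open("photo.jpg").convert("RGB")    # e.g. a beach or cat photo
batch = preprocess(image).unsqueeze(0)            # add a batch dimension

with torch.no_grad():
    probs = torch.nn.functional.softmax(model(batch)[0], dim=0)

top5 = torch.topk(probs, 5)
for prob, idx in zip(top5.values, top5.indices):
    print(f"class {idx.item()}: {prob.item():.3f}")
```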
Recommendations and sorting
Facebook recommends friends, sure, but the suggestions don’t stop there. It also recommends timeline posts, news, events, groups, pages, products, and more. Most of the content you see on your page shows up because a machine-learning algorithm decided you would like it and prioritized it for you. This can get fairly political, though, as a few upsets over fake news, filter bubbles, and general bias around major elections have shown.
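To make that concrete, here’s a toy version of feed ranking. It’s nowhere near what Facebook actually runs, and the features and weights are made up, but the basic idea is the same: score every candidate post and show the highest-scoring ones first.

```python
# A minimal sketch of feed ranking, not Facebook's actual algorithm: each
# candidate post gets a relevance score from hand-picked features, and the
# feed shows the highest-scoring posts first. Features and weights here
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often you interact with this author (0-1)
    predicted_like: float    # model's estimate that you'll like it (0-1)
    age_hours: float         # how old the post is

WEIGHTS = {"affinity": 2.0, "like": 3.0, "recency": 1.0}

def score(post: Post) -> float:
    recency = 1.0 / (1.0 + post.age_hours)           # newer posts score higher
    return (WEIGHTS["affinity"] * post.author_affinity
            + WEIGHTS["like"] * post.predicted_like
            + WEIGHTS["recency"] * recency)

candidates = [
    Post(author_affinity=0.9, predicted_like=0.4, age_hours=12),
    Post(author_affinity=0.2, predicted_like=0.8, age_hours=1),
    Post(author_affinity=0.5, predicted_like=0.5, age_hours=48),
]

feed = sorted(candidates, key=score, reverse=True)   # what you'd actually see
for post in feed:
    print(round(score(post), 3), post)
```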
Content moderation
Though these systems are still very much works in progress, recent events have prompted Facebook to push harder for strong content filtering that can identify fake news and hate speech. Their systems keep an eye out for links or text that might be spreading false or radical information and remove them. Reportedly, these algorithms have been most successful at finding and deleting terrorist propaganda and recruitment content, catching over 99% of it.
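As a rough illustration of automated flagging, here’s a tiny text classifier built with the open-source scikit-learn library. The training examples are made up and the model is far cruder than anything in production, but it shows the basic pattern: learn from labelled examples, then route suspicious posts to human reviewers.

```python
# A minimal sketch of automated content flagging, far simpler than
# Facebook's production systems: a TF-IDF + Naive Bayes classifier trained
# on a tiny, made-up labelled set, used to flag new posts for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = should be flagged, 0 = fine.
posts = [
    "join our cause and take up arms against them",
    "they do not deserve to live among us",
    "had a great day at the beach with friends",
    "check out the cake I baked this weekend",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(posts, labels)

new_posts = ["look at my new puppy", "rise up and strike them down"]
for post, flagged in zip(new_posts, classifier.predict(new_posts)):
    action = "send to human moderators" if flagged else "leave alone"
    print(f"{post!r} -> {action}")
```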
Language
Modern AI is getting pretty good at figuring out what humans are saying. The next step is figuring out how they’re saying it. Having acquired Wit.ai, a natural-language processing startup, Facebook is looking to discern context and meaning more accurately, which helps in the fight against things like fake news and hate speech. They’re also working on interacting with users in different languages and improving translations. Among other applications, Facebook uses AI to detect when someone posts suicidal thoughts, contacting their friends and first responders when necessary. By their own account, this has already begun to save lives, and it shows how powerful AI can be in a setting where it has access to human psychological data.
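Here’s a taste of what off-the-shelf language understanding looks like today, using the open-source Hugging Face transformers library rather than anything Facebook-specific. The example texts are invented, and tone classification like this is a much coarser tool than the systems described above.

```python
# A minimal sketch of language understanding with open-source tooling,
# not Facebook's internal systems: classify the tone of short posts and
# translate a sentence. Downloads default pretrained models on first run.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
translate = pipeline("translation_en_to_fr")

posts = [
    "I've had a really rough week and I feel completely alone.",
    "Best concert of my life, still buzzing!",
]
for post in posts:
    result = sentiment(post)[0]
    print(post, "->", result["label"], round(result["score"], 3))

sentence = "Machine translation helps people read posts in other languages."
print(translate(sentence)[0]["translation_text"])
```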
Playing games
Games are a great way to test AIs: drop them into an artificial situation and see how they fare against other computers or humans, and how well they can actually learn. Facebook has ELF OpenGo, which is similar to DeepMind’s AlphaGo Zero, as well as the broader ELF project, which provides a platform for AI game research. They’ve even developed a platform to help conduct research on AIs playing StarCraft.
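If you strip away the scale, game-playing research boils down to a loop like the one below: agents pick moves, the environment applies the rules, and you tally results. This sketch uses tic-tac-toe and purely random “agents”; real projects like ELF OpenGo replace the random policy with a learned one and the toy game with Go or StarCraft.

```python
# A minimal sketch of self-play evaluation, nothing like the scale of ELF
# OpenGo: two random-move "agents" play tic-tac-toe against each other and
# we tally the results over many games.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_game():
    board = [None] * 9
    player = "X"
    while True:
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return "draw"
        board[random.choice(moves)] = player   # a learned agent would choose here
        if winner(board):
            return player
        player = "O" if player == "X" else "X"

results = {"X": 0, "O": 0, "draw": 0}
for _ in range(10_000):
    results[play_game()] += 1
print(results)   # even random play shows a first-mover advantage
```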
Research and development
Facebook’s primary AI site doesn’t present you with a bunch of flashy marketing materials about the future. There’s a lot of pretty serious stuff going on, though, as you might infer from the number of projects and teams. They’ve developed tools like PyTorch and (with Microsoft) ONNX, which are open-source contributions to AI research in general. They’ve also joined most of the other major AI companies in the Partnership on AI, with the goal of developing AI responsibly and using it to benefit society.
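Those two tools are easy to see working together. Here’s a minimal sketch that defines a tiny, made-up model in PyTorch and exports it to the ONNX format so other frameworks and runtimes can load it.

```python
# A minimal sketch showing the two open-source tools named above working
# together: define a tiny PyTorch model and export it to ONNX.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(4, 8),
            nn.ReLU(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.layers(x)

model = TinyNet()
model.eval()

dummy_input = torch.randn(1, 4)   # example input used to trace the model
torch.onnx.export(model, dummy_input, "tiny_net.onnx",
                  input_names=["features"], output_names=["scores"])
print("exported tiny_net.onnx")
```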
So what is Facebook going to do with all this power?
Your Facebook experience has undoubtedly been improved by AI, and chances are it will come to improve other parts of your life as well, given that a lot of Facebook’s research is open and can be used by other researchers and developers. But the company does have a tendency to push too far too fast, and AI is another avenue where that could go wrong. If it feels a bit like a sci-fi dystopia to have robots checking up on your behavior, monitoring your psychology, and moderating your interactions, you’re not wrong. Targeted ads already use guesses about you to sell you things, but what if Facebook starts using machine learning to figure out how to manipulate your mood before showing you an ad? Perhaps a series of posts, color schemes, or subtle nudges to stimulate hunger right before suggesting a pizza delivery? Waiting until your mood is empathetic to promote a charity? It may be a more real and imminent concern than you think.
So Facebook is going to become Skynet?
Well, there’s already a company named Skynet, so Facebook would have to acquire them first. But then maybe they will take over the human race, not with killer robots but with gentle nudges. More likely, though, we’ll get some amazingly beneficial things out of Facebook’s AI (I still think social media, on balance, has done more good than harm), as well as some things that make us even madder than Cambridge Analytica did. Even AI can’t predict the future (yet), so we’ll just have to see which timeline we end up in.

Image credit: Game of Go in our club.