On April 10 and 11, 2018, Facebook CEO Mark Zuckerberg was questioned for almost 10 hours by senators and representatives, first before a joint session of the Senate Judiciary and Commerce Committees and then before the House Energy and Commerce Committee, regarding the company’s privacy policies and its use of data collected on its social media website. The term Mr. Zuckerberg used most often in his responses was AI (artificial intelligence).
Many of the questions were understandably difficult to answer. For example, when will Facebook disallow and remove all postings that offer to sell opioids, contain hateful speech, or encourage terrorist acts? Mr. Zuckerberg acknowledged that developing and deploying AI to disallow and remove “bad” content is difficult, will take significant technical development, and will require a lot of time. An AI solution may be even more difficult than he thinks.
Illicit drug dealers may invent new abbreviations or pseudonyms for illegal drugs as fast as or faster than AI can spot them. It is easy for most humans to tell the difference between “I hate spinach” or “I hate this frigid weather” and a statement about hating a particular person or group of persons. A recent experience on Facebook made the challenges of AI very personal to me.
On February 9, 2018, I posted a story titled “Corrupt Congress” in my blog at johnpatrick.com.[i] I use a social media tool called Buffer to automatically share all of my posts on a number of social media sites including Facebook, Twitter, LinkedIn, Google+, Medium, and GotChosen. The story was about how the pharmaceutical industry lobby spends hundreds of millions of dollars to support American politicians who vote favorably on programs which protect the status quo of high drug prices. I wrote about this in considerable detail in Health Attitude: Unraveling and Solving the Complexities of Healthcare.[ii]
A couple of days later, I received a support message from Facebook on my iPhone. The message said, “We removed this post because it looks like spam to us. If you did post this and don’t believe it’s spam, you can let us know.” I responded by clicking on “This is not spam”. A couple of days later, I got another support message. It said, “Thanks for letting us know about this post. We’ll try to take another look to check if it goes against our Community Standards and send you a message here in your Support Inbox if we have an update.” A couple more days went by and then I got a final support message saying, “Thanks again for letting us know about this post. We took another look and found it doesn’t go against our Community Standards, so we’ve restored your post. We’re sorry for the trouble and appreciate you taking the time to get in touch with us so that we could correct this.”
Over the course of the following few weeks, I received more than a dozen support messages about additional posts, all of which Facebook had taken down. Each said, “We removed this post because it looks like spam to us.” The key word in the message is “us”. Who is “us”? It turned out “us” was an AI. The AI read the title of the “Corrupt Congress” post and apparently interpreted the word “corrupt” as a verb. I had written it as an adjective. In other words, I was describing Congress, but the AI thought I was calling on readers to corrupt it. One word, two uses. Perhaps if I had titled the story “A Corrupt Congress”, the AI would have seen the word “corrupt” as an adjective. This simple example illustrates why AI has a long way to go to equal the humans who looked at my post after I attested it was not spam.
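To see how a title like “Corrupt Congress” could trip a filter, here is a minimal sketch of a naive heuristic of the kind that could produce this behavior. It is purely illustrative and assumes a hypothetical blocklist and first-word rule; it is not Facebook’s actual classifier, whose design is not public.

```python
# Hypothetical sketch: a crude filter that treats a title beginning with a
# "suspicious" verb as an imperative command (e.g., "Corrupt Congress" read
# as an instruction to corrupt). The blocklist below is invented for
# illustration only.

SUSPICIOUS_VERBS = {"corrupt", "attack", "destroy"}

def looks_like_imperative_spam(title: str) -> bool:
    words = title.lower().split()
    # Flag only when the suspicious word is the very first token, where
    # English grammar permits an imperative reading of the sentence.
    return bool(words) and words[0] in SUSPICIOUS_VERBS

print(looks_like_imperative_spam("Corrupt Congress"))    # flagged: verb reading
print(looks_like_imperative_spam("A Corrupt Congress"))  # the article forces an adjective reading
```

Under this toy rule, adding the article “A” changes the outcome, which matches the intuition in the story: the same word can be a verb or an adjective, and a system that does not resolve that ambiguity will misfire.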
I opened a technical support ticket with Facebook to look into the problem further. They acknowledged the “corrupt” post was flagged by the AI, which, in turn, flagged me as a bad actor. They had put a block on my account which prevented me from uploading any stories, pictures, or videos. The support person reviewed my account, apologized on behalf of Facebook, and assured me I would never be blocked from uploading content in the future.
[ii] Health Attitude: Unraveling and Solving the Complexities of Healthcare (Palm Coast, FL: Attitude LLC, 2015).