Artificial intelligence is a growing trend in online publishing, with a future that generates more questions than answers.
Editors, journos: it’s time to start advocating for unconditional basic income. AI is coming for your jobs, and the tipping point could be nearer than most people expect.
Wikipedia’s new AI is part of what amounts to an automated frontline of editorial overwatch. The Objective Revision Evaluation Service (ORES) keeps an eye on new edits to existing articles in a bid to reduce trolling/spamming across the online encyclopedia’s pages. In theory, ORES can distinguish between unintentional errors and more malicious, intentional changes in the text.
The process isn’t wholly automated and serves more as an editorial tool, flagging articles for human review when something seems amiss. So while it’s an improvement over the existing framework, challenges remain. The Conversation nicely summarizes ORES and the hurdles it still must overcome:
“Understanding language might also mean being able to process text in ways humans do… This is what Wikipedia’s robo-editors are doing, classifying edits into the real and unreal, correct and incorrect, acceptable and unacceptable.
To do any of these tasks properly, an AI must learn how to assign meaning to symbols such as words and phrases. This is a very difficult task, not least because we’re not even sure how humans do it….”
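The flag-for-review loop described above can be sketched in a few lines. To be clear, this is an invented illustration: ORES’s real models are machine-learned, while the `damage_score` heuristics, the threshold, and the sample edits below are all made up to show the triage pattern, not Wikipedia’s actual scoring.

```python
# Toy sketch of the flag-for-human-review pattern. The features and
# threshold here are invented for illustration; ORES's real models are
# machine-learned classifiers, not hand-written rules like these.

def damage_score(old_text: str, new_text: str) -> float:
    """Crude heuristic: how suspicious does this edit look? (0.0-1.0)"""
    score = 0.0
    # Large deletions are a common vandalism signal.
    if len(new_text) < 0.5 * len(old_text):
        score += 0.5
    # Newly added all-caps "shouting" raises suspicion.
    added = set(new_text.split()) - set(old_text.split())
    if any(word.isupper() and len(word) > 3 for word in added):
        score += 0.3
    # Runs of exclamation marks raise it further.
    if "!!!" in new_text:
        score += 0.2
    return min(score, 1.0)

def triage(old_text: str, new_text: str, threshold: float = 0.5) -> str:
    """Flag the edit for a human editor rather than auto-reverting it."""
    if damage_score(old_text, new_text) >= threshold:
        return "needs human review"
    return "looks fine"

article = "The mitochondrion is the powerhouse of the cell."
good_edit = "The mitochondrion is often called the powerhouse of the cell."
bad_edit = "LOL!!!"

print(triage(article, good_edit))  # a small good-faith addition passes
print(triage(article, bad_edit))   # a blanking, shouting edit gets flagged
```

The key design point is the last step: the score doesn’t trigger an automatic revert, it only routes borderline edits to a person, which is exactly why the process described above counts as a tool rather than full automation.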
Based on that interpretation, it may be too soon to say that traditional writers and editors are truly threatened, but ORES remains a step in that direction. And AI’s difficulties aside, programs built on natural language processing are seeing more mainstream integration.
X.ai is one example of personalized artificial intelligence that helps with everyday life. It serves as a personal scheduling assistant, letting you coordinate meetings over email much as you would with a human assistant. Tell Amy (X.ai’s name for its assistant) your general preferences, and she’ll work to schedule meeting requests where and when it’s convenient for you. It’s another instance of humans needing to provide the knowledge framework, but the AI operates surprisingly well once it’s set on minimal rails.
But underneath this progress in language learning is an editorial arms race that pits AI against AI. An experiment by Dirk Hovy of the Center for Language Technology at the University of Copenhagen pitted software that detects bogus reviews against software that creates fake ones (credit to Reddit user leondz for posting the study).
The preliminary results showed that the detection software, given a large body of content as training data, could identify manufactured content with significant accuracy. But the review-generating software, given similar data, could write posts that the editorial AI then had difficulty flagging. In short, the two forces threaten to cancel each other out when they work from the same data.
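A toy model makes the shared-data problem concrete. Below, a detector and a generator are built from the same tiny corpus of “genuine” reviews (the corpus, the bigram detector, and the Markov generator are all invented for illustration; Hovy’s experiment used real review data and statistical classifiers, not this sketch). Because the generator only ever emits word pairs the detector has already seen, its fakes sail through, while a naive fake does not.

```python
import random
from collections import defaultdict

# Invented mini-corpus of "genuine" reviews.
corpus = [
    "great food and friendly staff",
    "friendly staff and great prices",
    "great prices and great food",
]

def bigrams(text):
    words = text.split()
    return list(zip(words, words[1:]))

# Detector: call a review plausible if every word bigram in it
# also appears somewhere in the genuine corpus.
seen = {bg for review in corpus for bg in bigrams(review)}

def looks_genuine(text):
    return all(bg in seen for bg in bigrams(text))

# Generator: a Markov chain built from the very same corpus.
chain = defaultdict(list)
for review in corpus:
    for a, b in bigrams(review):
        chain[a].append(b)

def generate(start="great", length=5, rng=random.Random(0)):
    words = [start]
    while len(words) < length and chain[words[-1]]:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

fake = generate()
print(looks_genuine(fake))                       # True: built from the same data
print(looks_genuine("terrible robot nonsense"))  # False: unseen bigrams
```

The punchline is structural, not statistical: any fake the generator can produce is, by construction, made entirely of patterns the detector treats as genuine, which is the same counteracting effect the study observed at scale.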
While humans remain the current (and fallible) arbiters of what’s truly spam, those days may be numbered. Who knows what Alphabet/Google has tucked away in its laboratories.
For added robot overlord-related entertainment: “27”