
Robot Reporters—Mightier Than The Pen?

This article was not written by a robot. While this statement might sound strange, it may well become the norm for human-written articles in the near future. Thanks to companies like Automated Insights, Narrative Science, and Yseop, Artificial Intelligence is expanding its territory with yet another phenomenal application: journalism.

The term Artificial Intelligence was born at the 1956 Dartmouth Conference, as a field of study under the presumption that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”. The decades that followed shaped this field into the embodiment of science fiction we know now.

Within the term itself, the word “intelligence” is of particular interest. The leap from machines that understood only binary digits to machines that can string together whole reports is nothing short of remarkable. One of the major game-changers in this regard has been Natural Language Processing (NLP). A field of AI that began with simple language translation, NLP now powers software for speech recognition, natural language understanding, and natural language generation.

A simple overview of the process of report generation using an algorithm

Automated journalism takes things a step further. Reaching beyond recognition and interpretation, it also aims to present data in human-readable form, an application of Natural Language Generation (NLG). An NLG algorithm produces coherent, understandable text in three core stages: Content Determination, which decides which facts in the given data meet the report's requirements; Sentence Planning, which deals with the semantics; and Surface Realisation, which handles the specifics, such as appropriate word choice and sentence ordering, and the final presentation. Most NLG systems today also employ machine learning, where the system learns by analysing example datasets in addition to following explicit rules. A minimal, rule-based sketch of these three stages appears below.
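The following Python sketch is purely illustrative: it shows the three stages as plain functions over a made-up sports result, and every field name, threshold, and template in it is an assumption for the example, not taken from any vendor's system.

```python
# A minimal, rule-based sketch of the three NLG stages described above.
# All field names, thresholds, and templates are illustrative only.

def determine_content(data: dict) -> dict:
    """Content determination: pick which facts are worth reporting."""
    facts = {"team": data["team"], "opponent": data["opponent"], "score": data["score"]}
    if abs(data["score"][0] - data["score"][1]) >= 3:
        facts["margin"] = "comfortably"
    return facts

def plan_sentences(facts: dict) -> list:
    """Sentence planning: decide which sentences to produce and in what order."""
    plan = [("result", facts)]
    if "margin" in facts:
        plan.append(("margin", facts))
    return plan

def realise(plan: list) -> str:
    """Surface realisation: choose the actual words and produce the final text."""
    templates = {
        "result": "{team} beat {opponent} {score[0]}-{score[1]}.",
        "margin": "They won {margin}.",
    }
    return " ".join(templates[kind].format(**facts) for kind, facts in plan)

match_data = {"team": "City", "opponent": "Rovers", "score": (4, 1)}
print(realise(plan_sentences(determine_content(match_data))))
# -> "City beat Rovers 4-1. They won comfortably."
```

Production systems replace these hand-written rules with far richer templates and, increasingly, learned models, but the stage-by-stage flow is the same.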

To an end user, these systems have a simple formula: data in, article out. Wordsmith, a software product from Automated Insights, automatically generates an article from user-uploaded data (or data pulled through an API) and a story structure of the user's choice. Quakebot, an algorithm used by the Los Angeles Times, produces a story every time the U.S. Geological Survey's Earthquake Notification Service releases an earthquake alert.
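To make the "data in, article out" pattern concrete, here is a hypothetical sketch of an alert-triggered report, loosely in the spirit of bots like Quakebot. The alert fields and the template are invented for illustration and do not reflect the actual USGS feed or the Los Angeles Times' code.

```python
# Hypothetical illustration of alert-triggered story generation.
# The alert structure and template below are assumptions made for this example.

TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance} miles from {place} "
    "at {time}, according to a preliminary alert. "
    "This report was generated automatically from the alert data."
)

def alert_to_story(alert: dict) -> str:
    """Fill the story template with the fields of an incoming alert."""
    return TEMPLATE.format(**alert)

sample_alert = {
    "magnitude": 4.7,
    "distance": 6,
    "place": "Westwood, California",
    "time": "6:25 a.m. Pacific time",
}
print(alert_to_story(sample_alert))
```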

Picture credits: Cognilytica

The advantages of NLG are immediately evident. Robot reporters can gather and publish information far faster than their human counterparts. This was famously demonstrated in 2014, when Quakebot published a story about a California earthquake on the Los Angeles Times website within three minutes of the event.

Automated Insights' Wordsmith, a prominent natural language generator, produces 150 to 300-word articles in the time it takes human journalists just to gather the information. Such efficiency is a godsend for an industry built around the fast-moving process of news generation. Moreover, it is much cheaper to employ one algorithm than the many journalists it replaces. Cost efficiency improves in multiple ways, since data collection is also cheaper when automated. The data-centric nature of these algorithms simplifies strenuous tasks such as analytics and earnings reports as well.

Despite these benefits, it is hard to ignore the limitations and risks these robot reporters bring. One such ethical issue is transparency. Companies like Reuters employ both algorithms and human reporters, and a reader has no way of telling whether a human or a machine produced the stories in their news feed. Moreover, flaws, bias, and miscommunicated data are easier to pinpoint, and therefore to rectify, in a human-written article. If the algorithm itself is biased, however, that bias is nearly impossible to prove. With digital footprints already being manipulated to shape feeds and suggestions, it would be quite easy to introduce similar manipulation into these algorithms. The suspected role of Cambridge Analytica in influencing the 2016 US elections shows that using selective reporting to sway the public is not implausible.

Looking at the issue from another angle, the question of individuality arises. However ‘intelligent’ these algorithms become, they still cannot reproduce the subtle nuances of a human hand. The ability to add a personal touch is a powerful human trait which is unlikely to ever appear in a purely logic-based robot reporter.

Further, job security will undoubtedly be at risk as employers look to AI as a replacement for human labour. Replacing journalists is currently being framed as an advantage, with some arguing that it frees journalists from routine reporting and gives them more time for complex work. With routine reports already being produced efficiently, this could drive growth across all facets of journalism. At the same time, however, it is highly likely that only a select few will be retained while the rest are left scrounging for jobs.

A comparison of articles written by journalists and software (Picture credits: Thomas Baekdal)

A minor but interesting concern is the question of authorship. Should the programmer claim authorship over the piece, or should it be the media body? On the one hand, it does not seem fair to give credit to a programmer who has already been paid and has no hand in the news production itself (does a photographer ever credit the camera company?). On the other hand, the media body does nothing beyond telling the algorithm the context of the report.

At present, legal and ethical concerns over acquiring the data needed to train these algorithms hinder AI's growth in this industry. In the midst of all this, however, to the delight of the enthusiastic and the chagrin of the dubious, AI continues its rampage. It has now entered nearly every field imaginable, including music composition and psychology, further reinforcing that the future of AI can be debated and speculated upon, but never foretold.

For now, however, a human hand signs this off.

Featured Image: Marek Minor on Medium
