The Benefits of a Representation-Driven Approach to Data Storytelling


Machine learning (ML) models are getting faster, smarter, and more sophisticated every day. But when it comes to Natural Language Generation (NLG) for data storytelling, ML models still miss the mark. Representation-driven approaches to NLG, such as the approach taken here at Narrative Science, leverage decades of existing linguistics research. This allows them to create language that sounds more natural, is more reliable, and is easier to fix when things go wrong. 

Key Advantages of an Algorithmic Approach to Language

This summer I have been working as an engineering intern at Narrative Science, getting to know Lexio, and both my linguistics-major self and my computer-science-major self have been impressed with what’s under the hood. While we use ML to complete some of our analysis, Lexio takes a representation-driven approach to generating language. It’s not just a black-box neural net trained to generate language given some set of data. The system understands the relationships between different parts of the input dataset (e.g., knowing that salespeople make sales), structures the relevant data based on this knowledge, and uses well-researched linguistic theory to turn the structured data into easy-to-read text. This representation-driven model offers three key advantages over ML-based approaches:

Reliable Output: Lexio’s approach guarantees the accuracy and grammaticality of the language it writes

Faster Improvement: Representation-driven models are interpretable, so engineers can iterate on them quickly and bring value to customers sooner

Easier Integration: Because we know exactly what the model is doing at all stages, other state-of-the-art tools, including ML-based tools, can be integrated with ease to produce new features out-of-the-box

Now let’s get into exactly how Lexio’s representation-driven model can offer you these advantages.

Interpretable Models: The Key to Good Language

While we don’t know everything there is to know about language, linguists have mapped out the underlying structures that make most everyday speech sound right or wrong. Modern ML-based NLG systems are also learning to generate good language on their own, but because of how these models are designed, it can be difficult to investigate exactly how they generate language. This can lead to strange, inexplicable outputs that take too long to debug and reduce the value of the system to users.

Engineers, on the other hand, can build models that already know which structures are acceptable, which are not, and how to fill in the blanks in those structures with relevant data. These underlying linguistic structures are the basis of representation-driven models and guarantee quality content. At Narrative Science, all language written by our NLG engine follows a representation-driven approach. Any sentence that Lexio writes is based on a linguistic structure that has been specifically chosen to best communicate the story told by the data in that sentence. The sentence is guaranteed to be grammatical and optimized for relevance and readability given the context in which it is written. Even the most remarkable state-of-the-art ML models produce text that is usually grammatical, yet they still struggle to make data make sense in context.
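To make this concrete, here is a minimal sketch of what structure selection and slot filling could look like. The frames, scoring rule, and field names are invented for illustration; they are not Lexio’s actual internal representation.

```python
# Minimal sketch of a representation-driven generation step (illustrative only).
# Each frame pairs a human-vetted, known-grammatical sentence structure with
# the data it needs in order to be used.

SALES_TEMPLATES = [
    {
        "frame": "{rep} closed {count} deals worth {amount} in {period}.",
        "needs": {"rep", "count", "amount", "period"},
    },
    {
        "frame": "Sales in {period} totaled {amount}.",
        "needs": {"amount", "period"},
    },
]


def choose_template(facts: dict) -> dict:
    """Pick the most specific frame whose slots the available data can fill."""
    candidates = [t for t in SALES_TEMPLATES if t["needs"] <= set(facts)]
    if not candidates:
        raise ValueError("No structure matches the available data")
    # Prefer the frame that uses the most of the structured data.
    return max(candidates, key=lambda t: len(t["needs"]))


def realize(facts: dict) -> str:
    """Fill the chosen frame's slots with the structured data."""
    return choose_template(facts)["frame"].format(**facts)


print(realize({"rep": "Ana", "count": 4, "amount": "$32k", "period": "June"}))
# -> Ana closed 4 deals worth $32k in June.
```

Because every frame in the list was written and vetted by a person, any sentence the system produces is grammatical by construction, and an engineer can trace exactly which frame produced it.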

Representation-driven models are also what engineers call interpretable. Interpretable models allow humans to look inside them, learn what they are doing, and understand how they arrived at the output for a given input. When building new functionality or searching for the cause of a bug, engineers can see not only why the output is wrong but also what needs to be fixed, which makes delivering solutions and new functionality that much faster. Lexio’s model is interpretable because we know the sentence structures that are appropriate for a given topic, choose the best one at any given moment, and can edit that list of possibilities as needed. When we add new features, engineers can create new templates and add them to the system with ease, rather than retraining a large, complex ML model. For example, during our Summer Hackathon I gave Lexio the ability to talk about hypothetical situations, so it can now say things like “If average cycle time were 5 days, bookings for the month would be expected to be $10k.” This type of language had never appeared in Lexio before, and I added it only 6 weeks into my internship, in under 18 hours of work spread across 3 days.
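In that spirit, the shape of such a change might look something like the sketch below: one more human-vetted frame plus the code that fills it in, with no retraining step. The frame wording, function name, and field names are assumptions for illustration, not the actual Hackathon code.

```python
# Sketch of adding a "hypothetical situation" capability as one new frame.
# The wording and field names are illustrative, not Lexio's implementation.

HYPOTHETICAL_FRAME = (
    "If average cycle time were {cycle_time}, bookings for the month "
    "would be expected to be {projected_bookings}."
)


def realize_hypothetical(cycle_time: str, projected_bookings: str) -> str:
    """Fill the hypothetical frame with projections computed upstream."""
    return HYPOTHETICAL_FRAME.format(
        cycle_time=cycle_time, projected_bookings=projected_bookings
    )


print(realize_hypothetical("5 days", "$10k"))
# -> If average cycle time were 5 days, bookings for the month
#    would be expected to be $10k.
```

The analysis that computes the projection happens separately; the new frame only controls how the result is expressed in language.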

Integrating State-of-the-Art ML

Now don’t get me wrong: ML certainly has its place in any good NLG company’s toolbelt; the trick is learning when to use it. OpenAI recently released GPT-3, a powerful new language model that performs well at a number of language tasks, including translation, question answering, summarization, and generation. It isn’t the leader in any single one of these, but having one model that can do them all relatively well is an impressive feat. So if it’s so good, why don’t we toss out everything else and just use this model? For the same reason you don’t use a chainsaw to cut steak: it’s an amazing tool, but not well suited to the task at hand.

GPT-3 does really well with text-to-text tasks, like translation. Text goes in, other text comes out. But that’s not what Lexio is for. Lexio is a data-to-text system: we take numerical data and write about it so anyone can easily understand what it means. The two types of tasks require different types of models, and when it comes to structuring information to write about it, an algorithmic approach using linguistic knowledge is best.

So where could ML models be used in a system like Lexio? There are a number of possibilities, but one exciting application is using GPT-3 to deliver “last-mile” features, like translating text into other languages or adapting language to a certain reading level. With an advanced translation model, there would be no need to program the syntactic rules of other languages into Lexio; we could take our well-curated text and translate or transform it using GPT-3’s expertise.
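As a rough illustration, a last-mile translation pass might look like the sketch below, which sends already-generated English text to a GPT-3 completion endpoint. The call uses the completion-style OpenAI Python API from the GPT-3 era; the engine choice, prompt wording, and wrapper function are assumptions, not a shipped Narrative Science integration.

```python
import openai  # assumes the OpenAI Python client is installed and configured

openai.api_key = "YOUR_API_KEY"  # placeholder


def translate_story(text: str, target_language: str = "Spanish") -> str:
    """Translate already-generated narrative text as a last-mile step.

    The upstream system stays responsible for deciding what to say and
    guaranteeing the English source is accurate; the language model only
    transforms the surface form.
    """
    prompt = (
        f"Translate the following business summary into {target_language}:\n\n"
        f"{text}\n\nTranslation:"
    )
    response = openai.Completion.create(
        engine="davinci",   # GPT-3-era completion engine (assumed choice)
        prompt=prompt,
        max_tokens=200,
        temperature=0.2,    # keep the translation close to the source text
    )
    return response.choices[0].text.strip()


print(translate_story("Bookings for the month were $10k."))
```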

Looking to the Future

This is an amazing era to be both a software engineer and a linguist. The way that computer scientists have been able to take the uniquely human ability to speak a language and teach it to machines is astounding. But my hope is that ML doesn’t reinvent the wheel when it comes to language. Rather, I hope we can stand on the shoulders of linguists to create better, faster, and more reliable NLG platforms, just as we have done here at Narrative Science. 
