From manually curated classes to data-classes
The most common use of a class in Python is acting as a data container. Throughout the development of our backend libraries at Narrative Science, we’ve seen several generations of code that attempts to make modeling this data easy.
Finding a simple-to-use abstraction for these types of classes lets us focus less on writing mundane classes and more on our business logic.
In this post we will outline the evolution of this code and the pros and cons of each approach.
Generation 1: Basic classes
We have all been here: you write a class and implement all the dunder methods it needs. Simple and straightforward.
- This is very explicit, you can see everything this class does
- Type Checking
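The post's original listing is not preserved here; a minimal sketch of what such a hand-written class might look like (the `User` name and its fields are illustrative, not from the original):

```python
class User:
    """A hand-written data container with manual, defensive type checking."""

    def __init__(self, name, age, email):
        # Every argument needs its own defensive type check
        if not isinstance(name, str):
            raise TypeError("name must be a str")
        if not isinstance(age, int):
            raise TypeError("age must be an int")
        if not isinstance(email, str):
            raise TypeError("email must be a str")
        self.name = name
        self.age = age
        self.email = email

    def __eq__(self, other):
        if not isinstance(other, User):
            return NotImplemented
        return (self.name, self.age, self.email) == (
            other.name, other.age, other.email)

    def __repr__(self):
        return f"User(name={self.name!r}, age={self.age!r}, email={self.email!r})"
```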
Notice all the defensive code needed to handle the cases when we receive arguments with incorrect types. Now imagine this example accepted 20 properties.
More than likely, the author of this code wouldn't even bother to write this kind of type checking, since it's tedious and time-consuming. However, not doing so can create complex error conditions in downstream code.
Very commonly, these types of objects directly represent data from the database.
As it is currently written, you would need another translation layer outside the object to do your queries and create the objects as needed.
You will need to write tests to make sure this code correctly rejects invalid inputs.
In order to create an instance of this object, you need to pass every value into the constructor:
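With a stub standing in for the missing Generation 1 listing (names are illustrative), that looks like:

```python
class User:
    """Stub standing in for the hand-written Generation 1 class."""
    def __init__(self, name, age, email):
        self.name, self.age, self.email = name, age, email

# Every value must be passed to the constructor explicitly
user = User(name="Ada", age=36, email="ada@example.com")
```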
It’s worth noting that it is possible to do some fancy Python magic to create this object out of a dictionary:
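The "magic" here is keyword-argument unpacking (sketched with the same illustrative stub class):

```python
class User:
    """Stub standing in for the hand-written Generation 1 class."""
    def __init__(self, name, age, email):
        self.name, self.age, self.email = name, age, email

data = {"name": "Ada", "age": 36, "email": "ada@example.com"}
user = User(**data)  # ** unpacks the dict into keyword arguments
```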
But what if we want to dump the properties of this object back to JSON so we can send it over HTTP? You would have to write a method that takes every property and populates a dictionary.
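Something along these lines (again a sketch with illustrative names, not the original listing):

```python
import json

class User:
    def __init__(self, name, age, email):
        self.name, self.age, self.email = name, age, email

    def to_dict(self):
        # Every property has to be enumerated by hand
        return {"name": self.name, "age": self.age, "email": self.email}

payload = json.dumps(User("Ada", 36, "ada@example.com").to_dict())
```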
Generation 2: Database linked classes
The idea here is: what if we created objects that were aware of the database?
This essentially creates a class that is now able to push and pull its data from a database and functions like a normal object inside of Python.
The way this DatabaseObject class works is that the properties for the class are determined by the SQLAlchemy table.
DatabaseObject also implements a handful of convenient methods to help get data in and out of the database, such as delete. Each performs the appropriate SQL query.
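The original DatabaseObject listing is not included here; a rough pure-Python sketch of the idea (in the real version, `columns` would come from a SQLAlchemy Table, and the accessor methods would execute real queries):

```python
class DatabaseObject:
    """Sketch of a database-aware base class. In the real implementation,
    the property names are derived from a SQLAlchemy Table object."""
    columns = ()
    table_name = None

    def __init__(self, **kwargs):
        # The class's properties are determined by the table's columns
        for name in self.columns:
            setattr(self, name, kwargs.get(name))

    def delete(self):
        # The real method would execute the appropriate DELETE query;
        # here we only return the SQL we would run, for illustration.
        return f"DELETE FROM {self.table_name} WHERE id = {self.id}"

class User(DatabaseObject):
    table_name = "users"
    columns = ("id", "name")

u = User(id=1, name="Ada")
```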
- We have reduced the amount of tedium from boilerplate code.
- Users have a database aware object without needing to write any SQL of their own.
- We still have the issue of having to make sure our types are correct when accepting them as parameters.
- We are unable to set default values for properties, so we must always provide every property, which is tedious. This is because we’re using SQLAlchemy as the source of the property names and don’t have a mechanism for passing other values in.
- What if someone inputs a string longer than 45 characters? We need to write code to check for that.
- There is an assumption that we must know the type in Python that corresponds to the SQLAlchemy type.
Alongside the tests needed to ensure the DatabaseObject accessor methods work correctly, we still need tests that ensure these classes reject invalid inputs rather than just throwing an exception from SQLAlchemy.
Generation 3: Along came data-classes
Basically, a data-class is a simple abstraction for classes that store data. It performs automatic type checking, serialization, and deserialization, and implements dunder methods such as __eq__ and a standard __repr__. This is basically what we wanted from Generation 1.
There have been many libraries that implement the data-class idea, such as attrs, marshmallow, and schematics. It is worth noting that Python 3.7 introduced an official data class (the dataclasses module) to the standard library as well.
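The post's own listings are missing here; since the later sections use Pydantic, a small Pydantic model serves as a sketch of the benefits listed below (the `User` fields are illustrative):

```python
from pydantic import BaseModel, Field

class User(BaseModel):
    name: str = Field(max_length=45)    # constraint: string length
    age: int                            # automatic type checking
    email: str = "unknown@example.com"  # default value

# The string "36" is validated and converted to the int 36
user = User(name="Ada", age="36")
```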
- Automatic type checking
- Allow us to use other data structures like Enums or other models as types
- Ability to set constraints such as string length
- Ability to set default values
- Greatly reduced boilerplate vs Generation 1
- Easily readable
- Integration with code editors’ IntelliSense
- No need to write additional tests
You can leverage other Python built-ins to do even more powerful things:
- These models are not database-aware like those from Generation 2.
Generation 4: Adding a database to Pydantic
Building on our earlier idea of turning an ordinary class into a database-aware object, we combined the benefits of Pydantic’s input validation and serialization with the SQLAlchemy-based database class from Generation 2.
The result is that we now have a functional ORM that works with FastAPI!
We add a class decorator that will use the properties of the model as fields in the database. This means that database models will be 1:1 representations of the database itself.
Using Pydantic, we are able to add additional keyword arguments to the Field function.
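No listing survives here either; a heavily simplified sketch of the idea (`database_model` and the `primary_key` keyword are assumed, illustrative names, and the decorator body below only marks the class rather than building real SQLAlchemy columns):

```python
from typing import Optional
from pydantic import BaseModel, Field

def database_model(table_name):
    """Hypothetical class decorator: the real version would map each
    Pydantic field to a SQLAlchemy column on the named table."""
    def wrap(cls):
        cls.__table_name__ = table_name
        return cls
    return wrap

@database_model("users")
class User(BaseModel):
    # Extra keyword arguments to Field carry database metadata;
    # the database manages id, so it is declared Optional
    id: Optional[int] = Field(None, primary_key=True)
    name: str = Field(max_length=45)
```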
- Fetch on create / update
We want our database to be the ultimate source of authority, so we rely on it to do the computation for certain fields.
This is done by setting fetch_on_create or fetch_on_update, which defer to the database’s generated value when a record is created or updated, respectively.
All that is needed is for your Postgres columns to be set up with defaults such as static values, subroutines, and sequences.
- Primary key fields
Since all tables will have a primary key, each database_model object must have a primary key.
Currently we are not able to support compound keys, meaning your primary key must be a single field. This may be a situation where we can create a special type for it to make it more semantic.
This primary key field is also used by our data accessor method to know how to lookup a record.
- These database models get all the benefits from Generation 2 and 3!
- Lack of foreign key relationships may be misleading to developers. To many developers, this code is their interface to the table schema. Since we use Alembic to manage our database schema and migrations, there is actually no need to declare fields inside our database_model as foreign keys; such declarations are only used by SQLAlchemy to create your table schema. This does create a situation where our code and schema may be out of sync. It is also worth noting that foreign key constraints are still enforced by the database.
- Database-managed fields have to be “optional”. One limitation of Pydantic is that any field the database manages (such as the primary key) must be declared with an Optional type, because Pydantic’s validation runs before we ever fetch from the database. This is syntactically a bit confusing.
These new database model classes have greatly improved the readability of our code and reduced the boilerplate we need to write for new objects. Ultimately, that means we spend more time on our own business logic instead of the low-level plumbing classes usually need.
As with most projects in software engineering, we kept making marginal improvements to the ideas until we got something awesome! I’m excited to see how this idea continues to grow to further help us achieve these same goals.