Legacy Lobotomy — Overview of Methods to Seed Data in a Django Project

The photo was generated by the author using ChatGPT-4.

This is the 12th tutorial in the series about refactoring a legacy Django project, where we will take a look at methods to populate the test database with data. Even though we have automated tests which populate the database with the data they need to run, a well-populated database plays an important role in software development:

- By using data that closely resembles what will be encountered in production, we can better simulate real user interactions and workflows.
- Populating the database with data of similar size and complexity as production allows for performance testing, which helps in identifying potential bottlenecks.
- Having the database populated with realistic data is crucial for demonstration purposes. Stakeholders prefer to see that John Smith likes playing football and basketball rather than that “Test Test” likes playing “SpORt 134” and “fhjkdsf”.

To learn more about the project used in this series, or to browse the other tutorials, see the introductory article published earlier.

Legacy Lobotomy — Confident Refactoring of a Django Project

Overview of approaches to populate a Django database

There are multiple ways to populate a database with initial data in a Django app, each with its pros and cons. Let’s take a look at some of them.

Django admin panel

Using the Django admin panel to populate a database with initial data is a simple and straightforward method, ideal for quick demonstrations or when you don’t need many entries in the database. This approach allows you to use the convenient user interface provided by Django for easy data management. A detailed description of the Django admin panel can be found in the official Django documentation.
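As a minimal sketch, making a model manageable through the admin UI only requires registering it in the app’s admin.py (the Interest model and the profiles app here are hypothetical names for illustration):

```python
# profiles/admin.py — app and model names are illustrative
from django.contrib import admin

from .models import Interest


@admin.register(Interest)
class InterestAdmin(admin.ModelAdmin):
    # Columns shown in the change list make manually entered data easier to review
    list_display = ("name", "category")
    search_fields = ("name",)
```

Once registered, entries can be created, edited, and deleted directly in the browser at /admin/.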


Pros:

- This method doesn’t require any technical knowledge; everything can be done in your browser.
- If you need to create just a few entries for quick demonstrations, it’s the ideal way to go.
- It provides a convenient user interface for managing data.


Cons:

- Requires manual data management. Although automation with tools like Selenium is possible, it is not very convenient.
- If you need to prepare a large dataset, this approach will be time-consuming and is not recommended.
- It’s not a scalable approach: if you need to populate another database (local or development) with data, you will need to do it manually again.


Fixtures

Fixtures are a good way to load predefined data into the database. They can be created in various formats like JSON, XML, or YAML, either manually or by dumping an existing database. More information about fixtures can be found in the official Django documentation.
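For illustration, a JSON fixture for a hypothetical profiles.Interest model might look like this (the app, model, and field names are made up):

```json
[
  {
    "model": "profiles.interest",
    "pk": 1,
    "fields": {"name": "Football"}
  },
  {
    "model": "profiles.interest",
    "pk": 2,
    "fields": {"name": "Basketball"}
  }
]
```

Saved as profiles/fixtures/interests.json, it can be loaded with `python manage.py loaddata interests`, and a similar file can be produced from an existing database with `python manage.py dumpdata profiles.Interest --indent 2`.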


Pros:

- Fixtures are straightforward to create and load using Django’s built-in dumpdata and loaddata commands.
- They can be easily shared and used across different environments (development, staging, production), ensuring consistent data setup.
- Fixtures use standardized formats (JSON, XML, YAML), which can be easily generated and modified.


Cons:

- Fixtures are static snapshots of data. This can be a limitation when dealing with dynamic data or when the data needs to be frequently updated or customized.
- They don’t provide mechanisms for conditional data loading or complex data generation logic, which makes them less flexible than programmatic data generation methods.
- If the schema changes, fixtures can become outdated and may not load correctly without modification. Keeping fixtures in sync with the schema requires additional maintenance.
- Fixtures don’t support data transformation or processing during loading. Any required transformations must be done before creating the fixture.

Django data migrations

Another approach to populating the database with data is using data migrations. They are best suited for scenarios where you need version-controlled, automated data setup or transformation that is tightly coupled with your schema changes. They are ideal for initial data setup and for scenarios where tracking and auditing data changes is important, such as when a schema change requires migrating existing data to the new schema. Here are the most common scenarios where such migrations can be a good fit:

- Populating permissions and initial roles. When you first set up your application, you might want to create default roles and permissions.
- Adding default system settings. When your application requires certain system settings to be present from the start, data migrations can ensure these settings are in place.
- Transforming data during schema migrations. When a schema change requires transforming existing data, such as splitting a single name field into separate first and last name fields.
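A data migration seeding default roles could be sketched as follows. The scaffold would be generated with `python manage.py makemigrations --empty <app>`; the group names and the dependency are illustrative assumptions:

```python
# A hypothetical empty migration filled in by hand; group names are illustrative.
from django.db import migrations

DEFAULT_GROUPS = ("Editors", "Moderators")


def create_default_groups(apps, schema_editor):
    # Use the historical model via apps.get_model() instead of a direct import,
    # so the migration stays valid even after the model changes later.
    Group = apps.get_model("auth", "Group")
    for name in DEFAULT_GROUPS:
        Group.objects.get_or_create(name=name)


def remove_default_groups(apps, schema_editor):
    Group = apps.get_model("auth", "Group")
    Group.objects.filter(name__in=DEFAULT_GROUPS).delete()


class Migration(migrations.Migration):
    dependencies = [
        ("auth", "0001_initial"),
    ]

    operations = [
        # Providing a reverse function keeps the migration safely reversible.
        migrations.RunPython(create_default_groups, remove_default_groups),
    ]
```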


Pros:

- Data migrations are included in your source control, ensuring that data changes are tracked along with schema changes. This makes it easy to keep track of what data has been added or modified as part of your project’s history.
- Since data migrations are executed as part of the regular migration process, you can ensure that the data is applied consistently across different environments (development, staging, production).
- Data migrations run automatically when you apply migrations, which simplifies the deployment process by not requiring additional manual steps to populate data.
- You can easily reproduce the same data across multiple environments, ensuring that everyone working on the project has the same initial dataset.


Cons:

- Data migrations can slow down the migration process, especially if you are inserting a large amount of data.
- If the data required for different environments (development, staging, production) varies significantly, maintaining separate data migrations for each environment can become challenging and error-prone.

Custom scripts

Writing custom scripts to populate your database provides you with a high level of customization and flexibility. However, it also comes with its own set of challenges that should be carefully considered.
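As an example, a standalone script can bypass Django entirely and seed a database with direct SQL. The sketch below uses Python’s built-in sqlite3 module with an in-memory database; the table, columns, and sample values are made up for illustration:

```python
import random
import sqlite3

# Sample values to draw from; a real script might read these from a CSV or use Faker.
FIRST_NAMES = ["John", "Maria", "Chen", "Amara"]
SPORTS = ["football", "basketball", "tennis"]


def seed_users(conn: sqlite3.Connection, count: int) -> None:
    """Create the schema and insert `count` randomly generated users."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, sport TEXT)"
    )
    rows = [(random.choice(FIRST_NAMES), random.choice(SPORTS)) for _ in range(count)]
    conn.executemany("INSERT INTO users (name, sport) VALUES (?, ?)", rows)
    conn.commit()


if __name__ == "__main__":
    connection = sqlite3.connect(":memory:")
    seed_users(connection, 50)
    total = connection.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    print(f"Seeded {total} users")
```

The same structure works against any database the script’s driver supports, which is exactly the flexibility (and the maintenance burden) discussed below.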


Pros:

- You can use any programming language, any ORM, or direct SQL queries to populate the database, which makes this approach easy to adopt since developers can choose the tools they are most experienced with.
- Once written, these scripts can be reused or modified for different environments or scenarios, saving time in the long run.
- Scripts can be maintained in version control systems, either as part of the current project or in a separate repository.


Cons:

- Developing and maintaining custom scripts can be time-consuming, especially for large or complex datasets, because they cannot reuse the mechanisms we already use to generate data for tests.
- Scripts need to be updated every time the database schema changes, which adds maintenance costs.
- If you don’t need to generate a lot of data, this approach may introduce unnecessary complexity. In such a case, it’s better to use either the Django admin panel or fixtures.

Django management commands

Using Django management commands in conjunction with factory_boy is a powerful and efficient way to populate your database with some realistic data. Like any other approach, this one also has its advantages and disadvantages.
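To sketch the idea (the command name, app path, and factory are hypothetical), a factory already built for the test suite can be reused inside a management command:

```python
# profiles/management/commands/seed_db.py — paths and names are illustrative
from django.core.management.base import BaseCommand

from profiles.tests.factories import UserProfileFactory  # factory already used by tests


class Command(BaseCommand):
    help = "Populate the database with realistic sample data"

    def add_arguments(self, parser):
        parser.add_argument("--count", type=int, default=100)

    def handle(self, *args, **options):
        # create_batch builds and saves `count` model instances in one call
        UserProfileFactory.create_batch(options["count"])
        self.stdout.write(self.style.SUCCESS(f"Created {options['count']} profiles"))
```

It would then be run with something like `python manage.py seed_db --count 500`.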


Pros:

- Management commands are a native feature of Django, which makes them easy to integrate into the project without requiring external scripts or tools.
- Like custom scripts, management commands are highly reusable.
- Using factory_boy for generating database entries allows us to create realistic data, and its native support for Django models makes it easy to integrate with a Django project.
- This approach gives us the opportunity to reuse the factories we built for our tests, which reduces development time.
- We can create automated tests for our commands and run them alongside the rest of the test suite, which allows us to detect problems with the commands early.


Cons:

- Developers unfamiliar with custom Django management commands or factory_boy need time to study these topics before they can start creating commands for populating the database.
- Additional dependencies like factory_boy need to be managed and kept up to date.
- Like custom scripts, this approach is best suited for generating large datasets; if you need just a few records, the Django admin panel or fixtures are a better way to go.
- As with any code, these commands and factories need to be maintained and updated every time the models and database schema change.


In conclusion, choosing the right approach to populate a database in a Django project depends on your specific needs. Each method offers unique advantages and potential drawbacks. For small-scale and quick setups, the Django admin panel or fixtures might be sufficient. For more complex requirements, custom scripts or management commands provide greater flexibility and control. Data migrations are very helpful in scenarios where the same data (configurations, permissions, etc.) must be present in all environments. Analyze your project’s requirements carefully to select the most suitable method for initializing your database efficiently. In the next tutorial, I will demonstrate how to create custom management commands for populating the database and explain why this approach is the best fit for this particular project.

Legacy Lobotomy — Overview of Methods to Seed Data in a Django Project was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.
