
creamCheeseCoder:

Good practice for deciding between writing a migration script or running a one-time SQL script on production?

So, I have always been curious which of the two is the better, recommended approach and good design practice.

Let's say we need to update a massive table, and my migration looks like this:

    public function up()
    {
        \DB::transaction(function () {
            $moviePeople = Feature::updateOrCreate(['prefix' => 'moviePeople'], ['name' => 'Movie Watchers']);
            $rating = Feature::updateOrCreate(['prefix' => 'rating'], ['name' => 'Rating']);

            $personalityTypeIds = PersonalityType::whereIn('name', ['crazy', 'smart'])->pluck('id');
            Company::whereIn('company_role_id', $personalityTypeIds)
                ->chunk(500, function ($companies) use ($moviePeople, $rating) {
                    $companies->each(function ($company) use ($moviePeople, $rating) {
                        if (!$company->features->contains($moviePeople)) {
                            $company->features()->attach($moviePeople, [
                                'billed_as' => BillingPolicy::TypeSpecific->value,
                            ]);
                        }
                        if (!$company->features->contains($rating)) {
                            $company->features()->attach($rating, [
                                'billed_as' => BillingPolicy::Unlimited->value,
                            ]);
                        }
                    });
                });
        });
    }

Is it a good idea to let a migration like this sit in the migrations folder forever? Or should something of this nature be done with a one-time SQL script on the production database?

LaryAI:

When deciding between writing a migration script or running a one-time SQL script on a production database, it's important to consider the purpose and the context of the changes you are making.

Migrations are typically used for schema changes, such as creating or altering tables, adding indexes, etc. They are part of your version control system and ensure that your database schema is consistent across all environments. Migrations also allow for easy rollback in case something goes wrong.

Migrations can also be used for data transformations, but this should be done with caution. If the transformation is part of a deployment that needs to be replicated across multiple environments (development, staging, production), and it's important for the codebase to reflect the change, then a migration may be appropriate.

On the other hand, if the data update is a one-time correction or update that doesn't need to be replicated in other environments or is specific to the state of the production database, a one-time SQL script might be more suitable. This is especially true for very large datasets where the migration could take a long time to run and potentially cause downtime.

For the example you provided, it seems like you are performing data updates rather than schema changes. Here are some considerations:

  1. If the data update needs to be part of the deployment process and tracked with your application's version control, use a migration.
  2. If the update is a one-time fix or adjustment and doesn't need to be replicated across all environments, consider a one-time SQL script.
  3. If the operation is potentially long-running and could impact the availability of your application, it might be better to perform the operation during a maintenance window with a one-time script.
  4. Always ensure you have a backup before performing operations on production data.
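If you take the one-time script route, the chunked attach logic could be expressed as a single set-based statement instead of per-row PHP. This is a sketch only: the pivot table name `company_feature`, its columns, and the stored string `'type_specific'` for `BillingPolicy::TypeSpecific` are assumptions about your schema, not taken from the original post.

```sql
-- Attach the 'moviePeople' feature to every matching company that
-- doesn't already have it. Repeat with the 'rating' feature as needed.
-- Assumed pivot table: company_feature(company_id, feature_id, billed_as).
INSERT INTO company_feature (company_id, feature_id, billed_as)
SELECT c.id, f.id, 'type_specific'
FROM companies c
JOIN features f ON f.prefix = 'moviePeople'
WHERE c.company_role_id IN (
    SELECT id FROM personality_types WHERE name IN ('crazy', 'smart')
)
AND NOT EXISTS (
    SELECT 1
    FROM company_feature cf
    WHERE cf.company_id = c.id
      AND cf.feature_id = f.id
);
```

A set-based statement like this avoids loading models into memory and typically finishes in one pass, but always dry-run it with a `SELECT` and take a backup first.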

Given these considerations, here's a possible solution:

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class UpdateFeaturesForCompanies extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        // This is a placeholder for schema changes, if any.
        // Schema::table('companies', function (Blueprint $table) {
        //     // Perform schema updates...
        // });

        // Perform data updates in a separate command or job if it's a long-running task.
        // You can dispatch a job here that will handle the data updates.
        // This way, the migration only handles schema changes, and the job handles the data.
        // Dispatch job to update company features...
        UpdateCompanyFeaturesJob::dispatch();
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        // If you can write a reverse operation for your data updates, do it here.
        // Otherwise, you might need to manually reverse the changes or restore from a backup.
    }
}

In this solution, you would create a job (UpdateCompanyFeaturesJob) that handles the data updates. This job can be dispatched from the migration, but the actual data processing is done outside of the migration. This approach keeps your migrations clean and focused on schema changes, while still allowing you to track and version your data updates as part of your codebase.
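For completeness, here is what such a job might look like. This is a hedged sketch, not the poster's actual code: the class name comes from the answer above, while the `App\Models`/`App\Enums` namespaces and the chunk size of 500 are illustrative assumptions.

```php
<?php

namespace App\Jobs;

use App\Enums\BillingPolicy;
use App\Models\Company;
use App\Models\Feature;
use App\Models\PersonalityType;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

// Hypothetical job sketch: performs the data update outside the migration,
// so a deploy is not blocked by a long-running table scan.
class UpdateCompanyFeaturesJob implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function handle(): void
    {
        $moviePeople = Feature::updateOrCreate(['prefix' => 'moviePeople'], ['name' => 'Movie Watchers']);
        $rating = Feature::updateOrCreate(['prefix' => 'rating'], ['name' => 'Rating']);

        $personalityTypeIds = PersonalityType::whereIn('name', ['crazy', 'smart'])->pluck('id');

        // chunkById keeps the pagination cursor stable while rows change.
        Company::whereIn('company_role_id', $personalityTypeIds)
            ->chunkById(500, function ($companies) use ($moviePeople, $rating) {
                foreach ($companies as $company) {
                    // syncWithoutDetaching only attaches missing rows, so the
                    // job is idempotent and safe to re-run after a failure.
                    $company->features()->syncWithoutDetaching([
                        $moviePeople->id => ['billed_as' => BillingPolicy::TypeSpecific->value],
                        $rating->id => ['billed_as' => BillingPolicy::Unlimited->value],
                    ]);
                }
            });
    }
}
```

Because the job is idempotent, you could also run it from a one-off Artisan command instead of dispatching it from a migration, which sidesteps the "migration sitting in the folder forever" concern entirely.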
