
Phoebus:

Laravel task scheduler stops after a fatal error when withoutOverlapping is set

For example: withoutOverlapping is set and the task runs every minute. While developing I made a mistake and the task threw a fatal error; since then the task doesn't run at all...

I think the problem is that the task is locked because withoutOverlapping is set, and because the fatal error occurred, the task was never unlocked.

I'm looking for some way to unlock that task... any ideas?

plabbett:

The quick/easy fix is to rename the schedule, e.g.:

->name("MyNewJob")->withoutOverlapping();

I believe the real fix is to go to storage/framework and find a file that looks like this:

schedule-af133da702018a3ca3dd79d08a748

That is the "mutex path" which is checked against when running an event. Failed schedules don't clean up that file, which is why they won't re-initiate.

https://github.com/illuminate/console/blob/master/Scheduling/Event.php

/**
 * Get the mutex path for the scheduled command.
 *
 * @return string
 */
protected function mutexPath()
{
    return storage_path('framework/schedule-'.md5($this->expression.$this->command));
}

/**
 * Do not allow the event to overlap each other.
 *
 * @return $this
 */
public function withoutOverlapping()
{
    $this->withoutOverlapping = true;

    return $this->skip(function () {
        return file_exists($this->mutexPath());
    });
}
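If you just need to unblock a stuck task once, you can remove the mutex file by hand. A sketch, using a throwaway directory to stand in for the Laravel root and assuming the default storage path shown above:

```shell
# Simulate and clear a stale scheduler mutex left behind by a crashed task.
# APP_DIR stands in for your Laravel root; use your real app path instead.
APP_DIR="$(mktemp -d)"
mkdir -p "$APP_DIR/storage/framework"
touch "$APP_DIR/storage/framework/schedule-af133da702018a3ca3dd79d08a748"

# Deleting the mutex file lets the withoutOverlapping task run again.
rm -f "$APP_DIR"/storage/framework/schedule-*
```

In a real app this amounts to `rm storage/framework/schedule-*` from the project root, but note it clears the mutex for every scheduled task, not just the broken one.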
devappau:

+1 Same issue. If the queue stops, then the file never gets cleaned up. The scheduled jobs never re-run until the file is removed.

devappau:

Changed my Kernel to this. I haven't tested it cross-platform, but it works for my situation, where the queue must always be running. PHP also needs the exec function enabled, and some hosts disable it for security. It checks whether the queue is running; if not, it looks for leftover mutex files and deletes them.

protected function schedule(Schedule $schedule)
{
    // Look for a running queue worker.
    exec("ps aux | grep -i 'artisan queue:work --daemon' | grep -v 'grep'", $pids);

    if (empty($pids)) {
        // Worker is not running: remove any leftover mutex files so
        // the scheduled command below is allowed to start again.
        $pattern = storage_path() . DIRECTORY_SEPARATOR . 'framework' . DIRECTORY_SEPARATOR . 'schedule-*';

        foreach (glob($pattern) as $file) {
            unlink($file);
        }
    }

    $schedule->command('queue:work --daemon --sleep=5 --tries=1 --memory=4096 --queue=default')
        ->everyMinute()
        ->withoutOverlapping();
}
gMagicScott:

For running the queue, using a tool like Supervisor (this is what Forge uses), god, or monit is a much better solution. It will deal with failures, system restarts, and other issues more gracefully.
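A minimal Supervisor program entry for a Laravel worker might look like this (the paths, user, and worker options are illustrative, not from the thread):

```ini
[program:laravel-worker]
; Restart the artisan worker automatically if it dies or the box reboots.
command=php /var/www/app/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/app/storage/logs/worker.log
```

With this in place there is no scheduled `queue:work` entry at all, so the mutex problem never arises for the worker itself.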

devappau:

Yes, I agree, and that's one approach. Ideally Laravel should fix the error and clean up after itself. Storing the lock on the file system is a bad idea, because if the queue fails, Laravel still deems the queue to be running.

For my app, I would rather have the logic in the application, since it's traceable and going to be run on many separate instances of the application (in a shared hosting environment). One cron job should be all we need according to the documentation.
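For reference, the single cron entry the Laravel docs describe looks like this (the application path is illustrative):

```shell
# Runs the Laravel scheduler every minute; the scheduler itself decides
# which registered tasks are due. Replace /var/www/app with your app root.
* * * * * php /var/www/app/artisan schedule:run >> /dev/null 2>&1
```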

Chill:

Or an even easier-to-implement and very robust solution is to run the cron script with "solo". It binds the execution of the script to a port, so there's no need for mutexes or any configuration on Laravel's side at all.

I've been using this on every "non-overlapping" cron job I had for at least 6 years without a single problem.

Its beauty lies in its simplicity: the whole script fits in a post, so I'm including it here.

#!/usr/bin/perl -s
#
# solo v1.7
# Prevents multiple cron instances from running simultaneously.
#
# Copyright 2007-2016 Timothy Kay
# http://timkay.com/solo/
#
# It is free software; you can redistribute it and/or modify it under the terms of either:
#
# a) the GNU General Public License as published by the Free Software Foundation;
#    either version 1 (http://dev.perl.org/licenses/gpl1.html), or (at your option)
#    any later version (http://www.fsf.org/licenses/licenses.html#GNUGPL), or
#
# b) the "Artistic License" (http://dev.perl.org/licenses/artistic.html), or
#
# c) the MIT License (http://opensource.org/licenses/MIT)
#

use Socket;

alarm $timeout                              if $timeout;

$port =~ /^\d+$/ or $noport                     or die "Usage: $0 -port=PORT COMMAND\n";

if ($port)
{
    # To work with OpenBSD: change to
    # $addr = pack(CnC, 127, 0, 1);
    # but make sure to use different ports across different users.
    # (Thanks to  www.gotati.com .)
    $addr = pack(CnC, 127, $<, 1);
    print "solo: bind ", join(".", unpack(C4, $addr)), ":$port\n"   if $verbose;

    $^F = 10;           # unset close-on-exec

    socket(SOLO, PF_INET, SOCK_STREAM, getprotobyname('tcp'))       or die "socket: $!";
    bind(SOLO, sockaddr_in($port, $addr))               or $silent? exit: die "solo($port): $!\n";
}

sleep $sleep if $sleep;

exec @ARGV;

Use it like this in your crontab:

* * * * * solo -port=3801 /usr/local/bin/awesome-script.sh blah blah

For more info: https://blog.josephscott.org/2011/09/26/solo/
