The reason you're getting untraceable errors is that running everything through a single master job creates a massive single point of failure. If one sub-task throws a silent exception, exceeds a memory limit, or times out, it kills the whole execution cycle and muddies your stack trace.
If you need to roll your own scheduler for specific sysadmin requirements, you need to change the architecture. Instead of one master job triggering an Artisan command, query your DB in routes/console.php (or app/Console/Kernel.php on Laravel 10 and earlier) and schedule each task as its own isolated queued job dynamically:
use App\Jobs\ExecuteSysAdminTask;
use App\Models\ScheduledTask;
use Illuminate\Support\Facades\Schedule;

$tasks = ScheduledTask::where('is_active', true)->get();

foreach ($tasks as $task) {
    Schedule::job(new ExecuteSysAdminTask($task))
        ->cron($task->cron_expression)
        ->onSuccess(function () use ($task) {
            // Log success to your custom DB table
            $task->logs()->create(['status' => 'success']);
        })
        ->onFailure(function () use ($task) {
            // Log failure to DB
            $task->logs()->create(['status' => 'failed']);
        });
}
Run these on a proper queue backend with a worker (e.g. Redis plus Horizon). Because each DB task is pushed to its own isolated job on the queue, failures are completely contained: Task A failing won't stop Task B, and the queue worker gives you an exact stack trace, memory usage, and per-job retry capabilities for each individual task.
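For completeness, here's a minimal sketch of what the ExecuteSysAdminTask job class could look like. This assumes your ScheduledTask model exposes some execution method and a logs() relation; the run() method below is hypothetical, so swap in whatever your tasks actually do:

```php
<?php

namespace App\Jobs;

use App\Models\ScheduledTask;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Throwable;

class ExecuteSysAdminTask implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;      // retry each task up to 3 times before failing
    public int $timeout = 120;  // kill this job (not the whole cycle) after 2 minutes

    public function __construct(public ScheduledTask $task)
    {
        // SerializesModels stores only the model key and re-fetches
        // a fresh ScheduledTask when the worker picks the job up.
    }

    public function handle(): void
    {
        // run() is a placeholder for your actual sysadmin work.
        $this->task->run();
    }

    public function failed(Throwable $e): void
    {
        // Runs after all retries are exhausted; the full stack trace
        // is in $e and also recorded in the failed_jobs table.
        $this->task->logs()->create([
            'status' => 'failed',
            'error'  => $e->getMessage(),
        ]);
    }
}
```

The $tries / $timeout properties and the failed() hook are what give you the per-task isolation and traceability the master-job approach lacks: one runaway task hits its own timeout and lands in failed_jobs with its own trace, while every other job keeps running.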