You're encountering a classic timing and filesystem synchronization problem. It's especially common with non-local filesystems (such as S3 or another networked filesystem), with queues processed by workers on different machines, or when there's a race between the write returning and the file becoming visible.
What's happening?
- When you `store` the file, Laravel writes it to the filesystem. On a local disk, this happens instantly.
- If you're using a remote disk (like S3), `store` may return before the file is actually visible/propagated.
- If your queue worker runs in a separate process, server, or container, the `Storage::disk()->exists()` call in the job could run in a different context or hit S3/network lag before the file is "found".
That's why adding a delay sometimes helps, and why the job can still fail even though your controller confirmed the file exists.
Solutions
1. Use a Synchronous Queue (for Testing)
For local testing, set your queue to `sync` (in `config/queue.php`, set `'default' => 'sync'`, or set `QUEUE_CONNECTION=sync` in `.env`). If everything works with a sync queue, the failure is tied to the worker environment: a propagation, visibility, or configuration difference between the web process and the queue worker.
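A minimal sketch, assuming the stock `config/queue.php` layout, which reads the connection name from the environment:

// config/queue.php (stock layout)
'default' => env('QUEUE_CONNECTION', 'sync'),

// Equivalently, set it in .env:
// QUEUE_CONNECTION=sync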
2. Dispatch the Job After Transaction Commit
If you're saving a model inside a database transaction, you may be dispatching the job before the transaction commits, so the worker can't yet see the related records. Ensure the job only fires after everything is finalized.
In Laravel, chain `afterCommit()` onto the dispatch:
ProcessRfpDocument::dispatch(
    $jobId,
    $validated,
    $filePath,
    auth()->id(),
    $originalFileName,
    $fileMimeType,
    $fileSize
)->afterCommit();
This ensures the job is only dispatched after the surrounding DB transaction commits. Note that `afterCommit()` covers database transactions only; it does not wait on filesystem writes.
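For context, here's where that matters; a sketch in which `Rfp` is a hypothetical model standing in for whatever you persist:

use Illuminate\Support\Facades\DB;

DB::transaction(function () use ($validated, $filePath, $jobId, $originalFileName, $fileMimeType, $fileSize) {
    Rfp::create($validated); // hypothetical model write inside the transaction

    // Without afterCommit(), a fast worker could pick this job up
    // before the enclosing transaction commits.
    ProcessRfpDocument::dispatch(
        $jobId,
        $validated,
        $filePath,
        auth()->id(),
        $originalFileName,
        $fileMimeType,
        $fileSize
    )->afterCommit();
});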
3. Don't Serialize the UploadedFile Instance
`UploadedFile` instances don't survive queue serialization. Remove them from any array you pass to the job and keep only primitives: the stored path (or the raw contents).
You're already doing this:
unset($validated['scope_of_work_file']);
Good!
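For completeness, a sketch of the controller side using your dispatch's variable names (the `rfp-documents` directory is illustrative):

// Persist the upload first, then pass only primitives to the job.
$file = $request->file('scope_of_work_file');

$filePath = $file->store('rfp-documents', 'public'); // relative path on the disk
$originalFileName = $file->getClientOriginalName();
$fileMimeType = $file->getMimeType();
$fileSize = $file->getSize();

unset($validated['scope_of_work_file']); // never queue the UploadedFile itself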
4. If on S3, Use temporaryUrl or Presigned URLs
If your jobs may run in other environments or on separate workers (e.g. EC2), pass a temporary (presigned) URL instead of `$filePath`:
// Note: temporaryUrl() requires a driver that supports it (s3 does);
// a disk named 'public' must be S3-backed for this to work.
$tmpUrl = Storage::disk('public')->temporaryUrl(
    $filePath,
    now()->addMinutes(10)
);
The job can then fetch the file using Guzzle or any HTTP client, avoiding race conditions in S3 propagation.
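For example, with Laravel's built-in HTTP client (assuming the URL was passed to the job as `$this->tmpUrl`):

use Illuminate\Support\Facades\Http;

// throw() turns 4xx/5xx responses into exceptions so the job fails loudly.
$contents = Http::get($this->tmpUrl)->throw()->body();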
5. Improve Retry Logic / Use Backoff
If you must poll in the job, back off exponentially so each retry waits longer before you give up:
$maxRetries = 8;
$baseWaitMs = 200;
$found = false;

for ($i = 0; $i < $maxRetries; $i++) {
    if (Storage::disk('public')->exists($this->filePath)) {
        $found = true;
        break;
    }

    // Exponential backoff, capped so a single wait never exceeds 5s
    // (uncapped, 20 doublings of 200ms would mean waits of hours).
    $waitMs = min($baseWaitMs * (2 ** $i), 5000);
    usleep($waitMs * 1000);
}

if (!$found) {
    throw new \RuntimeException("File not found: {$this->filePath}");
}
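An alternative that doesn't block the worker: release the job back onto the queue with a delay and let Laravel retry it. A sketch for a standard queued job (`release()` and `attempts()` come from the `InteractsWithQueue` trait; `$tries` is the usual max-attempts property):

// Inside your ProcessRfpDocument job class:
public $tries = 5;

public function handle(): void
{
    if (! Storage::disk('public')->exists($this->filePath)) {
        // Re-queue with a growing delay: 10s after the 1st attempt,
        // 20s after the 2nd, and so on. After $tries attempts the
        // job fails for good.
        $this->release(10 * $this->attempts());
        return;
    }

    // ... process the file ...
}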
6. Double-check Filesystem Configuration
- Make sure both your HTTP process (handling the upload) and your queue worker use the same filesystem `disk` configuration: same storage backend, same credentials/environment variables (see the quick check below).
- If on a local disk: make sure both processes (web and worker) share the same physical filesystem.
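A quick way to verify, run from both the web host and the worker host via `php artisan tinker` (the file path below is a placeholder):

// Both hosts should print identical disk configuration...
dump(config('filesystems.disks.public'));

// ...and agree on whether a known file exists.
dump(Storage::disk('public')->exists('uploads/known-file.pdf'));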
TL;DR Checklist
- Use `afterCommit()` on job dispatch when possible.
- Use more intelligent waits/backoff if absolutely necessary, but don't rely on them.
- For S3 or networked filesystems, consider using presigned/temporary URLs.
- Ensure all services/processes use the same storage and environment.
- For local development, set the queue to `sync`.
If you post your storage configuration and queue setup, I can give more tailored guidance. But most likely, adding `->afterCommit()` to your job dispatch will do the trick!
Let me know which storage and queue driver you’re using if issues persist.