imranbru's avatar

imranbru wrote a reply+100 XP

1d ago

The organization owner is the one who needs to generate the token.

imranbru's avatar

imranbru wrote a reply+100 XP

2d ago

The single-request approach is a ticking time bomb for max_execution_time and memory exhaustion. Once users start dropping gigabytes of files, your server will choke and browser connections will time out. You can use chunked uploads or asynchronous processing instead.
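For the chunked route, a minimal server-side sketch might look like this (all route and field names here are hypothetical; a client-side library such as Resumable.js or Uppy would split the file and send the pieces):

```php
// Hypothetical chunk endpoint: the client sends each piece with an index;
// the server appends it to a partial file and assembles it at the end.
public function uploadChunk(Request $request)
{
    $request->validate([
        'chunk'      => 'required|file',
        'index'      => 'required|integer',
        'total'      => 'required|integer',
        'identifier' => 'required|string', // unique per upload session
    ]);

    $partial = storage_path("app/chunks/{$request->identifier}.part");

    // Append this chunk's bytes to the partial file
    file_put_contents($partial, $request->file('chunk')->get(), FILE_APPEND);

    // Last chunk received: move the assembled file into permanent storage
    if ((int) $request->index === (int) $request->total - 1) {
        Storage::disk('private')->put(
            "documents/{$request->identifier}",
            file_get_contents($partial)
        );
        unlink($partial);
    }

    return response()->json(['received' => (int) $request->index]);
}
```

Each request stays small, so max_execution_time and memory limits never come into play.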

imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

@martinbean Ouch! Not a bot, I just like formatting my answers clearly with markdown and code snippets. But I actually agree with your primary point. @adamnet, Martin is right that handling this via roles/permissions in a single project is usually the standard Laravel approach, unless there is a strict hardware/business reason forcing them to live on completely different servers.

imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

In Project 1 (The Storage Owner)

Create a protected controller that handles the file logic. Use a custom middleware or a simple API token to ensure only Project 2 can hit these endpoints.

// routes/api.php
Route::prefix('internal-docs')->group(function () {
    Route::get('/', [DocumentController::class, 'index']);      // List files
    Route::get('/{name}', [DocumentController::class, 'show']); // Stream/Download
    Route::delete('/{name}', [DocumentController::class, 'destroy']); // Delete
});

// DocumentController.php
public function show($name) {
    if (!Storage::disk('private')->exists("documents/{$name}")) abort(404);
    
    return Storage::disk('private')->response("documents/{$name}");
}

public function destroy($name) {
    Storage::disk('private')->delete("documents/{$name}");
    return response()->json(['message' => 'Deleted']);
}
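Since the reply above mentions "a custom middleware or a simple API token", here is one way that middleware could look (a sketch; the class name and config key are made up for illustration):

```php
// Hypothetical middleware for Project 1 that rejects any request
// not carrying the shared bearer token.
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class VerifyInternalToken
{
    public function handle(Request $request, Closure $next)
    {
        // Compare against a token kept in config/services.php (from .env);
        // hash_equals() avoids timing attacks on the comparison.
        if (! hash_equals(
            (string) config('services.internal_docs.token'),
            (string) $request->bearerToken()
        )) {
            abort(401, 'Invalid internal token');
        }

        return $next($request);
    }
}
```

Then attach it to the group: Route::prefix('internal-docs')->middleware(VerifyInternalToken::class)->group(...).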

Project 2 (The Admin Project)

You don't need the physical files. You just need to "proxy" the requests. When an admin wants to see a PDF, Project 2 fetches it from Project 1 and serves it to the admin's browser.

// Project 2 Controller
public function proxyView($fileName) {
    $response = Http::withToken('your-secret-token')
        ->get("https://project1.test/api/internal-docs/{$fileName}");

    return response($response->body(), 200, [
        'Content-Type' => 'application/pdf',
        'Content-Disposition' => 'inline; filename="'.$fileName.'"'
    ]);
}
imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

I try to keep a physical notepad right under my keyboard, and I try to avoid nuking the feeds.

imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

Hey Vincent,

You're missing a return statement in your controller. Inertia requires a redirect after a mutation (POST/PUT/PATCH/DELETE) so it can fetch the fresh page props under the hood.

public function update(Request $request, Category $category)
{
    $category->fill($request->all());
    $category->save();

    return back(); // <-- You need this
}
imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

99% of the time I see this break, it's because you are testing locally over HTTP.

The __Host- prefix tells the browser to strictly enforce HTTPS. If you're using php artisan serve or an un-secured Valet/Herd site, the browser will silently drop the cookie. Inertia then fails to authenticate because the session cookie literally doesn't exist in your browser, causing 419 or 401 errors.

'cookie' => env('APP_ENV') === 'production' 
    ? '__Host-' . Str::slug(env('APP_NAME', 'laravel'), '_') . '_session'
    : Str::slug(env('APP_NAME', 'laravel'), '_') . '_session', 

Also, triple-check your .env file. If you have SESSION_DOMAIN=localhost (or anything else), remove it entirely. The __Host- spec requires the domain attribute to be completely omitted, not just null.

imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

Go to Settings > Languages & Frameworks > Style Sheets > Tailwind CSS. Ensure your Node interpreter is set and the path to your tailwind.config.js is correct.

imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

If both projects are on the same server, the quickest way is a symbolic link.

ln -s /path/to/project-1/storage/app/private/documents /path/to/project-2/storage/app/private/project1_docs

However, if you're looking for the "correct" architectural way or if these projects might move to different servers, you have two main options:

The first option is shared cloud storage (usually the better one):

Move the files to an S3 bucket or DigitalOcean Space. Both projects can then use the s3 driver in config/filesystems.php to point to the same bucket. It handles permissions and scaling without you worrying about physical paths.
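A sketch of what that shared disk could look like in both projects' config/filesystems.php (the disk name shared_docs is just an example; each project reads the same bucket credentials from its own .env):

```php
// config/filesystems.php -- declare the same disk in both projects
'disks' => [

    'shared_docs' => [
        'driver'   => 's3',
        'key'      => env('AWS_ACCESS_KEY_ID'),
        'secret'   => env('AWS_SECRET_ACCESS_KEY'),
        'region'   => env('AWS_DEFAULT_REGION'),
        'bucket'   => env('AWS_BUCKET'), // same bucket in both projects
        // For DigitalOcean Spaces, point the endpoint at your Space:
        'endpoint' => env('AWS_ENDPOINT'),
    ],

],
```

After that, Storage::disk('shared_docs')->get(...) works identically from either project, with no physical paths involved.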

Or do it via an API proxy:

If Project 1 must remain the "owner" of the files, create a secured endpoint in Project 1 that streams the file:

// Project 1 - Controller
public function show(Document $doc) {
    // Validate Project 2's request/token
    return Storage::disk('private')->download($doc->path);
}

Then, in Project 2, you'd use Http::withToken(...)->get() to fetch or stream it to the user.

imranbru's avatar

imranbru wrote a reply+100 XP

3d ago

Intercept the click, show the alert, and then trigger the method via $wire if they confirm.

Drop the <script> hook entirely and just do this on your view:

<button 
    x-data 
    @click.prevent="
        Swal.fire({
            title: 'Are you sure?',
            text: 'This action is permanent!',
            icon: 'warning',
            showCancelButton: true,
            confirmButtonText: 'Yes, delete it!'
        }).then((result) => {
            if (result.isConfirmed) {
                $wire.delete({{ $id }})
            }
        })
    "
>
    Delete
</button>

Alternatively, you can use wire:confirm:

<button wire:click="delete({{ $id }})" wire:confirm="Are you sure? This action is permanent!">
    Delete
</button>
imranbru's avatar

imranbru was awarded Best Answer+1000 XP

3d ago

This happens because of how Livewire scopes its DOM tree. When you push the search input into <x-slot name="rightSection">, Blade extracts that code and renders it outside of your Livewire component's root <div> in the app-admin.blade.php layout. Livewire only tracks and hydrates wire:model bindings that live inside its root element (which is what gets rendered in {{ $slot }}). Because the named slot is injected elsewhere in the DOM, Livewire's JavaScript simply can't see it.

Keep the input physically inside your Livewire component's default slot, but "teleport" it visually to the header.

Add an ID to the target container in your app-admin.blade.php layout:

<div id="right-section-container" class="flex justify-center items-center gap-2">
    {{ $rightSection ?? '' }}
</div>

In your Livewire component, remove the search input from the <x-slot> and use <template x-teleport="..."> inside your main default slot:

<div>
    <x-slot name="title">...</x-slot>
    <x-slot name="breadcrumbs">...</x-slot>
    <x-slot name="rightSection">
        <x-buttons.create href="#" x-data="{}" x-on:click="$dispatch('open-modal', { name: 'create-job-title' })" :name="__('app.create_job_title')" />
        <x-dropdowns.per-page />
    </x-slot>

    <template x-teleport="#right-section-container">
        <x-inputs.search wire:model.live="search" />
    </template>

</div>

This keeps the input inside Livewire's component state/scope while rendering it exactly where you want in the layout UI.

imranbru's avatar

imranbru started a new conversation+100 XP

4d ago

Hi everyone,

I'm building a SaaS application where users can connect their own business email accounts. I'm storing their email credentials and dynamically configuring the mailer so they can read and reply to emails directly from the app.

Viewing the incoming emails works perfectly fine, but I'm running into an issue when trying to send an outgoing email (specifically, replying to a message).

Whenever the app attempts to send, it fails and throws this error in my logs:

[2026-04-23 11:31:03] production.ERROR: PHP Request Shutdown: SECURITY PROBLEM: insecure server advertised AUTH=PLAIN (errflg=1) {"userId":1,"exception":"[object] (ErrorException(code: 0): PHP Request Shutdown: SECURITY PROBLEM: insecure server advertised AUTH=PLAIN (errflg=1) at Unknown:0)

Here is the method I'm using to dynamically configure the SMTP settings and send the email:

As you can see, I even tried forcing the stream context to ignore SSL verification just in case it was a strict certificate issue, but the error persists.

Has anyone run into this AUTH=PLAIN security problem when dynamically configuring mailers in Laravel? Any guidance on what I might be missing here would be hugely appreciated!

Thanks in advance!

imranbru's avatar

imranbru wrote a reply+100 XP

4d ago

Use the Http client and set a user agent to make it safer.

imranbru's avatar

imranbru wrote a reply+100 XP

4d ago

Looks good! But I try to avoid installing these by default, as each application has different needs based on its structure. I only install the packages that make the application genuinely faster; unnecessary dependencies just add complexity.

imranbru's avatar

imranbru wrote a reply+100 XP

4d ago

Check your page source to see which data Google can actually read, and build from there; Google can't index data that only exists on the frontend. A few days ago I built a Next.js application where the data displayed fine in the frontend, but the page source failed to show it. I finally solved it by inspecting the page source and diagnosing the technical SEO issue with Screaming Frog SEO Spider: https://prnt.sc/qQFnPfXgOYRp

imranbru's avatar

imranbru wrote a reply+100 XP

4d ago

Choose Hostinger for two years; it will save you money. I used FastVPS for 2-3 years at 25 EUR per month. I bought the same service from Hostinger for two years for only 125 USD, and it's better and faster.

imranbru's avatar

imranbru wrote a reply+100 XP

6d ago

If you want to stick to your raw SQL string approach, you just change the column name in the query:

DB::delete('DELETE FROM users WHERE account_status = ?', [$status]);

Or use the query builder instead:

DB::table('users')->where('account_status', $status)->delete();
imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

You can use the Spatie Roles & Permissions package, or build a custom role system (for example, an Admin model for admins and a User model for regular users). It all depends on what you want.
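If you go the Spatie route, a minimal sketch looks like this (API per the spatie/laravel-permission docs; the 'admin' role name is just an example):

```php
// composer require spatie/laravel-permission
use Spatie\Permission\Models\Role;

// Create the role once (e.g., in a seeder)
Role::create(['name' => 'admin']);

// $user is any App\Models\User using the package's HasRoles trait
$user->assignRole('admin');

if ($user->hasRole('admin')) {
    // show the admin dashboard
}
```

The custom-model approach works too; the package mainly saves you from writing the pivot tables and middleware yourself.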

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

It sounds like the auto-switch lifecycle is hitting a generic toggle() endpoint instead of explicitly enforcing a complete state. Re-watching an already finished video is just flipping the boolean back to false.
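A sketch of the fix under that assumption: replace the generic toggle with an endpoint that only ever moves the state forward (all names here are hypothetical):

```php
// Instead of flipping a boolean, completion is one-way:
// re-watching a finished video can never undo it.
public function markComplete(Request $request, Video $video)
{
    $request->user()->completions()->updateOrCreate(
        ['video_id' => $video->id],
        ['completed' => true]
    );

    return response()->noContent();
}
```

With this shape, the frontend can fire the event on every watch-through without worrying about state.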

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

This is a simple issue. I usually solve this type of error by removing the version-check condition in C:\laragon\www\qna\vendor\composer\platform_check.php, and then it's fine.

imranbru's avatar

imranbru was awarded Best Answer+1000 XP

1w ago

Your logic is actually fine. The problem is almost certainly your database migration.

Google access tokens are frequently longer than 255 characters. If you used $table->string('access_token') in your migration, Laravel defaults to a VARCHAR(255) and is silently truncating your token when saving it to the database.

Here is exactly why your test behaved that way: When you omit the created key, the Google client assumes the token is expired and forces a refresh. It then uses the full, fresh token directly from memory, which is why it works. However, when you include created, the client checks the timestamp, sees it hasn't been an hour yet, and sends the truncated (corrupted) token from the database to Google. Google rejects it, triggering the 401 Unauthorized.

Change your columns to text in your database migration:

$table->text('access_token');
$table->text('refresh_token');

Clear out your oauth_credentials table and re-authenticate to store a fresh, full-length token.

Avoid putting heavy external API calls inside a __construct(). If Google's API goes down, times out, or rate-limits you, your entire class fails to instantiate and it can crash Laravel's service container resolution. Move the token initialization to a dedicated connect() or boot() method that you call when you actually need to interact with the Drive API.
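A rough sketch of that lazy pattern (the class and the loadTokenFromDatabase()/persistToken() helpers are hypothetical; the Google\Client calls are from the official google/apiclient package):

```php
// Nothing talks to Google until connect() is called, so resolving
// this class from the container can never fail on a network error.
class GoogleDriveService
{
    private ?\Google\Client $client = null;

    public function connect(): \Google\Client
    {
        if ($this->client !== null) {
            return $this->client; // already connected this request
        }

        $client = new \Google\Client();
        $client->setAccessToken($this->loadTokenFromDatabase());

        if ($client->isAccessTokenExpired()) {
            // Refresh and persist the new token for next time
            $client->fetchAccessTokenWithRefreshToken();
            $this->persistToken($client->getAccessToken());
        }

        return $this->client = $client;
    }
}
```

Callers then do $service->connect() right before touching the Drive API, and a Google outage surfaces as a catchable exception at the call site instead of a container resolution failure.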

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

Good catch. It usually comes down to how they factor in time and consistency.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

The error is literal: it can't find the compiled bundle. Inertia 2.0+ (and 3.x) changed how it looks for the SSR entry point, usually defaulting to the bootstrap/ssr directory.

First, make sure you've actually built the SSR bundle:

npm run build

Next, check your config/inertia.php. The path there must match exactly where Vite is outputting your ssr.mjs (or .js) file:

'ssr' => [
    'enabled' => true,
    'bundle' => base_path('bootstrap/ssr/ssr.mjs'), 
],

If the file exists but the extension is different (e.g., ssr.js), update the config to match. Once the file is physically present at that path, the artisan command will work.

imranbru's avatar

imranbru was awarded Best Answer+1000 XP

1w ago

This is a classic gotcha with Blaze. Your footer-admin works fine because it's likely just static HTML, but your sidebar is tripping up the compiler.

Blaze is primarily designed as a drop-in replacement for anonymous, stateless components, and sidebars almost always break that rule: they typically depend on global state (like the authenticated user or the active route) or on backing component classes.

Never pass your root views/components directory into Blaze. You only want to optimize "dumb", presentational UI components (buttons, badges, icons, cards, footers).

Update your AppServiceProvider to specifically target directories that don't rely on global state or backing classes:

public function boot(): void
{
    // Target specific UI folders instead of everything
    Blaze::optimize()
        ->in(resource_path('views/components/ui'))
        ->in(resource_path('views/components/icons'))
        ->in(resource_path('views/components/footers'));
}

Exclude complex structural components like navbars and sidebars from Blaze entirely and you'll be good to go.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

Looks like your daily npm update bumped Webpack to a version with stricter schema validation, and now webpackbar is choking because it's passing options that are no longer supported.

The absolute fastest way to unblock your production build is to simply disable the progress bar in your webpack.mix.js file:

mix.options({
    progress: false
});

If you really want the progress bar back, nuke your node_modules directory and package-lock.json file, then run a fresh npm install. Running npm update daily can sometimes leave you with a mangled sub-dependency tree between webpack and webpackbar.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

The reason you're getting untraceable errors is that running everything through a single master job creates a massive single point of failure. If one sub-task throws a silent exception, exceeds a memory limit, or times out, it kills the whole execution cycle and muddies your stack trace.

If you need to roll your own for specific sysadmin requirements, you need to change your architecture. Instead of one master job triggering an Artisan command, query your DB in your routes/console.php (or Console/Kernel.php depending on your version) and schedule isolated queued jobs dynamically.

use App\Jobs\ExecuteSysAdminTask;
use App\Models\ScheduledTask;
use Illuminate\Support\Facades\Schedule;

$tasks = ScheduledTask::where('is_active', true)->get();

foreach ($tasks as $task) {
    Schedule::job(new ExecuteSysAdminTask($task))
             ->cron($task->cron_expression)
             ->onSuccess(function () use ($task) {
                 // Log success to your custom DB table
                 $task->logs()->create(['status' => 'success']);
             })
             ->onFailure(function () use ($task) {
                 // Log failure to DB
                 $task->logs()->create(['status' => 'failed']);
             });
}

Dispatch them onto a queue worker (like Redis + Horizon). By pushing each DB task to its own isolated job on the queue, failures are completely contained. Task A failing won't stop Task B, and your queue worker will give you the exact stack trace, memory usage, and retry capabilities for each individual task.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

This is classic behavior of PHP loading the compiled files from memory instead of reading your disk changes. Since your error page preview shows the updated code but the execution is using the old logic, it essentially guarantees this is an in-memory caching issue.

First suspect: OPcache is being overly aggressive. Run php --ini in your terminal to find out which config file your CLI is using. Open it up and look for your OPcache block. For a local dev environment, OPcache should ideally be disabled, or at the very least configured to check for file changes instantly.

Make sure your settings look like this:

opcache.enable=1
opcache.enable_cli=1
opcache.validate_timestamps=1 ; This is the crucial one. If set to 0, it never checks for updates.
opcache.revalidate_freq=0     ; Forces it to check on every single request.

Note: Alternatively, just set opcache.enable=0 and opcache.enable_cli=0 for local dev to bypass it entirely.

Second suspect: you're running Laravel Octane. If you are serving the application using Laravel Octane (FrankenPHP, Swoole, or RoadRunner), the entire framework is booted into RAM once. Changes to controllers, middleware, or service providers will never be reflected until the server restarts.

If this is your stack, you must start the server with the watch flag so it auto-reloads on file saves:

php artisan octane:start --watch

Check that php.ini first. 99% of the time, validate_timestamps got toggled off or you installed a pre-configured PHP package on CachyOS that optimized it for production by default.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

Good catch. This looks like a scaffolding bug in the starter kit where it missed pulling the TooltipProvider from the upstream shadcn/ui sidebar component.

Your fix is spot on. For anyone else finding this thread, just manually patch your resources/js/components/ui/sidebar.tsx:

Ensure the imports are there at the top:

import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from "@/components/ui/tooltip"

Wrap the main div inside your SidebarContext.Provider:

return (
    <SidebarContext.Provider value={contextValue}>
      <TooltipProvider delayDuration={0}>
        <div
          data-slot="sidebar-wrapper"
          style={
            {
              "--sidebar-width": SIDEBAR_WIDTH,
              "--sidebar-width-icon": SIDEBAR_WIDTH_ICON,
              ...style,
            } as React.CSSProperties
          }
          className={cn(
            "group/sidebar-wrapper has-data-[variant=inset]:bg-sidebar flex min-h-svh w-full",
            className
          )}
          {...props}
        >
          {children}
        </div>
      </TooltipProvider>
    </SidebarContext.Provider>
  )
}

If you have a spare few minutes, it's worth opening a quick PR or issue on the Laravel Breeze/Jetstream GitHub repo (depending on which you used to scaffold) so the team can patch the stubs for everyone else.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

Hi Graciela. With 10 years of experience dealing with this type of system, here's my direct advice: go with the WhatsApp API. Patients tend to block or ignore web push notifications, and mobile push requires them to download an app. WhatsApp has an open rate of almost 98%.

If you're working with Laravel, this is the exact architecture I use to solve this:

Use Twilio or Meta's official Cloud API. Both integrate excellently with Laravel Notification channels.

Create a command (php artisan make:command SendAppointmentReminders) and schedule it in your Console/Kernel.php to run every morning:

$schedule->command('appointments:remind')->dailyAt('08:00');

So that the user can confirm without having to log in, Laravel has a perfect feature called Signed URLs. In your command, when you iterate over tomorrow's appointments, generate a unique, temporary link:

$confirmUrl = URL::temporarySignedRoute(
    'appointments.confirm', 
    now()->addDays(2), // expires in 2 days
    ['appointment' => $appointment->id]
);

Create the notification (php artisan make:notification AppointmentReminder) and pass it the URL. The message you send to WhatsApp would be something like: "Hi Juan, remember your appointment tomorrow at 10:00am. To confirm your attendance, click here: $confirmUrl"

The user clicks in WhatsApp and the browser opens. In your controller, first protect the route by validating the signature with Laravel's middleware, then update the status:

public function confirm(Request $request, Appointment $appointment)
{
    if (! $request->hasValidSignature()) {
        abort(401, 'This link has expired or is not valid.');
    }

    $appointment->update(['status' => 'confirmed']);

    return view('appointments.success-message'); // "Thanks for confirming!"
}

It's a super solid flow, secure (thanks to the signed route), and exactly the friction-free experience patients need these days. Good luck with the implementation!

imranbru's avatar

imranbru liked a comment+100 XP

1w ago

Okay, its no big deal, but it just affects my flow.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

Hey Roger, you're not reinventing the wheel at all. Since you're doing a standard HTTP redirect instead of a Livewire request, flashing to the session and picking it up with JS on the next page load is exactly the right approach.

I just want to strongly echo LaryAI's note on using @js() instead of {{ }} in your script. Definitely make that switch: it automatically JSON-encodes the string and prevents potential XSS vulnerabilities if your toast messages ever end up including unescaped user input down the line.
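For anyone skimming, the difference looks roughly like this (a sketch; session('toast') is an assumed flash key, not from the original snippet):

```blade
<script>
    // Unsafe if the message ever contains user input:
    // const message = '{{ session('toast') }}';

    // Safe: @js() JSON-encodes the value for JavaScript
    const message = @js(session('toast'));
</script>
```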

Extracting your snippet into a simple <x-toast-handler /> blade component and dropping it at the bottom of your main layout is exactly how I handle this in production. Good stuff!

imranbru's avatar

imranbru liked a comment+100 XP

1w ago

Thanks for the reply ☺️.

Actually, this was the issue: the column was silently truncating the data, and only the allowed number of bytes was stored in the table.

Yeah, I have already fixed it: I moved the constructor logic out, created an authenticate method, and I'm simply using that.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

You're most welcome!

imranbru's avatar

imranbru liked a comment+100 XP

1w ago

Thanks for your explanation, I really appreciate it. It was very clear to me; I think it will work fine, and it will help others fix their issues too. Next time I will try everything you said and reply with my feedback.

Thank you again.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

They actually are still wrapped, but assuming you're using a modern starter kit like Breeze or Jetstream, they default to using the <x-guest-layout> component instead of your primary app layout. It's designed that way so your login and register screens get a clean slate (usually a centered card) without your main authenticated navigation and sidebars bleeding in.

If you want them to share your global layout, just pop open your auth views (like resources/views/auth/login.blade.php) and swap out <x-guest-layout> for <x-app-layout>.
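For example (a sketch assuming the default Breeze view structure):

```blade
{{-- resources/views/auth/login.blade.php --}}
<x-app-layout>
    {{-- keep the existing form markup; only the wrapper component changes --}}
    ...
</x-app-layout>
```

Just be aware that x-app-layout usually assumes an authenticated user for its navigation, so you may need to guard those bits with @auth.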

imranbru's avatar

imranbru was awarded Best Answer+1000 XP

1w ago

I know you already switched to dompdf, but if you ever need modern CSS (flexbox/grid) back, here is the actual solution. The AI completely missed the root cause.

The specific "not a snap cgroup" error happens because recent Ubuntu versions on Forge install Chromium as a Snap package. Snap's strict AppArmor/cgroup confinement blocks it from being executed by daemon services like php-fpm. Running config:cache likely just triggered a worker/FPM restart that temporarily masked the issue before the cgroup restrictions clamped down again.

To fix it permanently, you need to ditch the Snap version entirely and install the native Google Chrome binary.

SSH into Forge and nuke the snap version:

sudo snap remove chromium

Install the native Google Chrome package:

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
sudo apt-get update
sudo apt-get install -y google-chrome-stable

Update your PDF generation:

return Pdf::view('PDF.invoice', compact('invoice'))
    ->format('a4')
    ->landscape()
    ->name("{$invoice->invoice_no}-{$invoice->created_at->format('Y-m-d')}.pdf")
    ->withBrowsershot(function ($browsershot) {
        $browsershot->setChromePath('/usr/bin/google-chrome-stable')
                    ->addChromiumArguments(['--no-sandbox', '--disable-setuid-sandbox']);
    })
    ->download();

Dompdf is fine for basic tables, but applying this fix takes 2 minutes and gets Spatie/Browsershot running flawlessly on Forge forever.

imranbru's avatar

imranbru wrote a reply+100 XP

1w ago

You need to override the resolveRecord() method in your Filament Importer class. Since you're dealing with two tables, handle the User creation first, extract the resulting ID, and then return the associated Teacher instance.

Using firstOrCreate for the user and firstOrNew for the teacher perfectly handles your "not existing records" requirement without throwing duplication errors.

use App\Models\User;
use App\Models\Teacher;
use Illuminate\Support\Str;
use Illuminate\Database\Eloquent\Model;


protected function resolveRecord(): ?Model
{
    $user = User::firstOrCreate(
        ['email' => $this->data['email']],
        [
            'name' => $this->data['name'],
            'password' => bcrypt($this->data['password'] ?? Str::random(12)),
        ]
    );

    return Teacher::firstOrNew(
        ['employee_number' => $this->data['employee_number']],
        [
            'user_id' => $user->id,
            'slug' => $this->data['slug'] ?? Str::slug($this->data['name']),
            'hire_date' => $this->data['hire_date'],
        ]
    );
}

Just ensure your importer's getColumns() method includes the definitions for name, email, and password, even though they belong to the users table. Filament will load them into $this->data so you can intercept them here.

imranbru's avatar

imranbru liked a comment+100 XP

1w ago

Hello, can someone help me with the Filament importer? I want to import records that don't already exist. For example, I have teachers and users tables.

users columns -> name, email, password

teachers columns -> employee_number, slug, hire_date

imranbru's avatar

imranbru liked a comment+100 XP

2w ago

I created a self-referencing categories table: subcategories store their parent's id in the same table, which results in a high degree of nesting. A product belongs to a child category, and using a recursive method on the model I retrieve all the nested relationships. I need to get the last relationship in the chain, but I'm getting null. Could you tell me how to do this?

Here's the code.

/** class Product extends Model **/
public function category() {
      return $this->belongsTo(Category::class, 'category_id');
  }

/** class Category extends Model **/
public function parentCategories()
    {
        return $this->belongsTo(Category::class, 'category_id')->with('parentCategories');
    }
/** Controller **/

$product = Product::with('category.parentCategories')->find(2);
$parent_cat = $product->category->parent_categories;
dd($parent_cat); // null

imranbru's avatar

imranbru wrote a reply+100 XP

2w ago

A couple of things to check here. $product->category->parent_categories is likely returning null because either that specific category has no parent, or you're running into a casing issue (try $product->category->parentCategories). Also, keep in mind belongsTo returns a single model, not a Collection.

Since you are already eager-loading the entire chain using ->with('parentCategories'), you can traverse the loaded models in memory to find the root recursively without running N+1 database queries.

Add this accessor to your Category model:

public function getRootCategoryAttribute()
{
    return $this->parentCategories ? $this->parentCategories->root_category : $this;
}

Then in your controller:

$product = Product::with('category.parentCategories')->find(2);

$rootCategory = $product->category->root_category; 

Keep in mind: while a recursive with() works for shallow relationships, it fires a new database query for every level of depth and will quickly chew up memory. If your nesting is truly deep, I highly recommend recursive CTEs. Drop in the staudenmeir/laravel-adjacency-list package; it compiles tree traversals into a single, highly optimized SQL query and is pretty much the standard for deep nesting in Laravel.
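To illustrate why walking already-loaded parents costs nothing extra, here is a framework-free sketch of the same traversal (plain PHP, with a hypothetical PlainCategory class standing in for an eager-loaded chain):

```php
<?php
// Minimal stand-in for an eager-loaded Category chain (no Eloquent involved).
class PlainCategory
{
    public function __construct(
        public string $name,
        public ?PlainCategory $parent = null, // already-loaded parent, or null at the root
    ) {}

    // Walk the loaded parents in memory until the root; mirrors the accessor above.
    public function root(): PlainCategory
    {
        return $this->parent?->root() ?? $this;
    }
}

$root = new PlainCategory('Electronics');
$mid  = new PlainCategory('Phones', $root);
$leaf = new PlainCategory('Smartphones', $mid);

echo $leaf->root()->name; // Electronics
```

Each call just follows object references already in memory, which is exactly what the eager-loaded accessor does: no extra queries are issued.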

imranbru's avatar

imranbru wrote a reply+100 XP

2w ago

If your Vue app is a separate SPA that talks to Laravel purely over an API, the upgrade has essentially zero impact on the frontend. Just run through the Laravel upgrade guide for the backend; as long as your API endpoints and JSON responses keep the same shape, your Vue app won't even know the backend was upgraded. The API boundary gives you total isolation.

If Vue lives inside the same repository (an Inertia-style monolith), upgrading is still mostly isolated: when upgrading to Laravel 13, you'll update your composer.json and fix any PHP-side breaking changes. Occasionally, a major Laravel release includes updates to the default scaffolding, meaning you might need to bump an NPM package (like @inertiajs/vue3 or laravel-vite-plugin), but your actual .vue components and frontend logic almost never need to be touched.

imranbru's avatar

imranbru wrote a reply+100 XP

2w ago

I know you already switched to dompdf, but if you ever need modern CSS (flexbox/grid) back, here is the actual solution. The AI completely missed the root cause.

The specific error (`not a snap cgroup`) happens because recent Ubuntu versions on Forge install Chromium as a Snap package. Snap's strict AppArmor/cgroup confinement blocks it from being executed by daemon processes like php-fpm. Running `config:cache` likely just triggered a worker/FPM restart that temporarily masked the issue before the cgroup restrictions clamped down again.

To fix it permanently, you need to ditch the Snap version entirely and install the native Google Chrome binary.

SSH into Forge and nuke the snap version:

sudo snap remove chromium

Install the native Google Chrome package (recent Ubuntu releases have removed apt-key, so register the signing key as a keyring file instead):

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo gpg --dearmor -o /usr/share/keyrings/google-chrome.gpg
sudo sh -c 'echo "deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.gpg] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list'
sudo apt-get update
sudo apt-get install -y google-chrome-stable

Update your PDF generation:

return Pdf::view('PDF.invoice', compact('invoice'))
    ->format('a4')
    ->landscape()
    ->name("{$invoice->invoice_no}-{$invoice->created_at->format('Y-m-d')}.pdf")
    ->withBrowsershot(function ($browsershot) {
        $browsershot->setChromePath('/usr/bin/google-chrome-stable')
                    ->addChromiumArguments(['--no-sandbox', '--disable-setuid-sandbox']);
    })
    ->download();

Dompdf is fine for basic tables, but applying this fix takes 2 minutes and gets Spatie/Browsershot running flawlessly on Forge forever.

imranbru's avatar

imranbru liked a comment+100 XP

2w ago

Thanks for your reply! I'll try it out later when I get back, but it looks like that will get things sorted with Breeze/Vue.

The starter kit is another issue altogether. I posted those errors in a message above. I'd probably just stick with breeze, but I also want to keep in step with what Laravel is suggesting.

The new starter kit seems more involved for several reasons and I'd rather not get into shadcn. If necessary, I may have to upgrade the breeze to use tailwind 4 and stay with that.

Linux Mint has been excellent so far! But I'm still learning my way, so sometimes even simple things can seem confusing. :)

Thanks again for your help!

imranbru's avatar

imranbru was awarded Best Answer+1000 XP

2w ago

The reason your nested inclusion items.itemable.reference is failing is that the JSON:API manager needs the explicit resource class to traverse the inclusion tree. Dynamic string concatenation often prevents the manager from correctly mapping the subsequent reference relationship during the eager-loading phase.

The cleanest way to handle this is using a match expression or a mapping array to return the class constants directly. This ensures the inclusion context is preserved.

In your ItemResource, define the relationship like this:

public function toRelationships(Request $request): array
{
    return [
        'itemable' => match ($this->itemable_type) {
            'consumable' => ConsumableResource::class,
            'serialized' => SerializedResource::class,
            default => null,
        },
    ];
}

By returning the ::class constant, Laravel's JSON:API engine can immediately identify the target resource. It then looks at the toRelationships method of the resolved resource (e.g., ConsumableResource) to find the reference definition.

If you have many polymorphic types, you can refactor this into a protected property or a dedicated resolver method to keep it tidy:

protected array $itemableMap = [
    'consumable' => ConsumableResource::class,
    'serialized' => SerializedResource::class,
];

public function toRelationships(Request $request): array
{
    return [
        'itemable' => $this->itemableMap[$this->itemable_type] ?? null,
    ];
}

This approach is much more robust than string manipulation and fully supports nested includes like ?include=items.itemable.reference.
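Since the mapping is just data, it is also easy to unit-test in isolation. A plain-PHP sketch (the class names are stand-ins for the resources above):

```php
<?php
// Stand-ins for the JSON:API resource classes discussed above.
class ConsumableResource {}
class SerializedResource {}

// Resolve a morph-type string to its resource class, or null when unknown.
function resolveItemableResource(string $type): ?string
{
    return match ($type) {
        'consumable' => ConsumableResource::class,
        'serialized' => SerializedResource::class,
        default => null,
    };
}

echo resolveItemableResource('consumable'); // ConsumableResource
```

Returning null for unknown types (rather than letting match throw an UnhandledMatchError) keeps the relationship silently absent instead of breaking the whole response.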

imranbru's avatar

imranbru liked a comment+100 XP

2w ago

I’m currently learning Domain-Driven Design (DDD) and trying to properly structure the Application Layer in a Laravel-based project.

I keep seeing different terms used, sometimes interchangeably, such as:

  • Use Case
  • Action
  • Application Service

From my understanding, they all seem to represent “application logic” that orchestrates domain behavior, but I’m struggling to clearly distinguish their responsibilities and when to use each one.

For example, in a typical authentication flow (like login or register):

  • Should I create a LoginUseCase, LoginAction, or LoginService?

  • Are these just naming conventions, or do they have real architectural differences?

  • Is it correct that:

    • A Use Case represents a single business interaction?
    • An Application Service coordinates domain objects and infrastructure?
    • An Action is just a Laravel-style way of structuring a single task?

Also, how do these concepts relate in practice:

  • Should a Use Case call an Application Service?
  • Or is an Application Service itself the Use Case?
  • Where do DTOs and domain services fit in this structure?

I’m looking for a clear explanation with practical examples (preferably in Laravel or PHP), especially from people who have applied DDD in real projects.

Thanks in advance!

imranbru's avatar

imranbru wrote a reply+100 XP

2w ago

Welcome to the Linux side! I made the jump to Mint years ago and never looked back. You'll quickly notice that file I/O is drastically faster natively than on Windows/WSL2, which makes composer and npm run like a dream.

Regarding DDEV, it's a fantastic tool, but Vite inside Docker always trips people up with HMR (Hot Module Replacement) websocket routing.

mkdir my-app && cd my-app
ddev config --project-type=laravel --docroot=public
ddev start

# Create the Laravel project inside the container in the current directory
ddev composer create laravel/laravel .

# Install Breeze
ddev composer require laravel/breeze --dev
ddev exec php artisan breeze:install blade

# Install node modules
ddev npm install

To fix your Vite issues: The container needs to expose the Vite dev server to your host, and the browser needs to know how to route the websocket connection back through DDEV. Update your vite.config.js to look like this:

import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';

export default defineConfig({
    plugins: [
        laravel({
            input: ['resources/css/app.css', 'resources/js/app.js'],
            refresh: true,
        }),
    ],
    server: {
        host: '0.0.0.0',
        port: 5173,
        strictPort: true,
        hmr: {
            // This routes HMR traffic perfectly through DDEV's router
            host: process.env.DDEV_HOSTNAME,
            protocol: 'wss'
        }
    }
});

Now just run ddev npm run dev. The wss protocol and DDEV_HOSTNAME will route the hot-reloads perfectly through DDEV's HTTPS proxy.

imranbru's avatar

imranbru wrote a reply+100 XP

2w ago

I'm really interested in this goal of yours, and I hope you start building it. You're one of the few who freely helps us make this the best forum.