simotion wrote a comment (+100 XP)

1w ago

Nice series!

You forgot cursor-pointer on the buttons ;)

simotion wrote a reply (+100 XP)

2mos ago

You're running the php artisan command in the Nginx container instead of the PHP container; that's why you're getting the error message.

simotion wrote a reply (+100 XP)

2mos ago

Bit rusty on Docker, but I'll do my best to help.

Have you verified that the public/storage symlink exists inside the Nginx container, not just the PHP container?

Since you're using separate Nginx and PHP-FPM containers, php artisan storage:link runs in the PHP container — but Nginx is the one serving static files.

In your Nginx Dockerfile you only copy:

COPY --from=builder /var/www/public /var/www/public

That means:

  • The storage directory is not copied
  • The symlink created in the PHP container is not present
  • There is no shared volume between containers

So when the browser requests /storage/..., Nginx simply doesn’t have that path — hence the 404.

You have a few options:

  • Share the entire project directory as a Docker volume between Nginx and PHP
  • Build both containers from the same source layer so storage and the symlink exist in both
  • If these are static assets that ship with your app (like quote.svg), consider placing them directly in /public/img instead of storage

Right now, Nginx has no access to the storage directory, which is almost certainly the issue.
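A minimal sketch of the shared-volume option, assuming a docker-compose setup (service names, image, and paths are illustrative, not taken from your files):

```yaml
services:
  php:
    build: .
    volumes:
      - app-code:/var/www   # PHP-FPM writes here; storage:link creates public/storage
  nginx:
    image: nginx:alpine
    volumes:
      - app-code:/var/www   # Nginx serves the very same files, symlink included
    ports:
      - "80:80"

volumes:
  app-code:
```

With a shared named volume like this, the public/storage symlink created by php artisan storage:link resolves inside both containers, so Nginx can serve /storage/... directly.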

simotion wrote a reply (+100 XP)

2mos ago

I'm really curious about that function now!

simotion liked a comment (+100 XP)

2mos ago

I think an important reason could be that a Laravel application is, in theory, supposed to be database independent.

simotion wrote a reply (+100 XP)

2mos ago

Not exactly a black-and-white answer to that. A week could be time well spent fixing technical debt before the interest adds up.

Does the pending work require you to interact with that code? For example, does the code you have now contain service classes that you need to use in your pending work?

If so, take time to refactor, because delaying it will increase your technical debt.

If it's very loosely coupled, works, and doesn't impact your pending work, I would focus on finishing the MVP and refactor later on.

No code is ever perfect, and perfection is the enemy of good. Striving for perfection will delay your product; many projects have never shipped because of it. Trust me, we've all been there.

Get your MVP ready. Once there's cash flow, start refactoring.

My strategy while coding is:

  • Abstract where needed
    • Form validation always goes in FormRequest classes
    • Business logic goes in service classes immediately
    • Keep controllers (c)lean
    • Build a controller for every endpoint instead of using closures inside your routes file
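As a sketch of that strategy in Laravel (class, route, and method names are hypothetical):

```php
// app/Http/Requests/StoreOrderRequest.php — validation lives here, not in the controller
class StoreOrderRequest extends FormRequest
{
    public function rules(): array
    {
        return ['product_id' => ['required', 'exists:products,id']];
    }
}

// app/Services/OrderService.php — business logic lives here
class OrderService
{
    public function place(array $data): Order
    {
        return Order::create($data);
    }
}

// app/Http/Controllers/OrderController.php — the controller stays (c)lean
class OrderController extends Controller
{
    public function store(StoreOrderRequest $request, OrderService $orders)
    {
        $order = $orders->place($request->validated());

        return redirect()->route('orders.show', $order);
    }
}
```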

Even if the service classes or form request classes get messy, it's contained and you can optimize later on. Even if a model becomes a god class, it's contained to that model.

Just make sure you don't cram everything from validation to business logic inside the controller(s), because that gets messy real quick.

If a service class gets messy, it's easy to optimize later on.

Oh, and build tests from the get-go. Your future self will thank you for it.

simotion started a new conversation (+100 XP)

2mos ago

There’s a lot of information out there about building a SaaS, but guidance on managing multiple SaaS products in a centralized and scalable way seems scarce—or maybe I’m just not using the right search terms.

In preparation for the public launch of my first SaaS application, I'm building a central management platform which I'm naming 'MCP' (a slight nod to one of my all-time favorite movies, TRON) to manage deployments, accounts, subscriptions, and billing in a centralized and scalable way. Right now, I'm focusing on the architecture of the communication between the MCP and the deployments of my application(s).

Here’s a high-level overview of how I envisioned it to work:

  • Application servers only know which features are available and which limits apply per account.

  • Subscription plans, pricing, addons, and billing are managed centrally in the MCP.

  • When an account upgrades or purchases a subscription/addon, the MCP sends the new set of features and limits to the relevant deployment via signed API requests (HMAC/JWT).

  • The deployment server temporarily caches the payload and sends a SYN-ACK-like message back to confirm receipt. Only after receiving an ACK from the MCP are the new features applied to the account. The MCP only sends an ACK if it initiated the request.

  • FQDN-based communication:

    • Deployments know the MCP’s FQDN and direct all requests/responses there.

    • The MCP knows the main FQDN of each deployment; responses are always sent to this known FQDN.

    • Requests arriving via other FQDNs or IPs (e.g., through a CNAME) are accepted, but the sender in the request header is never used for the response.

    • Authenticity is ensured via the SYN-ACK/ACK handshake between verified URLs.

  • Server-to-server only: All communication between MCP and deployments happens directly between the backends. No client-side code ever communicates with the MCP.

The idea is that, even if a key or secret is intercepted, an unauthorized party cannot simply grant an account full access to all features.
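A minimal sketch of the HMAC part of that scheme in plain PHP, assuming one shared secret per deployment (the payload shape and header name are illustrative):

```php
// MCP side: sign the feature payload before sending it to a deployment
$payload   = json_encode(['account' => 42, 'features' => ['api', 'exports']]);
$signature = hash_hmac('sha256', $payload, $sharedSecret);
// send $payload with e.g. an X-Signature header set to $signature

// Deployment side: recompute and compare in constant time before caching anything
$expected = hash_hmac('sha256', $receivedPayload, $sharedSecret);
if (! hash_equals($expected, $receivedSignature)) {
    abort(401); // reject; never apply features from an unverified payload
}
```

hash_equals() matters here: a plain `===` comparison can leak timing information about how many leading signature bytes matched.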

💡 My questions for you:

  • How do you manage multiple SaaS products and deployments? Have you centralized it, and if so, what strategy did you use?

  • What strategies do you use to secure server-to-server communication between deployments?

  • Are there best practices for additional verification, key management, or secure feature provisioning that I might be missing in my approach?

simotion liked a comment (+100 XP)

3mos ago

Great video

simotion wrote a reply (+100 XP)

4mos ago

I think I may have given the wrong impression by referring to the new model as “history” in my earlier posts.

It’s called ProductPrice rather than PricingLog or PricingHistory for a reason. It isn’t merely a log; it’s domain data. Instead of updating a single row in place, price changes are stored as new entries. This keeps historic prices in the database primarily for auditing purposes, but decoupling prices from the products table also enables future use cases the original architecture didn’t support.

For example, the model could support temporary prices (e.g., Black Friday deals) via a nullable valid_until column.

The part I’m struggling with is whether the current price should be derived from this model (making it the conceptual source of truth), or whether it’s acceptable to keep a denormalized current value on the product purely for performance reasons.

Keeping denormalized columns stores current prices in two places — on Product and on ProductPrice. It adds complexity to the codebase and isn’t as explicit as keeping prices solely in the prices table, in exchange for faster reads in product-heavy parts of the application.

At what point would you say the added complexity isn’t worth the performance gain? For instance, with temporary prices, a denormalized approach would require a scheduled task to detect expired prices and update the product table back to the last valid price, whereas deriving the current price from the ProductPrice model wouldn’t. I’ve also considered caching, but that feels like pushing domain logic into cache management.

Curious how others would draw that line.

simotion liked a comment (+100 XP)

4mos ago

Standards

simotion wrote a reply (+100 XP)

4mos ago

Yes, I would advise making a habit of it. It makes your code robust and self-documenting, and it makes your life easier during debugging and coding (autocomplete).

No need for docblocks if you have good attribute names, type hints, and typed return values. It helps keep your code easy to read.

simotion liked a comment (+100 XP)

4mos ago

The more specific and explicit you are the easier your life is. So, use types everywhere you can.

simotion wrote a reply (+100 XP)

4mos ago

To add some more context, this is how I currently persist pricing history from the Product model.

On creation I store the initial prices:

protected static function booted()
{
    static::created(function (Product $product) {
        ProductPrijs::add($product, PrijsType::INKOOP, $product->prijs_inkoop);
        ProductPrijs::add($product, PrijsType::VERKOOP, $product->prijs_verkoop);
        ProductPrijs::add($product, PrijsType::ADVIES, $product->prijs_advies);
    });
}

And when a price is updated, I append a new ProductPrijs record only if the value actually changed:

public function setPrijsVerkoopAttribute($value)
{
    $oldPrice = $this->getOriginal('prijs_verkoop');
    $newPrice = Price::fromFloat($value);

    $this->attributes['prijs_verkoop'] = $newPrice->getValue();

    if ($this->exists && $oldPrice && ! $oldPrice->equals($newPrice)) {
        ProductPrijs::add($this, PrijsType::VERKOOP, $newPrice);
    }
}

…and similar mutators for the other two price types.

This keeps the history append-only and avoids unnecessary writes.

I’m aware this introduces side effects (using a mutator), but this was a conscious trade-off for now. The main thing I’m trying to balance is keeping product reads fast (no joins or global scopes on heavy product listings) while still having a complete and reliable pricing history.

Curious how others would structure this — especially around where this responsibility should live (mutator vs observer vs service), and whether you’d keep the denormalized price columns in this scenario.
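For comparison, the same append-only write could live in an observer instead of a mutator, which keeps the model free of side effects (a sketch reusing the Dutch names from the snippet above; the observer still needs to be registered on the Product model):

```php
class ProductObserver
{
    public function updated(Product $product): void
    {
        // wasChanged() only fires the append when the attribute actually changed,
        // mirroring the equals() guard in the mutator version
        if ($product->wasChanged('prijs_verkoop')) {
            ProductPrijs::add(
                $product,
                PrijsType::VERKOOP,
                Price::fromFloat($product->prijs_verkoop)
            );
        }
    }
}
```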

simotion started a new conversation (+100 XP)

4mos ago

I originally had three price columns on my products table: price_purchase, price_sell and price_advice.

To support pricing history, I introduced a ProductPrice model. The product_prices table contains:

  • id
  • product_id
  • price_type (purchase / sell / advice)
  • price_value
  • publisher_id
  • publisher_name (cached for when a user gets deleted)
  • timestamps

Whenever a product is created or updated, I check whether a price changed and, if so, I insert a new ProductPrice record. This part works well and gives me full pricing history per product.

The dilemma I’m facing now is where the single source of truth should live.

If I fully rely on the ProductPrice model for “current” prices, I need to load relational data for every product. Eager loading all prices caused a significant memory increase on product index and inventory management pages. Even when limiting this to only the most recent price per type (using scopes or subqueries), it quickly adds up to three extra lookups per product.

With a growing user base (±50 concurrent users) and fairly heavy product listings and inventory views, I’m worried about the long-term performance impact of always deriving the current price from the new table.

The alternative is to keep the original price columns on the products table and treat them as denormalized “current price” fields, while using product_prices purely for historical insight (charts, audits, etc.). This would avoid joins and additional queries entirely for most product-related logic.

However, this feels conceptually “wrong” to me, since the ProductPrice model then isn’t the single source of truth anymore.

I’m curious how others would approach this:

  • Would you denormalize and accept the duplication for performance?
  • Use subqueries or views to project current prices?
  • Or structure this differently altogether?

How would you optimize this without sacrificing performance?
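For the subquery option, Laravel supports selecting a correlated subquery per column, which keeps the read to a single query (a sketch; column and type names follow the schema above, and you would repeat the addSelect entry per price type):

```php
use App\Models\Product;
use App\Models\ProductPrice;

$products = Product::addSelect([
    'current_sell_price' => ProductPrice::select('price_value')
        ->whereColumn('product_id', 'products.id')
        ->where('price_type', 'sell')
        ->latest()   // most recent entry wins
        ->limit(1),
])->get();
```

This avoids eager loading full price collections, though it still adds one correlated lookup per price type per row, so it is worth benchmarking against the denormalized columns on your heaviest listing pages.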

simotion liked a comment (+100 XP)

4mos ago

Here's the source code for this course!

https://github.com/laracasts/laravel-workshop

simotion wrote a comment (+100 XP)

5mos ago

I get what you're saying, Jeffrey.

To me, this aligns closely with an entrepreneur mindset. You approach a job like a project you truly want: you respond to opportunities that excite you, are motivated by the challenge, and tackle it creatively — willing to go all in, even at the risk of wasted effort.

For many people, the perspective is different. A job isn’t necessarily something they seek for challenge or excitement — it’s a necessity. They see themselves more as recipients of employment than as assets a business can invest in. They view a job opening as a spot to be filled, and it might as well be them — or not. That’s why some apply to 100 different positions: they aren’t looking for the role that excites them, or the workplace that fits their character, ambition, or values — they’re just looking for a job, often motivated mainly by salary.

This mindset is reflected in how people approach applications. How often do applicants ask about anything beyond salary or leave? How often do they inquire about culture, collaboration, or whether they can question a manager’s decision without being seen as insubordinate? Sometimes, people just want the job, and once they have it, they behave as expected — not necessarily as themselves.

This might explain why many job seekers focus on quantity over quality when applying.

And I do get that in some cases — if someone doesn’t have enough savings to comfortably live for six months to a year without being on a payroll, getting 'a job' may understandably take priority over getting 'the job'.

simotion wrote a comment (+100 XP)

5mos ago

@hardflip

My take on this, from experience:

I became self-employed in 2024. My very first software project was against a well-known tech giant. I had no previous clients, no professional software experience — so on paper, I didn’t stand a chance. I didn’t even have a website for my business yet (and still don't, since the project takes up all my time at the moment).

Still, I went all in.

On a Friday, I met the business owner, toured the company, and saw how their current (end-of-life) system worked. He allowed me to take screenshots. On the way home, I called the vendor to confirm the end-of-support date — and even negotiated a possible six-month extension.

That evening, I mapped their core processes and spent the weekend preparing a short presentation with mockups. I proposed a replacement for their old tool and introduced three new ideas to improve workflow and reduce Excel overhead. That weekend was eat, sleep, work, repeat — up at the keyboard, eating whatever my wife put in front of me, and doing it again the next day.

By Monday, I sent them a report containing a brief process analysis, a SWOT report (including the vendor extension as risk mitigation), and a side-by-side comparison of five of the current system's most used screens and my mockups.

Within 24 hours, they requested a meeting.

They were impressed by the amount of work I'd put into it over the weekend, but were concerned about me being a “one-man band.” I told them, “You’re currently working with a big company leaving you stranded. If I ever go out of business, I’ll open-source the code and help transfer server support, so you’ll never lose access.” That sealed their trust.

A week later, after submitting a 30-page proposal, I had already invested over 100 unpaid hours — but the next day, I got the call: I’d won the project.

Beating a tech giant on my first pitch wasn’t just luck. It was effort, willingness to put in hours, and tailoring to the client — exactly what Jeffrey’s talking about.

While the competition talked about themselves, I focused on the client, understanding their needs and demonstrating my approach in real time.

I know this wasn’t a traditional job interview, but is it really that different?

It works for employment too. Once, I applied for a network administrator role that required Cisco certification. I knew I had sufficient networking skills, but I didn't have the certification. I emailed a motivation letter and called them immediately afterwards. They invited me for a meeting, where I showed them I had already started studying for the certification and could answer technical questions confidently. They did ask why I'd already started the certification without knowing whether I had the job, and told me it would have been possible to get certified at the company's expense. I told them I viewed it as an investment in my own skill set and value on the job market. Long story short: I got the job. Because of that.

simotion wrote a reply (+100 XP)

5mos ago

I’ve done this through middleware.

See the official documentation:

https://spatie.be/docs/laravel-permission/v6/basic-usage/teams-permissions
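For reference, the middleware essentially boils down to setting the active team before any permission checks run, via the documented setPermissionsTeamId() helper (a sketch; the current_team_id attribute is an assumption about how you store the user's active team):

```php
class TeamsPermission
{
    public function handle($request, \Closure $next)
    {
        if ($user = $request->user()) {
            // tell spatie/laravel-permission which team's roles apply to this request
            setPermissionsTeamId($user->current_team_id);
        }

        return $next($request);
    }
}
```

Register it before the role/permission middleware so the team context is in place when permissions are resolved.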