Member Since 5 Years Ago

Whitley Bay

Experience Points

3,760 experience to go until the next level!

In case you were wondering, you earn Laracasts experience when you:

  • Complete a lesson — 100pts
  • Create a forum thread — 50pts
  • Reply to a thread — 10pts
  • Leave a reply that is liked — 50pts
  • Receive a "Best Reply" award — 500pts
Lessons Completed
Best Reply Awards
  • Start Your Engines

    Earned once you have completed your first Laracasts lesson.

  • First Thousand

    Earned once you have earned your first 1000 experience points.

  • One Year Member

    Earned when you have been with Laracasts for 1 year.

  • Two Year Member

    Earned when you have been with Laracasts for 2 years.

  • Three Year Member

    Earned when you have been with Laracasts for 3 years.

  • Four Year Member

    Earned when you have been with Laracasts for 4 years.

  • Five Year Member

    Earned when you have been with Laracasts for 5 years.

  • School In Session

    Earned when at least one Laracasts series has been fully completed.

  • Welcome To The Community

    Earned after your first post on the Laracasts forum.

  • Full Time Learner

    Earned once 100 Laracasts lessons have been completed.

  • Pay It Forward

    Earned once you receive your first "Best Reply" award on the Laracasts forum.

  • Subscriber

    Earned if you are a paying Laracasts subscriber.

  • Lifer

    Earned if you have a lifetime subscription to Laracasts.

  • Laracasts Evangelist

    Earned if you share a link to Laracasts on social media. Please email [email protected] with your username and post URL to be awarded this badge.

  • Chatty Cathy

    Earned once you have achieved 500 forum replies.

  • Laracasts Veteran

    Earned once your experience points pass 100,000.

  • Ten Thousand Strong

    Earned once your experience points hits 10,000.

  • Laracasts Master

    Earned once 1000 Laracasts lessons have been completed.

  • Laracasts Tutor

    Earned once your "Best Reply" award count is 100 or more.

  • Laracasts Sensei

    Earned once your experience points pass 1 million.

  • Top 50

    Earned once your experience points ranks in the top 50 of all Laracasts users.

Level 7
31,240 XP
4 weeks ago

Replied to Ability To Call External Service, With A Timeout, But Always Allow The Call To Complete

I think I'm looking for a back-end inter-process broadcast functionality. The controller sets off the async job, then waits for it to signal that it is complete through a broadcast event. It will wait for as long as it can, and report back that things are running slow if it runs out of time. The broadcast channel will need to be shared between all the containers or pods that may be running.

I can dispatch the job, then subscribe to a Redis channel for that job to be notified when it starts and when it ends. I can't see a way to set a time limit on that subscription. I would like to subscribe for five seconds, or until I receive the "complete" message indicating I can explicitly exit the subscription.
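One way I can think of to get a time-limited wait (my own sketch, not something from this thread): instead of SUBSCRIBE, have the job push a completion marker onto a Redis list, and have the controller do a blocking pop with a timeout. The key name and five-second timeout below are illustrative.

```php
<?php

use Illuminate\Support\Facades\Redis;

// In the queued job, once the fetch has completed and the result is stored:
Redis::rpush("job-complete:{$jobId}", 'done');

// In the controller, after dispatching the job: BLPOP blocks for up to
// five seconds waiting for the completion marker, then gives up.
$marker = Redis::blpop("job-complete:{$jobId}", 5);

if (empty($marker)) {
    // Timed out; report back that things are running slow.
}
```

The exact return value on timeout depends on the underlying client (phpredis versus predis), so treating anything falsy as a timeout seems safest.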

4 weeks ago

Started a new Conversation Ability To Call External Service, With A Timeout, But Always Allow The Call To Complete

Not sure about the title - there is a lot to squeeze into it.

This is the requirement I am trying to find a way to implement, and coming up with blockers whichever way:

I basically have an external service to call up to get some data. Requesting that data changes the state of that data, so once it is requested, I must hang around to get the result and store it. The service normally responds in a fraction of a second, but occasionally it is very slow, and I must still wait for the response.

Now, I have a controller in the application that is used to request that data. The controller must respond quickly. It should return either with the external service data, or a response to say "sorry, the service is too slow, try again".

So from what I can see, my controller needs to dispatch a job that can fetch the remote data (with no overlap). That job can take as long as it needs to. My controller then needs to wait for that job to complete OR a timeout. How would I wait for that job to complete (with a timeout)?

I've looked at a mutex, so the controller could acquire a lock and the job could release it when complete. The controller can wait for a timeout period for the lock to be released; if it gets released, then the fetched data is ready to pick up from the database. If it times out, then the controller's caller is told to try again later, by which time the results may be in.

The trouble is that, for all the packages I have looked at, the mutex stays locked for the current process only. If the controller acquires the mutex and then exits before the async job is finished, the mutex is automatically released.

Is there any other general approach to solving this kind of problem? Basically: we want to run a job that will take some time to run. If it completes fast, then we return the result from that job. If it completes slowly, then we return a "try again later", but the job must be allowed to complete. This is all back-end processing.
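For what it's worth, one general sketch of the "wait with a timeout" half (my own, assuming a shared cache such as Redis and a hypothetical fetch-result:{id} key): the job writes its result to the cache when it finishes, and the controller polls that key until a deadline.

```php
<?php

use Illuminate\Support\Facades\Cache;

// Controller side, after dispatching the job.
$deadline = microtime(true) + 5.0;

do {
    $result = Cache::get("fetch-result:{$requestId}");

    if ($result !== null) {
        // The job finished in time; return the fetched data.
        return response()->json($result);
    }

    usleep(100000); // Back off for 100ms between polls.
} while (microtime(true) < $deadline);

// The job is still running; it will complete in its own time.
return response()->json(['message' => 'Service is too slow, try again later.'], 202);
```

Polling is cruder than a broadcast, but it works across containers because the state lives in the shared cache, not the process.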

Hope that makes sense!

1 month ago

Replied to Handling Blobs From The Database As Streams

Raising memory to 512MByte did the trick. The 64MByte geometry took some time to go into the database - about 30 seconds to update - but I guess the database engine had a lot of indexing to do. It's an infrequent event, so does not need to be fast. Everything is running in containers, so we have total control of the environment.

tl;dr: streaming in large blobs was all nice in theory, but a decade-old bug in PHP PDO means it simply does not work, even if PDO is designed on the surface to support it. Throw some more memory at the problem instead, and move on. Will revisit in 13 years' time to see if the bug has been tackled.


Replied to Handling Blobs From The Database As Streams

Ah, someone else with my exact same problem:

Turns out it is possibly a 13-year-old PDO bug, though that bug only refers to selecting blobs into a writable stream rather than updating a blob through a readable stream. The bug is that the stream is basically turned into a string in memory before being passed to the database engine. That would certainly do it! I also need to make sure I don't have emulation mode on (that mode builds queries in memory and expands the bindings, rather than passing the bindings to the database engine to put the query together).


Replied to Handling Blobs From The Database As Streams

If this were just a file, then without a doubt it belongs on storage outside of the relational database, and I would always advocate that.

What makes this different is that this blob is a geographic geometry object. MySQL will parse it, index its contents, and then provide many useful geographic functions for finding, retrieving and manipulating that data. That is why it needs to be inside the database.

An increase in memory is probably going to be what I have to do.


Replied to Handling Blobs From The Database As Streams

Okay, the PDO object is easily fetched from the DB facade. I have tried this:

$pdo = DB::getPdo();

$statement = $pdo->prepare('update geographic_regions set geometry = ST_GeometryFromWKB(:wkb) where id = :id');

$statement->bindParam(':wkb', $wkbStream, \PDO::PARAM_LOB);
$statement->bindParam(':id', $id, \PDO::PARAM_INT);

$statement->execute();


where $id is the ID of the record I'm updating, and $wkbStream is a stream to the 25MByte WKB binary.

Still running out of memory at this point, which is bizarre, because any additional memory needed to update the geometry should be in the database engine, not the PHP environment, surely? Some of the smaller blobs go in okay, but not the larger ones.

Allowed memory size of 134217728 bytes exhausted (tried to allocate 62566040 bytes)

It's allocating a block of memory big enough to hold the whole of the WKB or geometry on execute, when it should not need to if it's streaming it.

1 month ago

Replied to Handling Blobs From The Database As Streams

It seems that PDO can bind BLOBs to streams using the PDO::PARAM_LOB option when binding. That's both for select and insert/update. So the underlying database layer does support it. Now, in theory, eloquent can be made to support it. If I can't see a way to do it, I'll just resort to plain PDO. Does eloquent expose the PDO connection to use directly?

Eloquent supports casting of columns to and from specific datatypes. Perhaps that is a way in to solving this. It's probably not, though; I suspect the casting happens too far from the PDO data binding.
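For the record, DB::getPdo() does expose the underlying PDO connection, and the documented PDO pattern for streaming a blob out of a select binds the column as a LOB. A sketch using this thread's table and column names (whether it actually streams in practice is subject to the PDO behaviour discussed elsewhere in this thread):

```php
<?php

$pdo = DB::getPdo();

$statement = $pdo->prepare(
    'select ST_AsBinary(geometry) from geographic_regions where id = :id'
);
$statement->bindParam(':id', $id, \PDO::PARAM_INT);
$statement->execute();

// Bind the column as a LOB; in theory $wkbStream comes back as a
// readable stream rather than a string held in memory.
$statement->bindColumn(1, $wkbStream, \PDO::PARAM_LOB);
$statement->fetch(\PDO::FETCH_BOUND);
```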


Replied to Handling Blobs From The Database As Streams

Chunking is great when handling multiple rows in chunks of rows (fetching a handful at a time). However, this is a different problem.

What I have is a memory problem when fetching and writing a single row. These BLOBs can be 64MByte, and some are bigger than that. They start in one database, and need to end up in another database, as well as going through a parsing and unwrapping process in the middle.

This is geographic data, and there is not much I can do about the size. It needs to go into MySQL so it can be handled with the OpenGIS functionality built into MySQL 8. The source database is nearly 4GByte in size, and it's amazing eloquent can read it while hardly breaking a sweat. I don't want to import the whole database, but just the geographical data for a thousand rows or thereabouts.


Started a new Conversation Handling Blobs From The Database As Streams

I'm wondering if Laravel query builder or eloquent supports this or not.

I am copying some very large blobs from an SQLite database to a MySQL database. The BLOBs need some conversion in PHP, and are then thrown into MySQL as a geometry object.

At the moment I select the source BLOB - that's a chunk of memory. Then I convert the BLOB (by taking off a wrapper layer it has) to another format of BLOB. Then I need to push the result into a MySQL table. Each of these stages holds massive binary objects in memory, and I quickly run out.

My PHP BLOB conversion works on streams, so that itself does not eat into memory. Now, is there also a way to use streams at each database end? Can I pull the BLOB from the SQLite database into a stream rather than into a variable in memory? Can I feed a stream into the MySQL column rather than passing it a massive binary object in a string?

I'm kind of looking for a big data pipeline, SQLite->PHP converter->MySQL using streams instead of memory.

Any ideas? I would like to avoid using CLI tools to export to temporary files if I can help it.

2 months ago

Commented on Take A Ride On The Laravel Pipeline

Trying to find this out myself too. Thinking about it, to skip the rest of the pipeline, possibly you can just return the item passing through the pipeline, rather than calling the next closure.

Update: Not sure why I have never seen this mentioned anywhere, but it works. Wrote a simple test case to try it out in my project:

<?php declare(strict_types = 1);

namespace Tests\Unit;

use Closure;
use Tests\TestCase;
use Illuminate\Pipeline\Pipeline;

class Pipelines extends TestCase
{
    public function testEarlyExit()
    {
        $item = 'foo';

        $result = app(Pipeline::class)
            ->send($item)
            ->through([
                function (string $item, Closure $next) {
                    $item = 'bar';

                    return $next($item);
                },
                function (string $item, Closure $next) {
                    $item = 'baz';

                    // Break out of the pipeline just by returning the item rather than
                    // the result of the closure.

                    //return $next($item);
                    return $item;
                },
                function (string $item, Closure $next) {
                    $item = 'biz';

                    return $next($item);
                },
            ])
            ->then(function ($item) {
                return $item;
            });

        // Will be 'baz' and not 'biz'.

        $this->assertSame('baz', $result);
    }
}

2 months ago

Replied to Queue::assertPushed() Not Working

In case none of the replies here really cover it, the way the queues are handled changed between 5.6 and 5.8.

For 5.6, you could assert that the job has been pushed to the queue:

Queue::assertPushed(MyJob::class, function ($job) {
    // Check the details of the job here if you like,
    // returning false if anything is not right.
    return true;
});

For later versions of Laravel, and certainly from 5.8, the jobs are wrapped in a CallQueuedListener and so the closure is needed to check the job class:

Queue::assertPushed(\Illuminate\Events\CallQueuedListener::class, function ($job) {
    // Confirm the class of the job is correct.
    // Optionally do other checks on the job detail too.
    return $job->class === MyJob::class;
});

Hope that helps.

3 months ago

Replied to Search And Skip To Next Item In Collection

I used a reduce function to do this. It should work efficiently for small lists.

$nextItem = $collection->reduce(function ($carry, $item) {
    if (! is_bool($carry)) {
        // Already carrying the item we want; pass it down.
        return $carry;
    }

    if ($carry === true) {
        // The previous iteration matched, so this is the item we want.
        return $item;
    }

    if (test-for-matching-record) {
        return true;
    }

    return false;
}, false);

$finalNextAfterMatchItem = is_bool($nextItem) ? null : $nextItem;

So a carry of false means it has not found a match, and a carry of true means it has found a match. The next item after true will be the item we want, so we set carry to the item at that point. After that, we pass that item down through the remaining iterations. There is no way to break out of the reduce() loop early once we have found that match, hence not wanting to use this technique for long lists.

Instead of the three-state true/false/item for carry, you could use a structure with a separate "found" flag and item value. That will save doing the expression at the end.

Update, just for when I come looking for the answer again:

$nextItemAfterMatchedItem = $collection->reduce(function ($carry, $item) {
    if ($carry->next !== null) {
        // Got it already.
        return $carry;
    }

    if ($carry->found === true) {
        // Found the matching item on the last iteration.
        $carry->next = $item;
    }

    if ($carry->found === false && test-for-matching-record) {
        // Found the matching item this iteration.
        $carry->found = true;
    }

    return $carry;
}, (object) ['found' => false, 'next' => null])->next;

3 months ago

Replied to Can I Choose Between Base Factories For The Same Model?

Ah, it's just occurred to me: $factory is a service container with global scope, much like what app() returns for the application. Unlike Laravel's service container, for which you can alias a class, the model class and the binding name in the $factory container are one and the same. So, for each model, there can only be one base factory constructor, since the model class name is also the unique container index. The "why" suddenly makes sense.


Replied to Can I Choose Between Base Factories For The Same Model?

Yes, sorry, I wasn't clear there.

The starting position for factories - which I am now very much aware of - is that each application being tested must have one, and only one, base factory for any model. There can be any number of extensions for that factory (in the form of states) that can be chained together how you like, and defined where you like (with some naming convention, I guess, to prevent clashes there too). The test suite uses tools that have a global scope.

Thank you.


Replied to Can I Choose Between Base Factories For The Same Model?

Okay, I'm going to refactor. I will remove the factory in the package that does too much for my tests, and turn it into a state instead.

The base factory I will move to the main application.

The lesson learnt is that base factories for models should always be located in the package or application that owns that model, i.e. where the model is defined. Other packages that use the model can always extend that factory, but should not try to define it.

Luckily all these packages are in-house, so the real pain point is just having to get on with it :-) I think we just need to get down a few additional rules for project development.
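In the legacy factory-builder style these packages use, a package extension would be a state on the application's base factory; a sketch with made-up model and attribute names:

```php
<?php

use Faker\Generator as Faker;

// In the package: extend the application's base factory with a state
// rather than redefining the factory. App\Region and 'with-boundary'
// are hypothetical names for illustration.
$factory->state(App\Region::class, 'with-boundary', function (Faker $faker) {
    return [
        'boundary' => $faker->word,
    ];
});
```

Tests then opt in explicitly with factory(App\Region::class)->states('with-boundary')->create(), so nothing clashes with the base definition.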


Replied to Can I Choose Between Base Factories For The Same Model?

I need one factory for tests at the application level. Someone else has created another factory for tests in their dependent package. The two are clashing because Laravel picks them both up, and uses the one I don't want.

If I can't select which one is used in my tests, then I'm going to have to start ripping apart other packages and moving factories and creating states, and that's something I wanted to avoid. But maybe I can't avoid it.


Started a new Conversation Can I Choose Between Base Factories For The Same Model?

I am writing tests for a model in the application. There happens to be a factory for the same model in a package that has tests of its own, and that factory is being picked up and used in preference to the application-level factory.

So is there any way in my tests to say, "use this model factory, and not that model factory"?

I realise states can be applied to a model factory, but rather than overriding and undoing everything another factory has set up, I would like to start from the base model and build on that.

Another option would be to pull apart the model factories in the package that is providing them, so that the package offers a simple generic factory and adds all its own values as states, rather than the monolithic way it does it now (which is kind of the problem I am having).

Any thoughts?

3 months ago

Replied to Job Dispatch Options - How Does This Actually Work?

Here's a handy tip when dispatching a job in Tinker: assign the dispatch promise to a variable, then unset it.
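The reason the unset trick works, as far as I understand it: dispatch() returns a PendingDispatch, which only pushes the job onto the queue from its destructor, so in Tinker the job sits undispatched while the variable holds it. A sketch with a hypothetical job class:

```php
<?php

// In a Tinker session. App\Jobs\MyJob is a made-up job class.
$pending = App\Jobs\MyJob::dispatch('some-argument');

// Nothing is on the queue yet; the PendingDispatch pushes the job
// from its destructor. Unset the variable to trigger the dispatch.
unset($pending);
```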

5 months ago

Replied to Can I Easily Change The Emergency Logger Stream?

Solutions merged into laravel/laravel and laravel/framework.

If anyone else encounters this thread with a similar issue - look at the Pull Request linked above for more information.
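For reference, the shape of the change: recent Laravel versions allow the emergency channel's path to be overridden in config/logging.php (a config sketch; check that your framework version supports it):

```php
<?php

// config/logging.php
'channels' => [
    // ...

    // The fallback channel Laravel uses when the configured channel
    // cannot be built; point it at stderr for containerised deployments.
    'emergency' => [
        'path' => 'php://stderr',
    ],
],
```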

5 months ago

Replied to Can I Easily Change The Emergency Logger Stream?

Hi Martin :-)

It's really about when the default channel that has been set does not work. For example, the daily channel will try to write to the local storage/logs/ directory. If that fails due to, say, lack of permission to write to that directory, then Laravel will fall back to the emergency logger.

The emergency logger will then fall back to writing to storage/logs/ instead - whoops - no permission still. Other channels being used could also hit their own errors that stop them logging.

We've had a few containerised apps that did not have their default log channel changed from single or daily. The result was that logs failed to write and were just disappearing into a black hole, with no errors showing to tell us that it could not show errors.

The emergency logger writes to a monolog stream. That stream could be anything from a local filesystem to stdout or stderr. For monolithic apps it makes sense for it to be a local directory. For containerised applications it makes more sense to use stderr (depending on how you are running your containers). So my thought is that somewhere to configure the stream name for the emergency logger to write to would be great, rather than having it hard-coded in the core of Laravel.

Once an application is up and running, and everything is configured and tested, you are very unlikely to ever see the emergency logger again. It's really about being able to set some context - e.g. monolithic/scalable container/single container with mounted filesystem - to make it easier to work through the initial configuration problems.

We will think about a PR for this. It's all hard-coded here:

5 months ago

Replied to Can I Easily Change The Emergency Logger Stream?

I think the answer is: no I can't, at least not without a PR to the framework allowing the fallback emergency logger stream to be set. I'll keep this question open for now, in case of any future developments.

6 months ago

Started a new Conversation Can I Easily Change The Emergency Logger Stream?

This has caused a bit of head scratching in a docker container, where nothing was being logged to the ELK stack, but I had no idea why.

So, if there is a configuration or other issue in the logger, Laravel will fall back to its emergency logger. That involves writing to storage/logs/laravel.log or storage/logs/lumen.log as a monolog stream. Running in a container, all our logs go to /dev/stdout and /dev/stderr. Logs that go to the filesystem are ephemeral, not monitored, and will be lost.

So I basically would like to change the emergency logger stream to /dev/stderr. Any inability to log to the current logging channels will write to that stream instead.

Is there a way to do this? Or is this pretty much hard-coded within Laravel?

It does look like the emergency logger is hard-coded into the Illuminate core, but maybe there is an event I can listen for that can tell me the emergency logger has been instantiated, and write my own logs to stderr? Looking for ideas and thoughts on how this could be done.