Member Since 1 Year Ago

Experience Points 3,230
Experience Level 1

1,770 experience to go until the next level!

In case you were wondering, you earn Laracasts experience when you:

  • Complete a lesson — 100pts
  • Create a forum thread — 50pts
  • Reply to a thread — 10pts
  • Leave a reply that is liked — 50pts
  • Receive a "Best Reply" award — 500pts
Lessons Completed 12
Best Reply Awards 0
  • Start Your Engines Achievement

    Start Your Engines

    Earned once you have completed your first Laracasts lesson.

  • First Thousand Achievement

    First Thousand

    Earned once you have earned your first 1000 experience points.

  • One Year Member Achievement

    One Year Member

    Earned when you have been with Laracasts for 1 year.

  • Two Year Member Achievement

    Two Year Member

    Earned when you have been with Laracasts for 2 years.

  • Three Year Member Achievement

    Three Year Member

    Earned when you have been with Laracasts for 3 years.

  • Four Year Member Achievement

    Four Year Member

    Earned when you have been with Laracasts for 4 years.

  • Five Year Member Achievement

    Five Year Member

    Earned when you have been with Laracasts for 5 years.

  • School In Session Achievement

    School In Session

    Earned when at least one Laracasts series has been fully completed.

  • Welcome To The Community Achievement

    Welcome To The Community

    Earned after your first post on the Laracasts forum.

  • Full Time Learner Achievement

    Full Time Learner

    Earned once 100 Laracasts lessons have been completed.

  • Pay It Forward Achievement

    Pay It Forward

    Earned once you receive your first "Best Reply" award on the Laracasts forum.

  • Subscriber Achievement

    Subscriber

    Earned if you are a paying Laracasts subscriber.

  • Lifer Achievement

    Lifer

    Earned if you have a lifetime subscription to Laracasts.

  • Laracasts Evangelist Achievement

    Laracasts Evangelist

    Earned if you share a link to Laracasts on social media. Please email [email protected] with your username and post URL to be awarded this badge.

  • Chatty Cathy Achievement

    Chatty Cathy

    Earned once you have achieved 500 forum replies.

  • Laracasts Veteran Achievement

    Laracasts Veteran

    Earned once your experience points passes 100,000.

  • Ten Thousand Strong Achievement

    Ten Thousand Strong

    Earned once your experience points hits 10,000.

  • Laracasts Master Achievement

    Laracasts Master

    Earned once 1000 Laracasts lessons have been completed.

  • Laracasts Tutor Achievement

    Laracasts Tutor

    Earned once your "Best Reply" award count is 100 or more.

  • Laracasts Sensei Achievement

    Laracasts Sensei

    Earned once your experience points passes 1 million.

  • Top 50 Achievement

    Top 50

    Earned once your experience points ranks in the top 50 of all Laracasts users.

16 Jun
2 days ago

lambooni started a new conversation Model Observer Seems To Hang And Run Synchronously During Each Loop

I have the code:

$model->each(function (Model $model) {
    dump('Gets here for every row of loop');
    AnotherModel::create([ /* ... */ ]); // AnotherModel has an observer registered
    dd('Never gets here??');
});

This is set up with a model observer on the AnotherModel class


And the observer class was generated using the artisan command.

When I run the code, my loop dumps out "Gets here for every row of loop" on every iteration, but never gets past the model create call. It is like the observer is hanging and the rest of the code is running synchronously...

15 Jun
3 days ago

lambooni left a reply on Amazon RDS Connection Over SSH Tunnel, Via Forge (no PEM Key File)

I found the issue. The ssh tunnel should be setup using the RDS endpoint rather than the IP address:

ssh -L 3307: [email protected]

The security rules on the VPC between the EC2 instance and the RDS instance should use the private IP, i.e. the RDS inbound security group should allow MySQL/Aurora from the EC2 private IP.

lambooni left a reply on Amazon RDS Connection Over SSH Tunnel, Via Forge (no PEM Key File)

Thanks @aurawindsurfing. I actually have the EC2 instance connected to RDS using the private IP already, so my app works OK... I am just trying to connect from my development environment. For this I believe the only way is to either SSH tunnel through the EC2 instance or open up the IP.

I can connect to RDS by opening up the IP address, but I would rather SSH tunnel. Unless there are any other ways?

lambooni left a reply on Connecting To MySQL AWS, Security Parameters?

@firemaps I realise this is really old now, but I am having similar issues. I can connect to the RDS server via the EC2 instance but cannot get a connection via SSH tunnels to my local GUI.

You mentioned the .pem file, however this is not provided when provisioned with forge.

Did you find a different way to get this?

lambooni started a new conversation Amazon RDS Connection Over SSH Tunnel, Via Forge (no PEM Key File)

I have provisioned an EC2 instance via forge. When doing this, you do not get the opportunity to download the PEM key. Instead, I have added my local key to the SSH section. I can now SSH into the EC2 instance with ssh [email protected] - all good.

I have a RDS instance setup within the same VPC. I have tested this and can:

  • Connect to MySQL via my EC2 instance in terminal. Using:
mysql -h -u USERNAME -p
  • Checked that the 3306 port is open via telnet.
telnet 3306

The above all seems to suggest that the RDS instance is reachable from the EC2 instance.

I now need to open an SSH tunnel from my laptop, to connect Sequel Pro. Since I do not have the PEM file, I have opened the tunnel using the command:

ssh -L 3307:FORGE-PRIVATE-IP:3306 [email protected]

This appears to work OK. However, when trying to connect via Sequel Pro with the following settings:


I get the following error:

Lost connection to MySQL server at 'reading initial communication packet', system error: 0

AWS support seem to think this is because the tunnel is not set up correctly, but I am unsure of any other way to set it up.

11 Jun
1 week ago

lambooni started a new conversation How To Best Deal With MySQL Implicit Conversion Performance Hits Between Table Columns Of Different Data Types (i.e. Where Varchar = Bigint)

Most of my tables whose IDs are visible to the user use a bigint ID, e.g. the orders table has an ID which is also used as the customer-facing order number.

All of my other tables use a UUID primary key.

I have a few other tables that store information relating to all the others, e.g. a stock_movements table holds information on how stock has moved for a delivery, an order, a delivery exception, and so on, and therefore the related ID column can hold both bigints and UUIDs.

I have therefore set this column to a varchar, but had not realised that comparing a varchar column against a bigint carries a huge MySQL performance hit. On a simple count query I found it makes the difference between 400ms and 20ms, just by manually adding single quotes around the integer within the query.


// SLOW - 400ms

SELECT COUNT(id) FROM stock_movements WHERE entity_type = 'order' AND entity_id = 123456

// FAST - 20ms

SELECT COUNT(id) FROM stock_movements WHERE entity_type = 'order' AND entity_id = '123456'
  • entity_id being a varchar in the database

I wondered the best ways to deal with this? Options...

  • Start again and keep all IDs consistent (probably best, but not practical at this stage).
  • Cast all IDs to strings on the model. I am not sure if this has any negative impact elsewhere.
  • Cast the IDs at query time. I think doing this with the query builder or Laravel relationships may be tricky.
  • ?
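As a framework-free sketch of the second option, coercing every polymorphic ID to a string before it reaches the query builder guarantees the varchar column is always compared against a quoted literal. The helper name here is hypothetical:

```php
<?php
// Hypothetical helper: normalise mixed bigint/UUID ids to strings so the
// varchar entity_id column is always compared against a string literal,
// avoiding the implicit-conversion penalty described above.
function normaliseEntityId(int|string $id): string
{
    return (string) $id;
}

// Bigint order ids and UUIDs both come out as strings:
echo normaliseEntityId(123456);                                  // 123456
echo normaliseEntityId('8b11050c-612c-4922-8b34-d04d579e02a9');  // the UUID unchanged
```

In Laravel, the equivalent would be a `$casts = ['entity_id' => 'string']` entry on the model, or an explicit `(string)` cast at each call site.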

lambooni left a reply on Slow Query When Using String Instead Of Integer

This is down to implicit conversions. The settlement_id on the amazon table is a bigint (as that is what this source provides) and on the master transactions table it is a varchar (because multiple formats are stored there).

Doing a where settlement_id (bigint) = settlement_id (varchar) comparison triggers an implicit conversion, which has a huge performance hit.

lambooni left a reply on Slow Query When Using String Instead Of Integer

@jlrdw I tried that also once I realised the default ->count() query builder function in laravel does this. It makes minimal difference on this query - a few ms.

However, changing from a string to an integer makes a several thousand percent improvement.

10 Jun
1 week ago

lambooni started a new conversation Slow Query When Using String Instead Of Integer

I have two almost identical queries.

select count(*) from `master_transactions` where `master_transactions`.`settlement_id` = '11839491331' and `master_transactions`.`settlement_id` is not null and `settlement_type` = 'amazon_settlement' and `sales_clearing_invoice_id` is null

select count(*) from `master_transactions` where `master_transactions`.`settlement_id` = 11839491331 and `master_transactions`.`settlement_id` is not null and `settlement_type` = 'amazon_settlement' and `sales_clearing_invoice_id` is null

The top one runs in 18ms... and the bottom one runs in 400ms. Why?!

This is all caused by adding or removing single quotes around the settlement_id value. The settlement_id column is a varchar(255) column within my database, and is indexed.

28 May
3 weeks ago

lambooni left a reply on Compiling Javascript Page Scripts

I guess I was asking for a bit of advice on how best to compile these in external files, for use within my views.

There are a lot of guides on using frameworks such as Vue, but not so much I can find on general javascript. Although I expect I am not searching for the right thing.

lambooni left a reply on Compiling Javascript Page Scripts

Thanks for this @bobbybouwmann. I am actually already compiling my JS like this.

I think my question maybe needs to be re-phrased.

This is more to do with the architecture of page scripts that are just using simple vanilla JS (ones that I have generally put near to the footer of each page they run on).

i.e. - for the print function below

<h1>Hello, world!</h1>
<button class="btn" onclick="print('all')">Print All Rows</button>
<table id="bootstrap-table"></table>

<script src="{{ mix('/js/app.js') }}"></script>
<script>
    var $table = $('#bootstrap-table');
    function print(type) {
        // do other stuff with ES6 syntax
    }
</script>

In the above example, I am getting data from a table on the page and then doing some other stuff with ES6 to make a request to a print server.

Moving the print function off into another file I see the following issues:

  • The $table variable is specific to this page only. In another file, It just appears from nowhere and does not have any context (but still works as intended!).
  • print is a very generic function. Once compiled this could easily conflict.

I have not worked with basic JS outside either Vue JS or putting it straight into the page it is being run on, so apologise if this is a very generic question!

lambooni started a new conversation Compiling Javascript Page Scripts

Most of my project uses Vue modules, that are compiled with mix/webpack and then included within the page.

I have a few areas that are still using jQuery/vanilla JS, and in these cases I generally include the page scripts within a blade template directly inside @section('page-scripts').

I have come across an issue where some of the ES6 syntax does not run properly on older browsers. The obvious fix is either to change my code or to compile it.

My question is... if I am to compile, what is the best way to include these scripts and functions that are currently scoped to a single page? As soon as I compile them, they will be included on all pages via my app.js.

26 May
3 weeks ago

lambooni started a new conversation Caching Vs. 2nd Database Table For Fast Changing Data

I see a lot of guides using caching for things such as dashboards, where we are more than happy for the data to be maybe an hour or 2 old. I do not see many examples where caching is used where invalidation is critical when data is updated.

In my use case, I am doing stock counts against a delivery of items. There may be a few hundred items per delivery, and as stock is scanned in the counts increment. When I run a query I must be able to ensure that I am getting the most up-to-date data, but when I view the whole lot at once I do not want a 10-second or greater page load!

The source of the data is a stock history table which shows every scan coming in/out etc. So to find out how much was on the delivery I need to sum all of these individual event rows, which is slow. To get around this, I have created another table which just holds the sums. When a scan is made, the history table is updated and the sums table is updated with the most up-to-date tally.

My approach does work, but storing two instances of the same data feels messy, especially when I need different queries/functions to retrieve data from each respective table. If I could just run the original sum query on the histories table but use caching to save the result, it could be cleaner... but what are your thoughts?


  • Must be able to guarantee that the cache is invalidated when the sum calculation changes for that item.
  • The data is fast changing during the delivery process, but once it's complete it should never change.
  • The data would only be accessed occasionally after the delivery process is complete.

So what would you say is better in this case - a caching & invalidating mechanism, or secondary "sums" table?
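To make the trade-off concrete, here is a minimal framework-free sketch of the caching approach with write-through invalidation. The class and method names are hypothetical; in Laravel you would reach for Cache::forget()/Cache::remember() instead of a plain array:

```php
<?php
// Every scan appends to the history and invalidates the cached sum, so a
// read immediately after a write always recomputes from the event rows.
class DeliveryCounts
{
    private array $cache = [];
    private array $events = []; // item => list of scanned quantities

    public function record(string $item, int $qty): void
    {
        $this->events[$item][] = $qty;
        unset($this->cache[$item]); // guaranteed invalidation on write
    }

    public function total(string $item): int
    {
        // Recompute only when the cache entry was invalidated.
        return $this->cache[$item] ??= array_sum($this->events[$item] ?? []);
    }
}

$counts = new DeliveryCounts();
$counts->record('SKU-1', 5);
$counts->record('SKU-1', 3);
echo $counts->total('SKU-1'); // 8
```

Once a delivery is complete and the data stops changing, the cached value is never invalidated again, which matches the access pattern described above.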

25 May
3 weeks ago

lambooni left a reply on Determine Source Of Slow Page Load - Fast Query On Its Own But Slow When Run As Part Of More Complex Operation. Blackfire Profiling Included.

@bobbybouwmann I am just surprised that this one query in particular can almost triple the response time. The total response runs about 100 queries, and these 30 account for that much of it. Especially when the raw query is so fast on its own, it doesn't feel like it makes sense.

The usage case is for a REST API. I guess one way I could get around this is to not provide the lower level detail on the collection and only return it when requesting a single resource. I would really like to try and work this out though!

lambooni left a reply on Determine Source Of Slow Page Load - Fast Query On Its Own But Slow When Run As Part Of More Complex Operation. Blackfire Profiling Included.

Just to compare, I have profiled the single request running 30 times (looking up against 30 different IDs). There are a few additional queries being run through middleware etc., but the query which takes 9 seconds to complete 29 times in the above profile now completes in just 11.5ms.

Blackfire profile

To give some further context, I am seeing this occur within an API resource, being called via the following code.

// Shipment package Resource - return array

'items' => ShipmentPackageItemResource::collection($this->PackageContents),

// Shipment package Item Resource - return array

return [
    'product_id' => $this->product_id,
    'sku' => $this->product->sku,
    'qty_packed' => $this->packed,
];

// Package Contents Attribute

public function getPackageContentsAttribute() : Collection
{
    return $this->stockMovements()
        ->selectRaw('-SUM(picked) as packed, product_id')
        ->groupBy('product_id')
        ->get();
}
Removing the 'items' => ShipmentPackageItemResource::collection($this->PackageContents) from the resource fixes the issue, so I know this is where it is occurring... but it makes no sense!

lambooni left a reply on Determine Source Of Slow Page Load - Fast Query On Its Own But Slow When Run As Part Of More Complex Operation. Blackfire Profiling Included.

Just testing that locally, removing the sum improves the result from 5.7 seconds to about 5.1 seconds... so it helps a little but not that much. The thing that baffles me is that running this query in Laravel on its own runs in 10ms.

I have an index on the product_id column but not a foreign key constraint. However, if I remove the with('product') part, the total runtime is not improved either.

I have tried increasing the sql memory buffer size.

lambooni left a reply on Determine Source Of Slow Page Load - Fast Query On Its Own But Slow When Run As Part Of More Complex Operation. Blackfire Profiling Included.

@bobbybouwmann - sure.

// Query

    ->selectRaw('-SUM(picked) as packed, product_id')

// Relationship

public function stockMovements() : HasMany
{
    return $this->hasMany(StockMovement::class, 'location_id');
}

I have tried removing the `with('product')` also, but this does not help.

lambooni left a reply on Multiple Indexes Slowing Down Query

@snapey I have accepted your answer as best for this question in relation to indexing. It was very interesting and helped out a lot with defining indexes for this. Unfortunately my problem still persists, but at least its not related (I don't think) to the table indexes now.

I have started a new thread for the subject change - link

lambooni started a new conversation Determine Source Of Slow Page Load - Fast Query On Its Own But Slow When Run As Part Of More Complex Operation. Blackfire Profiling Included.

I have a call to an endpoint that returns a paginated set of results (15 at a time). To get all the information needed, I am running between 60 and 100 queries. I realise this is a lot and I am optimizing it separately, but I really want to get to the source of my issue!

Using Laravel debug bar, I can see all queries are fast apart from a single sum query. As part of the page load it takes 200ms and runs 29 times. If I take the exact SQL query and run it directly in MySQL, or as a single query within Laravel, the run time is no more than 10ms.

I have profiled the request within BlackFire on a production server with plenty of free memory and CPU.

Blackfire Public Link

Run time: 9s Peak Memory: 1.68mb Queries: 98

The slow query is the same as the one shown in debugbar:

| Calls | I/O Wait | SQL                                                                                                                                                                      |
| 29x   | 9.07 s   | select ... from stock_movements where stock_movements.location_id = ? and stock_movements.location_id is not null and stock_movements.company_id = ? group by product_id |

What could be causing this query that takes only 10ms to run on its own, to now take over 9 seconds to run 29 times?
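One workaround worth considering (my suggestion, not from the thread) is to stop paying per-call overhead 29 times: fetch all the sums in one grouped query keyed by location_id, then look them up in memory. A framework-free sketch of that lookup, with hypothetical names:

```php
<?php
// $rows simulates the result of ONE query such as:
//   SELECT location_id, product_id, SUM(picked) AS picked
//   FROM stock_movements WHERE location_id IN (...) GROUP BY location_id, product_id
// Indexing it by location once replaces 29 separate round trips.
function indexSumsByLocation(array $rows): array
{
    $out = [];
    foreach ($rows as [$location, $product, $picked]) {
        $out[$location][$product] = -$picked; // -SUM(picked), as in the resource
    }
    return $out;
}

$rows = [
    ['700020240', 'prod-1', -4],
    ['700020240', 'prod-2', -1],
    ['700020241', 'prod-1', -7],
];
$sums = indexSumsByLocation($rows);
echo $sums['700020240']['prod-1']; // 4
```

Each resource would then read its pre-computed sums from `$sums[$locationId]` instead of issuing its own grouped query.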

lambooni left a reply on Multiple Indexes Slowing Down Query

For this particular query, caching will be tricky as it changes quite often and the result is key to logic decisions within the app.

Looking at laravel debug bar, I can see this set of queries (in its current state) is taking 6 seconds in total and using 7.26mb of memory (I checked this with the xdebug_peak_memory_usage() function too). It is running 65 queries in total and all of them are under 10ms apart from this one sum query which is showing as 200ms and runs about 15 times. The same query run on its own in Laravel or direct MySQL returns a result in less than 10ms.

lambooni left a reply on Multiple Indexes Slowing Down Query

My main question now is: why does running the query manually in MySQL produce a run time of 1ms, running the single query in Laravel completes in 10ms, but then running exactly the same query as part of an operation that has about 100 queries in total, increase the same query run time to 200ms?

lambooni left a reply on Multiple Indexes Slowing Down Query

Thanks @tray2. I have tried that but it does not have any impact on the query speed. Actually, the `and stock_movements.location_id is not null` part of the query is added automatically by the query builder. I am not sure there is any way to remove it other than writing the query manually?

lambooni left a reply on Multiple Indexes Slowing Down Query

OK, I thought I had finally got somewhere on this, but now it seems something is still off.

I added a compound index matching the query (in the correct order) and it went from 200ms down to 1ms. Also, the other queries were not affected - this is all from testing single queries directly in MySQL.

Running the same query on its own through Laravel and the debug bar shows a run time of about 10ms, so still good.

However, if I now run the full operation (with multiple queries), this particular query remains unchanged and runs at 200ms! I have checked the raw query and the explain details, and they are identical between the 1ms/10ms versions and the 200ms version.

Does anyone have any idea what could be going on here? What would cause a number of queries run in quick succession to make an individual query so much slower than when it is run on its own?


select -SUM(picked) as packed, `product_id` from `stock_movements` where `stock_movements`.`location_id` = '700020240' and `stock_movements`.`location_id` is not null and `stock_movements`.`company_id` = '8b11050c-612c-4922-8b34-d04d579e02a9' group by `product_id`


| Metadata      | Value                                                  |
| id            | 1                                                      |
| select_type   | SIMPLE                                                 |
| table         | stock_movements                                        |
| partitions    | null                                                   |
| type          | ref                                                    |
| possible_keys | company_id_location_id_picked_index                    |
| key           | company_id_location_id_picked_index                    |
| key_len       | 1167                                                   |
| ref           | const,const                                            |
| rows          | 1                                                      |
| filtered      | 100                                                    |
| Extra         | Using index condition; Using temporary; Using filesort |

lambooni left a reply on Multiple Indexes Slowing Down Query

The tables are multi-tenancy on the company_id column, so pretty much every query in my application has a where company_id = ? on it. Do you mean there is a better way to subset this data outside of the final query?

lambooni left a reply on Multiple Indexes Slowing Down Query

@snapey thanks a lot for those video links. Really interesting.

One thing he did point out: when you achieve a ref type of const in the EXPLAIN output, that is as good as it gets! In my case, I can still make considerable improvements past this point by using the compound index.

Something I have realised: my slow queries are actually sums (with wheres), and I now realise that indexes do not necessarily help these kinds of queries.

I have been trying out different indexes across the whole operation and found some interesting things:

My original slow query (with indexes) ran at about 500ms, which compounded across the whole operation to give the 12-second run time:

select -SUM(picked) as picked, `product_id` from `stock_movements` where `stock_movements`.`location_id` = '700020245' and `stock_movements`.`location_id` is not null and `stock_movements`.`company_id` = '8b11050c-612c-4922-8b34-d04d579e02a9' group by `product_id`

If I remove all table indexes (other than the PK), it runs considerably faster at about 33ms, and EXPLAIN shows there are no possible keys. However, the whole operation across all the other queries is slow at 10 seconds.

Adding just a single index on company_id speeds the whole operation up to about 6 seconds, but the sum query above is now 200ms. This sum query runs 15 times in the whole operation, so it accounts for half of the total run time. In this case it seems any form of single-column index kills the speed of the sum query.

I can add a combination of compound indexes and get this back down to 2 seconds again, but that also has detrimental effects on other queries.

My previous understanding was that indexes do not help sums; however, I can see from the above that they CAN, but it seems very hard, if not impossible, to make this work alongside all the other requirements.

I guess the easiest fix is to store the sum result directly in the DB so the calculations are not run every time (even though the values change quite often).

24 May
3 weeks ago

lambooni left a reply on Multiple Indexes Slowing Down Query

Thanks. I'll check that video out.

@tray2 I am actually paginating too. This particular call has quite a lot of queries overall, so when I said it takes 6 seconds, that is completing about 100 queries.

I realise that I need to optimise that separately, but at the moment I am just trying to understand the indexing aspect. The fact is, adding a certain set of indexes improves performance considerably, but then adding others pulls it all the way back. I have other queries that need the other indexes, so I would love to be able to use them all effectively.

lambooni left a reply on Multiple Indexes Slowing Down Query

So I actually started with 3 single-column indexes. This gave me a query time of about 6 seconds, which I wanted to improve on.

If I add the compound indexes specific for this query, the time reduced to around 2 seconds.

However, I also had to remove the single-column indexes at the same time. If I left them in place, adding the compounds made no difference.

Once I added the single-column index back alongside the compound, the query time was back up to 6 seconds!

lambooni started a new conversation Multiple Indexes Slowing Down Query

I am struggling to work out which columns are best to put my indexes on, when it seems adding additional indexes can have a detrimental effect on the query performance.

For example, I have the following query on a table with around 5m rows:

SELECT col1, col2 FROM table WHERE col1 = 'a' AND col2 = 'b' AND col3 = 'c';

Running this with no indexes takes 12 seconds!

I add the index table_a_b_c_index:

My query now drops down to 2 seconds - great!

I now have another query on the same table (with no indexes on any column):

SELECT col1, col2 FROM table WHERE col1 = 'a';

Running this on its own and the query takes 4 seconds - still pretty slow!

So now I add an index to col1 table_col1_index

My query reduces to 0.2 seconds. This is great; however, I now run the original query again and notice that it is using this new index instead of the one I specified earlier. The original query is now back up at 6 seconds.

I am unsure how to go about ensuring that both queries can be optimised at the same time.
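The behaviour described above is consistent with MySQL's leftmost-prefix rule: a compound index on (col1, col2, col3) can also serve the query that filters on col1 alone, so a separate col1 index mostly gives the optimizer a worse option to pick. Here is a toy model of the rule for equality-only predicates (my simplification, not MySQL's actual planner):

```php
<?php
// True when the queried columns form a leftmost prefix of the index columns
// (equality predicates only; order within the WHERE clause does not matter).
function indexServes(array $indexCols, array $queryCols): bool
{
    $prefix = array_slice($indexCols, 0, count($queryCols));
    sort($prefix);
    sort($queryCols);
    return $prefix === $queryCols;
}

$idx = ['col1', 'col2', 'col3'];
var_dump(indexServes($idx, ['col1', 'col2', 'col3'])); // true  - the first query
var_dump(indexServes($idx, ['col1']));                 // true  - the second query
var_dump(indexServes($idx, ['col2']));                 // false - col2 is not a prefix
```

Under this rule, dropping table_col1_index and keeping only table_a_b_c_index (with col1 leading) should let both queries share one index; whether the optimizer actually picks it still needs an EXPLAIN check.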

12 May
1 month ago

lambooni left a reply on Dealing With Model Aggregate Sum/count For Large Tables.

Thanks for the feedback guys. A bit of context for this particular dataset:

  • It is mainly syncing to external API's for customers orders.
  • I want to be able to quickly compare the state of the orders in my database vs. the number of orders at the source.
  • All orders are gathered from the API via a paginated request. In some cases, we may drop a page or even just a few items within that page's results.
  • The orders may go back as far as 10 years but if any are dropped, we will not know from which point in time this occurred.

In our case, we gather data from about 10 different sources. Currently we have 10 tables (one per source) plus one master summary table. We could have a single table summarising all of these sources, and I think it would really help speed things up.

Currently we do not use separate tenant databases, so in some cases our tables are 20 million rows even though one customer only has a few hundred thousand. The count with a where is probably slowing things down here a little.

One way to make it run smoother is to make sure you have the proper indexes in place, and of course to make sure that you narrow down the potential target as much as possible in your queries.

We do have all our where columns indexed and foreign keys setup where necessary.

When dealing with Income and Expense accounts, naturally there are monthly reports. These results can be saved. Then at year end you add up 12 summaries rather than all.

I really like the idea of summarising months or years into separate tables and summing those, although this is a more granular version of a counts table, so I would still need to work out the same issues.

Another thing to think about is to have a history table where you can move everything older than a certain date. Maybe keep the last three months or some such and move everything older to the history table.

In our case I'm not sure a histories table would work, as we are pulling in data from other sources and it's not necessarily in chronological order. Once everything is synced this could work, but up to that point I'm not sure.

lambooni started a new conversation Dealing With Model Aggregate Sum/count For Large Tables.

Counts and sums are expensive operations on large tables. I have several that are well over a million rows, and performing these counts and sums on the fly is very slow.

I need the results in both the front end as information to the user, and in the back end for logic.

The common thinking on this is to save the results to a table, cache or other memory. Can anyone point me in the direction of the best way to get this setup, or if there is a package available? I think I need...

  • Required count rules/queries to be saved somewhere.
  • A way to execute the new count. This could be as simple as adding the eloquent count/sum after every bit of code that updates the table, but I feel there must be a better way. Would a model observer be a good fit for this? Is there a way I can have a central observer for all models so I do not have to create something new each time? Ideally I would like to be able to add the required count/rule to a table and have all of this happen automatically.
  • Generally the counts can be performed in a queue, but there may be times when this needs to happen synchronously.

Or maybe I am going completely the wrong direction on this and there is a better way?

Thanks a lot.
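For the "central observer" idea above, here is a minimal framework-free sketch: register the aggregate refreshers per table once, then fire them from a single hook after every write. All names are hypothetical; in Laravel, the afterWrite() call would sit in a shared observer's saved()/deleted() handlers:

```php
<?php
class AggregateRefresher
{
    /** @var array<string, callable[]> table name => refresh callbacks */
    private array $rules = [];

    public function register(string $table, callable $refresh): void
    {
        $this->rules[$table][] = $refresh;
    }

    // Call this from one shared observer hook after any model write.
    public function afterWrite(string $table): array
    {
        $results = [];
        foreach ($this->rules[$table] ?? [] as $refresh) {
            $results[] = $refresh(); // e.g. re-run the count/sum, or queue it
        }
        return $results;
    }
}

$refresher = new AggregateRefresher();
$refresher->register('orders', fn () => 'orders count refreshed');
echo implode(', ', $refresher->afterWrite('orders')); // orders count refreshed
```

Each registered callback could dispatch a queued job for the normal case and be invoked directly when the result is needed synchronously.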

24 Apr
1 month ago

lambooni left a reply on Query Builder - Sum With GroupBy On Relationship

I have eloquent relations set up already, as shown in the original post. I am trying to get this to work with the relationship rather than with a join, but it seems the aggregate is messing it up in this case.

The relationship works as intended as I am using it elsewhere, but not in this query.

Is it maybe because I am aggregating the 'transactionType' relationship also, on the category column?

lambooni left a reply on Query Builder - Sum With GroupBy On Relationship

@jlrdw that uses the groupBy method on the collection... and it produces an array with my categories as keys and the transactions nested within.

It kind of works, but ideally I do not want my results nested, so I want to do the groupBy in the query builder instead.

Simply moving the groupBy before the get fails, as does putting it inside the callback, as you are supposed to do when grouping on a relationship column.

lambooni started a new conversation Query Builder - Sum With GroupBy On Relationship

I am looking for a query builder solution for the following:

Table: transaction_types

| id | type_hash | description     | category |
| 1  | abcd      | sale price      | sale     |
| 2  | dbac      | sale tax        | sale     |
| 3  | agft      | sale shipping   | sale     |
| 4  | pgsk      | refund price    | refund   |
| 5  | sa2r      | refund tax      | refund   |
| 6  | sdf4      | refund shipping | refund   |

Table: transactions

| id | type_hash | amount |
|----|-----------|--------|
| 1  | abcd      | 12     |
| 2  | dbac      | 14     |
| 3  | agft      | 19     |
| 4  | pgsk      | -20    |
| 5  | sa2r      | -12    |
| 6  | sdf4      | -7     |

Relationship - transaction belongs to transaction type

public function transactionType() : BelongsTo
{
    return $this->belongsTo(TransactionType::class, 'type_hash', 'type_hash');
}

The result I am looking for on the transactions table is:

  • Amount aggregated sum(amount) as amount
  • Group transactions by TransactionType.category


| Result | transactionType.category | sum(amount) |
|--------|--------------------------|-------------|
| 1      | sale                     | 45          |
| 2      | refund                   | -39         |

I can get the following working, but ideally I want to do all the aggregation in the query builder, not in the collection:

Transaction::with('transactionType')
    ->get()
    ->groupBy('transactionType.category'); // grouping on the collection, but the results come back nested

I have tried the following (and variations of), but cannot get it working:

Transaction::selectRaw('sum(amount) as amount')
    ->with(['transactionType' => function ($query) {
        $query->groupBy('category');
    }])
    ->get();
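For completeness, a join-based sketch along these lines does give the right numbers (table and column names as above), but I would rather express it through the relationship:

```php
// Join-based version: aggregate and group entirely in SQL
$totals = Transaction::join('transaction_types', 'transactions.type_hash', '=', 'transaction_types.type_hash')
    ->selectRaw('transaction_types.category, sum(transactions.amount) as amount')
    ->groupBy('transaction_types.category')
    ->get();
```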
19 Apr
1 month ago

lambooni started a new conversation Laravel Database Transactions Across Chained Queues (Horizon/Redis)

I have searched for hours on this and am thinking it's maybe simply not possible? If a job in my queue chain fails, can I roll back all database transactions that have occurred up to that point?

DB::transaction(function () {
    ProcessPodcast::withChain([
        new OptimizePodcast,
        new ReleasePodcast,
    ])->dispatch();
});

note: I know the above will not work when asynchronously pushing jobs to the queue, but is there a way to get this effect?
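To frame the question: since each queued job runs in its own worker process, a single DB::transaction cannot span the chain. The closest thing I have found is a compensating failed() hook on each job (a sketch only; the revert logic is hypothetical and has to be written per job):

```php
// Inside a chained job class — called when the job finally fails
public function failed(\Exception $exception)
{
    // manually revert whatever this job (and earlier chain steps) wrote,
    // since their transactions have long since committed
}
```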

09 Apr
2 months ago

lambooni left a reply on Should Model Casting Force The Value?

@jlrdw I am not controlling the data that comes in, as it's from an external REST API. They provide a blank string for false and a (bool) true for true.

I was hoping that the Laravel casts could force the blank string to false without me having to do it manually first.

08 Apr
2 months ago

lambooni started a new conversation Should Model Casting Force The Value?

I am using attribute casting on my model as follows:

    protected $casts = [
        'processed' => 'boolean',
    ];

I am getting data from an API which returns 1 and "" (an empty string) for true and false. I would have expected the cast above to handle this, but instead creating an entry results in a SQL error: Incorrect integer value.

I therefore have to force the false values on each boolean field prior to creating, like:

           foreach ($this->casts as $field => $cast) {
               if ($cast === 'boolean') {
                   if (empty($data[$field])) {
                       $data[$field] = false;
                   }
               }
           }

Is this expected behaviour when using attribute casting?
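One workaround I am considering (a sketch, not necessarily the intended way): a mutator on the model that normalises the value before the cast ever sees it.

```php
// On the model — coerce whatever the API sends ('', 1, true...) to a real boolean
public function setProcessedAttribute($value)
{
    $this->attributes['processed'] = (bool) $value;
}
```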

04 Apr
2 months ago

lambooni started a new conversation Job Release And Duplicate Entries During Model Update

I have a strange issue related to releasing jobs and model update throwing duplicate entry errors. If I run the code with the release in, I get the error. Remove the release from the job and all runs fine.

My job is as follows:

    public function handle(LinnworksService $linnworksService)
    {
        try {
            // ... sync data via the Linnworks API ...
        } catch (\Exception $e) {
            $this->release(10); // removing this release stops the duplicates
        }
    }

Generally a fail on the job happens due to an API connection error within the Linnworks API (throttle, server down etc.)

The script which fails creates the duplicate entry error (simplified):

$model = Model::where('source_id', $source_id)->where('source', $source)->first();
if(isset($model)) $model->update($data);

The very strange thing is that the duplicate entry is on the very keys I am searching by: table_source_id_source_unique.

As I say, I have fixed this by removing the release but I am curious as to why this may be occurring.
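My current guess is a race: a released job and its retry can overlap, so two workers both pass the first() check before either has inserted. If that is right, something like updateOrCreate (which combines the lookup and the write) should narrow the window:

```php
// Lookup keys match the unique index, so a concurrent second run
// updates the existing row instead of inserting a duplicate
$model = Model::updateOrCreate(
    ['source_id' => $source_id, 'source' => $source],
    $data
);
```

It is still not fully atomic without the unique index, but with the index in place a retried job would then succeed rather than error.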

29 Mar
2 months ago

lambooni started a new conversation Multiple Implementations To An External Api - Best Approach. (contracts, Interfaces, Something Else?)

My scenario is as follows:

  • I am writing a system that can book shipments with various shipping companies.
  • There will be a number of different couriers, and "booking" will happen via each one's external API, with differing methods.
  • Fundamentally the inputs (package size, type, weight) and outputs (shipment label, tracking number) should be the same, even though the way I get them will differ.

Is this a good scenario for contracts and interfaces? I usually see these used as a way to change system-wide implementations (i.e. from file to database drivers) and am not sure if they are as suitable when switching between different concrete classes on the fly.

I want a way to ensure that I can add new shipping companies easily, without having to change any of the code that interacts with the various APIs.
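To make the question concrete, the shape I have in mind is something like this (all names hypothetical):

```php
// Every courier implementation takes the same inputs and returns the same outputs
interface CourierGateway
{
    public function book(Shipment $shipment): BookingResult; // label + tracking number
}

class DhlGateway implements CourierGateway
{
    public function book(Shipment $shipment): BookingResult
    {
        // talk to DHL's API, map the response into a BookingResult
    }
}
```

Calling code would then type-hint CourierGateway only, with some factory or manager choosing the concrete class per shipment.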

14 Mar
3 months ago

lambooni started a new conversation Run Multiple/batch Jobs With Transaction Rollback On Fail And Commit On Complete

My scenario is in relation to a large order that needs to be cancelled.

  • There is a customer order in my system with 200 order items.
  • The order cancellation triggers a calculation of stock and subsequent movement of stock for each line.

For a few lines of an order this operation can be done synchronously, no problem. When orders are large, the processing power required means my best option is to split the process down into separate operations within a queue.


Run job



Jobs/CancelOrder.php handle()

$this->order->orderItems()->each(function (OrderItem $orderItem) {
    CancelAndReallocateOrderItem::dispatch($orderItem, $this->user);
});

Jobs/CancelAndReallocateOrderItem.php handle()


For this order with 200 lines, 200 instances of CancelAndReallocateOrderItem will be started, allowing the cancellation to process in a couple of minutes.

I am now trying to work out if it is possible to:

  1. If any of the child jobs throws an error, roll back the entire batch of transactions.
  2. When all child jobs have completed successfully, run some final code (i.e. send a notification, change the order status etc.)
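The closest pattern I have found so far for point 2 is an atomic cache counter, decremented by each child job (a sketch; the key format and final step are made up, and it does nothing for the rollback in point 1):

```php
// In CancelOrder::handle(), before dispatching the children:
Cache::put("cancel-order-{$this->order->id}-remaining",
    $this->order->orderItems()->count(), now()->addHour());

// At the end of each CancelAndReallocateOrderItem::handle():
if (Cache::decrement("cancel-order-{$this->orderItem->order_id}-remaining") === 0) {
    // all children done — send notification, change order status, etc.
}
```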
24 Feb
3 months ago

lambooni left a reply on Dynamic Configuration In Service Provider

@snapey thanks for this.

So I can reference the user within the callback of the bind/singleton methods in the service provider. That's great for getting each user's configuration.

If I wanted to switch configuration independent of the user (i.e. if a user has 2 sets of credentials), is there any way to push new configs into the service provider? I realise service providers run first, before any of my logic, so what I am asking to do is update the service provider class configuration after it has already been instantiated and configured.

To give an example, one way I could do this is to set a cache key with the required configuration and pull it back out in the service provider, but I expect this has flaws and is not best practice!

Service provider

        $this->app->bind(App\Library\MyClass::class, function () {
            return new App\Library\MyClass(\Cache::get('config'));
        });

In this particular instance, this class must be configured on instantiation, so to re-configure I effectively need to overwrite/update the class that was bound to the container, within my logic.
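For what it is worth, one option since found: a binding can simply be registered again after boot, and the container keeps the most recent registration, so later resolutions pick up the new credentials (a sketch; $newConfig is whatever credentials set is active):

```php
// Later, inside application logic — override the original binding
app()->bind(App\Library\MyClass::class, function () use ($newConfig) {
    return new App\Library\MyClass($newConfig);
});

// Or bind an already-configured instance directly:
app()->instance(App\Library\MyClass::class, new App\Library\MyClass($newConfig));
```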

23 Feb
3 months ago

lambooni left a reply on Dynamic Configuration In Service Provider

How would you suggest the best way to inject each user configuration from the database? Would I need to instantiate/extend the service provider before any logic is run?

lambooni started a new conversation Dynamic Configuration In Service Provider

How would I best dynamically update the configuration used in a service provider? For example, if I have one API key for user1 and another for user2, both running through the same service provider to connect to an external API service.

for example:

   public function register()
   {
        // Grab config
        $config = $this->app->config->get('xero.config');

        $this->app->bind('XeroPrivate', function () use ($config) {
            return new \XeroPHP\Application\PrivateApplication($config);
        });
   }

I effectively need to be able to switch out the values inside xero.config for different users. Ideally I would use the same service provider and just update the config prior to instantiating it, but I guess I could also store credentials in the database if needed.
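One small change that might be enough: read the config inside the closure rather than in register(), so whatever config(['xero.config' => ...]) holds at resolution time is what gets used:

```php
$this->app->bind('XeroPrivate', function ($app) {
    // evaluated on every resolve, not once at register time
    return new \XeroPHP\Application\PrivateApplication(
        $app->config->get('xero.config')
    );
});
```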

11 Feb
4 months ago

lambooni started a new conversation Laravel Horizon Long Wait Detection Setup - Not Able To Extend Wait Time

I am using Laravel 5.6 and Horizon 2.0. My Horizon setup is as follows:


    'waits' => [
        'redis:default' => 600,
        'redis:default_long' => 600,
        'redis:drip_feed' => 600,
        'redis:wms' => 600,
    ],
  • With my setup being connection_name:queue_name => time_seconds


    public function boot()
    {
        Horizon::routeSlackNotificationsTo('', '#redis_queue');
    }

Horizon however seems to be ignoring this and sending me notifications when any queue extends the default 60 seconds wait time.

I have fully rebooted horizon including:

  • clearing cache
  • clearing and re-storing the config cache
  • php artisan horizon:terminate/reset/purge
  • restarted server
  • restarted daemon supervisord

Is there something else I need to do in order to get these settings to persist, or maybe something else in my setup?

I did previously upgrade from Horizon 1, but I removed it first before installing 2. I'm not sure if there could be some legacy code causing issues?

Any help much appreciated!

07 Feb
4 months ago

lambooni started a new conversation Notification Logging - Reads, Call Answers Etc. Within The Laravel Notification System.

What are peoples ideas on how to implement this best within the laravel notification ecosystem?

We currently send notifications via:

  • Slack
  • Telegram
  • Email
  • Nexmo (voice and SMS)

I would like to generically record these events into the database. Key information we are looking to record is:

  • When was the notification sent.
  • Was it read.
  • Was there any response (i.e. Nexmo voice can generate some POST requests back to the server, for example "user acknowledged the message", "user listened to but rejected the message").

I could of course just create a notificationLogging model, but before I do I wanted to see if anything already exists. The Laravel database channel for example, but that seems more built for in-app notifications.
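The nearest built-in hook I can see for the "when was it sent" part is the framework's NotificationSent event; a sketch of a listener (the NotificationLog model is hypothetical, reads/responses would still need the webhook callbacks writing back to the same log):

```php
use Illuminate\Notifications\Events\NotificationSent;

class LogSentNotification
{
    public function handle(NotificationSent $event)
    {
        // record the channel, notification class, and recipient
        NotificationLog::create([
            'channel'    => $event->channel,
            'type'       => get_class($event->notification),
            'notifiable' => get_class($event->notifiable) . ':' . $event->notifiable->getKey(),
        ]);
    }
}
```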

05 Feb
4 months ago

lambooni started a new conversation Horizon Not Starting Up After Deploy (unless Server Is Restarted)

This issue has only just started happening and as far as I know I have not adjusted the config for queues or horizon.


  • Forge (single site on server)
  • Horizon on redis default connection

I did have 2 versions of the site running on the same server, successfully for a while. I thought they may have started conflicting on the same connection so I deleted one. This did not help.

I have php artisan horizon:terminate in my deploy script, and this same script runs on a few other sites with horizon running correctly.

Once it has started (by restarting the server) everything runs normally. It's a bit frustrating that I need to do this though!

Can anyone suggest some pointers where to look to debug this?

25 Jan
4 months ago

lambooni started a new conversation Service Class & Container Best Practices

I have a structure as follows:

Model: XeroInvoice I use this to:

  • Interact with the database i.e. $xeroInvoice->syncInvoice()
  • Perform actions on the model. i.e. $orderInvoice->lockAndSend() changes a row in the DB and then starts a queue job which in turn communicates to Xero via an external API to lock and send the invoice.

Service Class: XeroService I use this to:

  • A set of functions that link the Xero API to my models. I.e. $xeroService->updateCreateContact(User $user) where user is my user model. In all cases I interact with Xero via my service class and the Xero API interacts with my models via the service class also.

Controller: XeroInvoiceController Not really used. I have one method that converts PDF data into a view.

My question is: I ideally want an API such as $orderInvoice->getPdfData(). This method must go through my service class to interact with the Xero API and retrieve the data. Laravel does not allow me to inject my XeroService class into the model, and I feel this is probably for a reason. I can use app()->make(XeroService::class) to get it, but this feels hacky.

What is the best way to go about setting this up?
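The shape I keep coming back to (a sketch, method names hypothetical) is to keep the model thin and have the service accept the model, resolving the service where it is needed rather than inside the model:

```php
class XeroService
{
    public function getPdfData(XeroInvoice $invoice): string
    {
        // go through the Xero API and return the PDF payload
    }
}

// e.g. in the controller, via normal dependency injection:
public function show(XeroInvoice $invoice, XeroService $xero)
{
    return $xero->getPdfData($invoice);
}
```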

17 Jan
5 months ago

lambooni left a reply on Forge/LetsEncrypt SSL Error "SSL_ERROR_NO_CYPHER_OVERLAP" TLS Version Incompatibility?

One potential fix around this is to edit the SSL cert configuration within:

sudo nano /etc/nginx/sites-enabled/

I amended the TLS versions and cipher suites to the intermediate settings recommended by Mozilla. It appears the default Forge setup is now for modern browsers only, which means many older ones cannot connect.

Re-running the test shows no errors with browser compatibility but has reduced the cipher strength level a little. So I guess more compatibility with a bit less security.

If anyone has any comments on a better way to do this, please advise.

lambooni started a new conversation Forge/LetsEncrypt SSL Error "SSL_ERROR_NO_CYPHER_OVERLAP"

Some users have complained that they cannot access our site due to a "secure connection failure" i.e. SSL. The actual error code provided is SSL_ERROR_NO_CYPHER_OVERLAP.

SSL has been installed via the Forge LetsEncrypt button. I have tested 4 of our sites and all seem to have the same errors show up.

Errors are only related to certain browsers/OS (generally older versions) i.e.

Chrome 49 / XP SP3  Server sent fatal alert: handshake_failure

I also noticed that only TLS 1.2 protocol is supported with my site, so I am guessing that these older browsers are using an older protocol.

| Protocols |     |
|-----------|-----|
| TLS 1.3   | No  |
| TLS 1.2   | Yes |
| TLS 1.1   | No  |
| TLS 1.0   | No  |
| SSL 3     | No  |
| SSL 2     | No  |

(For TLS 1.3 tests, we only support RFC 8446.)

Has anyone come across this issue before?

12 Jan
5 months ago

lambooni started a new conversation Mail Notification Templates - Custom Line()

I am using notifications to send mails as follows:

      return (new MailMessage)
            ->greeting('Hello ' . $notifiable->first_name . ',')
            ->subject('Reset Password Notification')
            ->line('You are receiving this email because we received a password reset request for your account.')
            ->action('Reset Password',
                url(config('app.url') . route('password.reset', $this->token, false))
            )
            ->line('If you did not request a password reset, please just ignore this message');

This all works great, using the standard template.

I have now published the vendor files and have started to edit the template and CSS. To get my theme to work, when I create a new line, I need it to be wrapped by some HTML code.

i.e. ->line('This is my new line') needs to render wrapped in my theme's markup, as (simplified, the wrapper tags were stripped when this post was rendered):

This is my new line

For more complex emails I am creating them from scratch with the templates, but I really love notifications for these simple messages and hope there is a way to do this?
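One approach that may do this (an assumption worth verifying against the published template): Blade's {{ }} echo leaves Htmlable instances unescaped, so passing an HtmlString to line() should let wrapper markup survive into the rendered mail.

```php
use Illuminate\Support\HtmlString;

// the <span class="..."> wrapper here is a stand-in for the theme's real markup
return (new MailMessage)
    ->subject('Reset Password Notification')
    ->line(new HtmlString('<span class="intro-line">This is my new line</span>'));
```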