Member Since 5 Years Ago

Experience Points 22,885
Experience Level 5

2,115 experience to go until the next level!

In case you were wondering, you earn Laracasts experience when you:

  • Complete a lesson — 100pts
  • Create a forum thread — 50pts
  • Reply to a thread — 10pts
  • Leave a reply that is liked — 50pts
  • Receive a "Best Reply" award — 500pts
Lessons Completed 68
Best Reply Awards 21
  • Start Your Engines

    Earned once you have completed your first Laracasts lesson.

  • First Thousand

    Earned once you have earned your first 1000 experience points.

  • One Year Member

    Earned when you have been with Laracasts for 1 year.

  • Two Year Member

    Earned when you have been with Laracasts for 2 years.

  • Three Year Member

    Earned when you have been with Laracasts for 3 years.

  • Four Year Member

    Earned when you have been with Laracasts for 4 years.

  • Five Year Member

    Earned when you have been with Laracasts for 5 years.

  • School In Session

    Earned when at least one Laracasts series has been fully completed.

  • Welcome To The Community

    Earned after your first post on the Laracasts forum.

  • Full Time Learner

    Earned once 100 Laracasts lessons have been completed.

  • Pay It Forward

    Earned once you receive your first "Best Reply" award on the Laracasts forum.

  • Subscriber

    Earned if you are a paying Laracasts subscriber.

  • Lifer

    Earned if you have a lifetime subscription to Laracasts.

  • Laracasts Evangelist

    Earned if you share a link to Laracasts on social media. Please email [email protected] with your username and post URL to be awarded this badge.

  • Chatty Cathy

    Earned once you have achieved 500 forum replies.

  • Laracasts Veteran

    Earned once your experience points pass 100,000.

  • Ten Thousand Strong

    Earned once your experience points hit 10,000.

  • Laracasts Master

    Earned once 1000 Laracasts lessons have been completed.

  • Laracasts Tutor

    Earned once your "Best Reply" award count is 100 or more.

  • Laracasts Sensei

    Earned once your experience points pass 1 million.

  • Top 50

    Earned once your experience points rank in the top 50 of all Laracasts users.

29 Jul
1 month ago

gregrobson left a reply on Deleting Database Row Exactly 24 Hours After It's Creation Time

Taking @aurawindsurfing's idea a bit further: if these records are "expiring" (not necessarily user-deleted, but only useful for a limited period), you might want to use expires_at instead of deleted_at. You can give the column a default timestamp in the table definition.

// Example for PostgreSQL
Schema::table('log_entry', function (Blueprint $table) {
    $table->timestamp('expires_at')
        ->default(DB::raw("now() + INTERVAL '1 day'"));
});

For large volumes of records (100,000s+) the solution given by @bobbybouwmann might lead to the database engine holding table locks for a long period. (I've experienced this with SQL Server and it is painful!) This code will just delete the oldest 100 entries.

$ids = DB::table('log_entry')
    ->orderBy('created_at')
    ->limit(100)
    ->pluck('log_entry_id');

DB::table('log_entry')
    ->whereIn('log_entry_id', $ids)
    ->delete();

If you need to delete more, perhaps put a for loop in there with a sleep, so that it deletes 100 records every 5 seconds for 10 iterations, spreading the deletes across a minute. This has a couple of advantages:

  • If your load is "spiky" and you have times when lots of records expire, it gives your command some time to catch up, rather than it going for a large bulk delete.
  • If the command starts failing and is restarted it won't lock up the table when it tries to catch up with one mega-delete!
  • Table locks are never held very long and won't block any writing processes.

Search for "table nibbling" online to see other comments about this technique.

Of course if your table is only a few 100 rows, go with the simple solution! :D
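A nibbling delete wrapped in a scheduled command might look like this (a sketch only; it assumes the expires_at column from above, relies on MySQL's DELETE ... LIMIT support via the query builder, and the batch size and pause are arbitrary):

```php
use Illuminate\Support\Facades\DB;

// Delete up to 10 batches of 100 expired rows, pausing between batches so
// table locks stay short and concurrent writers are never blocked for long.
for ($i = 0; $i < 10; $i++) {
    $deleted = DB::table('log_entry')
        ->where('expires_at', '<', now())
        ->limit(100)
        ->delete();

    if ($deleted < 100) {
        break; // nothing left to nibble - we've caught up
    }

    sleep(5);
}
```

On PostgreSQL, which has no DELETE ... LIMIT, you would instead delete WHERE log_entry_id IN (a sub-select carrying the LIMIT).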

28 Jul
1 month ago

gregrobson left a reply on Storing Date/Time As UTC, Problem With Daylight Savings Time

Hi @yougotnet,

I've read through this thread and I'm going to attempt to demonstrate with a worked example. (I've had the "pleasure" of working with a system that had to operate in the UK and across Australia and I know how it can cause some confusion to begin with)

Let's work through an example based on France (because the maths is easier!). France is normally one hour ahead of UTC (UTC+01:00), and **on 31st March 2019 at 2:00am local time (2019-03-31 01:00:00 UTC) the offset changed from +01:00 to +02:00**: the local time jumps from 1:59:59am to 3:00:00am and France begins observing Daylight Saving Time.

  • For a calendar event that starts at noon on 28th March (local time), the UTC value will be stored as **2019-03-28 11:00:00** (as local time is one hour ahead at that point).
  • For a calendar event that starts at noon on 1st April (local time), the UTC value will be stored as **2019-04-01 10:00:00** (as local time is two hours ahead at that point).
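In plain PHP, the two conversions above look like this (dates taken from the example):

```php
$paris = new DateTimeZone('Europe/Paris');
$utc   = new DateTimeZone('UTC');

// Noon local time before the DST change (France is UTC+01:00)...
$before = new DateTime('2019-03-28 12:00:00', $paris);
$before->setTimezone($utc);
echo $before->format('Y-m-d H:i:s'), "\n"; // 2019-03-28 11:00:00

// ...and noon local time after it (France is UTC+02:00).
$after = new DateTime('2019-04-01 12:00:00', $paris);
$after->setTimezone($utc);
echo $after->format('Y-m-d H:i:s'), "\n"; // 2019-04-01 10:00:00
```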

Sidenote: In PostgreSQL, the timestamp and timestamptz column types are stored identically: timestamptz converts to and from the session's time zone on input/output, timestamp doesn't!

If we read both of those values out using @Cronix's code:

{{ $model->created_at->timezone('Europe/Paris') }}

...both will present correctly. PHP's datetime functions contain a database of all timezones and offset changes (not just for DST, but for political/government changes as well: sometimes countries switch to match their neighbours).

  • PHP sees **2019-03-28 11:00:00** and knows that 1 hour is to be added at that UTC time when you are in France.
  • PHP sees **2019-04-01 10:00:00** and knows that 2 hours are to be added at that UTC time when you are in France.

You can in fact inspect this list via $timezone->getTransitions() in PHP. New PHP versions regularly pull in changes from the IANA time zone database as nations change their rules.

Each transition lists the UTC time at which the offset applies, what the new offset is, whether the new offset indicates DST from that point until the next change, and a localised abbreviation (in the UK that's GMT or BST: Greenwich Mean Time or British Summer Time).
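For example, in plain PHP (querying the March 2019 window from the France example; the printed values shown in the comment are illustrative):

```php
$tz = new DateTimeZone('Europe/Paris');

// Transitions within a window around 31 March 2019 (timestamps are UTC).
// The first entry describes the state at the start of the window; later
// entries are actual offset changes.
$transitions = $tz->getTransitions(
    strtotime('2019-03-01 00:00:00 UTC'),
    strtotime('2019-04-30 00:00:00 UTC')
);

foreach ($transitions as $t) {
    // e.g. 2019-03-31T01:00:00+0000 offset=7200 isdst=yes CEST
    printf(
        "%s offset=%d isdst=%s %s\n",
        $t['time'], $t['offset'], $t['isdst'] ? 'yes' : 'no', $t['abbr']
    );
}
```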

I don't know exactly what you mean by this comment...

The user isn't changing the timezone, it is done automatically in their browser because of daylight savings; so when the appointment is rendered in the browser, it changes it to its timezone.

...but I will say that any attempt to use JavaScript's getTimezoneOffset() for rendering/setting on the client side is best avoided. An offset can be shared by multiple countries at the same time (typically north/south of each other) that don't follow the same transition timestamps, so you cannot conclusively use it to work out a user's timezone.


  • UTC is a single, always incrementing timestamp that carries no offset.
  • Timezones define offsets from UTC: for any given UTC instant, you can apply that timezone's offset to get the corresponding local time.
  • When you observe those values doesn't matter; what they are does.
  • When storing to the database, your Carbon instance should carry both the timestamp and the timezone so the conversion to UTC is done correctly in your data store.
  • When retrieving, apply the appropriate timezone (e.g. the user's when showing a log of their activity, the destination airport's when showing a plane's local landing time, your present location's when showing the ETA of your Uber ride).

Hope this helps!

06 Aug
1 year ago

gregrobson left a reply on UUID As Primary Key Type?

Exports of databases should be unaffected - you can always restore with the same primary key values from when you backed up.

UUIDs can have advantages if you have a distributed system (IDs being generated across several separate machines/databases) and need a (near enough) guarantee of uniqueness across the whole system.

There are several drawbacks though.

  • It's 16 bytes instead of an integer's 4 or a bigint's 8. That takes up more disk for both the data and the indexes.
  • ...and therefore requires more RAM to hold cached index values in memory.
  • ...and any joins that involve UUIDs require more memory.
  • "UUID index fragmentation": as UUIDs are not generated sequentially like integer primary keys, the index will be messier and tend to suffer more bloat.
  • If you want to sort by the order records were created, sorting by an integer primary key is easiest: UUIDs won't sort into a logical order, as they are generated at random.

Pinterest avoided UUIDs and used 64 bit integers in a customised way when they needed uniqueness against all their servers:

Takeaway: UUIDs have their uses, but they are typically rare :-)

20 Apr
1 year ago

gregrobson left a reply on Get Latest Item When Several Have The Same Created_at

Adding more decimal places to the seconds in your created_at column type might help, but uniqueness still isn't guaranteed by the database: it could still create two records with the same timestamp.

Identity columns are the only way to guarantee constantly increasing numbers.

Note that there are also (highly unlikely) edge cases with timestamps: when the system clock syncs, the time might briefly go backwards if it was previously running fast.

Personally, I would just use Actions::latest('id')->first() ... you also gain the advantage that the identity column is already indexed as it is a primary and unique key: using id means you won't need to index created_at if the table becomes large (in this usage instance anyway, other queries might require an index on created_at).

15 Apr
1 year ago

gregrobson left a reply on Using Queue Driver Sync For My Event Is Blocking If There's An Exception/error On The Event

The sync driver is generally only used for local development and testing. It uses the main PHP thread for execution: it's not waiting, it will run the handler/job immediately.

All the other drivers will send the data to another storage medium (database, AWS SQS, Redis etc) without processing them: it's then up to another process (Laravel queue worker) to pick the jobs from the queue storage to process asynchronously.
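Which driver is used comes down to configuration. A sketch of the relevant .env lines (QUEUE_CONNECTION is the key in recent Laravel versions; older versions call it QUEUE_DRIVER):

```ini
# Run jobs inline on the main PHP thread (local development/testing)
QUEUE_CONNECTION=sync

# Push jobs to a "jobs" database table, processed by a separate worker
# started with: php artisan queue:work
QUEUE_CONNECTION=database
```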

27 Mar
1 year ago

gregrobson left a reply on How To Up Speed For Insert?

A few observations:

  1. In the original post your final method (building raw queries) will always be faster than the previous methods as Laravel will be doing fewer function calls to build the query and escape values.
  2. If you have indexes on the table and can afford to remove them during the import, the DBMS won't be rebalancing the index as records are added. It's quicker to build the index in one go afterwards than to update it on every insert.
  3. Running multiple threads as @martinbean suggested might be worth a try - but I'd hazard a guess that any table with an identity field or primary key is going to hit the same limit. At some point the database engine has to limit access to the primary key/identity to one process at a time.
  4. Batching inserts might help, for example:
INSERT INTO foo (column1, column2) VALUES ('val1', 'val2'), ('val3', 'val4'), ('val5', 'val6');

It's a bit more work to compile the string, but Heap (the analytics company) really boosted their performance using this method (every query carries an overhead):

Depending on the database engine, deferring constraint checking is also an option. Also you can try adjusting memory settings for queries.

You're on the right track though: I would benchmark different strategies with 100,000 records and see what works best for you.
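A small helper can compile the multi-row VALUES string with placeholders, so values are still escaped by the driver (a sketch; foo and its columns are the hypothetical table from point 4, and buildBatchInsert is a made-up helper name):

```php
// Build "INSERT INTO foo (column1, column2) VALUES (?, ?), (?, ?), ..."
// plus a flat bindings array for the driver to escape.
function buildBatchInsert(string $table, array $columns, array $rows): array
{
    $placeholderRow = '(' . implode(', ', array_fill(0, count($columns), '?')) . ')';

    $sql = sprintf(
        'INSERT INTO %s (%s) VALUES %s',
        $table,
        implode(', ', $columns),
        implode(', ', array_fill(0, count($rows), $placeholderRow))
    );

    return [$sql, array_merge(...$rows)];
}

[$sql, $bindings] = buildBatchInsert('foo', ['column1', 'column2'], [
    ['val1', 'val2'],
    ['val3', 'val4'],
]);
// $sql:      INSERT INTO foo (column1, column2) VALUES (?, ?), (?, ?)
// $bindings: ['val1', 'val2', 'val3', 'val4']
// Then: DB::insert($sql, $bindings);
```

Note that Laravel's query builder will also do this for you: DB::table('foo')->insert($arrayOfRows) compiles a single multi-row statement.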

25 Mar
1 year ago

gregrobson left a reply on How Can I Hide Admin Routes?

GitHub's approach is to throw a 404 if you try using the API to probe repositories that you don't have access to, which makes it harder for attackers to know whether there is even something to attack.

A possible solution is to use a middleware to check the route and authentication status. If someone is accessing admin but does not have privileges (not logged in, or not an admin), return a 404 response.

gregrobson left a reply on Running An Update Statement Within A Model

You can call it using the 2nd method. As you mention, you're working on an instance of the model so $this will work.

21 Mar
1 year ago

gregrobson left a reply on Change Laravel Mail Driver At Run Time For Queued Jobs

How soon do you need this feature?

There's a pull request pending for the next version of Laravel (5.7) that would add multiple connection handlers.

20 Mar
1 year ago

gregrobson left a reply on Delete A Row WITHOUT Cascade

Glad you found the issue!

gregrobson left a reply on Delete A Row WITHOUT Cascade

@priverop - looking at your constraint

`bbdd`.`parts`, CONSTRAINT `parts_presupuesto_id_foreign` FOREIGN KEY (`presupuesto_id`) REFERENCES `presupuestos` (`id`)

It's set up so that a presupuesto has many parts: deleting a part should have no effect on the presupuesto. Are there any other key constraints in effect?

19 Mar
1 year ago

gregrobson left a reply on Delete A Row WITHOUT Cascade

That's not going to help with data integrity @Snapey!

If I remember correctly the default action for a constraint if you don't specify onDelete() is NO ACTION or RESTRICT.

@priverop - onDelete() (and onUpdate() for that matter) accepts multiple options. I'll summarise the MySQL docs here (most other DB engines behave the same, FYI):

Taking the "posts and comments" example. ON UPDATE/DELETE...

  • CASCADE - If post.id changes, change comment.post_id to match. If the post is deleted, cascade and delete the related comments.
  • SET NULL - If you change or delete the post, the related comments have their comment.post_id set to NULL. In this case deleting a post leaves comments "orphaned": a comment might not belong to any post, but would still exist. [1]
  • NO ACTION / RESTRICT - Mostly interchangeable in practice. Deleting the post isn't allowed until all of its comments have been removed first. [2]

[1] Of course this won't work if the column has a NOT NULL constraint set on it!

[2] RESTRICT is the default for MySQL

At the moment you have

on table `bbdd`.`parts`
CONSTRAINT `parts_presupuesto_id_foreign` FOREIGN KEY (`presupuesto_id`) REFERENCES `presupuestos` (`id`)

If you wanted to keep the records when deleting the presupuestos your foreign key should be supplied as follows:

$table->foreign('presupuesto_id')->references('id')->on('presupuestos')->onDelete('SET NULL');

Foreign keys are a great way to validate data being added, but they can also be very helpful for managing business logic. E.g. if an entry in a package table belonged to a delivery_vehicle, you wouldn't want to delete the package when you removed the delivery_vehicle! In this case SET NULL would be best: the package is then shown as belonging to no delivery vehicle.
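As noted in [1], SET NULL only works if the column itself is nullable. A migration sketch (assuming the same parts table; the column type is an assumption, and changing an existing column requires the doctrine/dbal package):

```php
Schema::table('parts', function (Blueprint $table) {
    // The column must be nullable for SET NULL to be valid
    $table->unsignedInteger('presupuesto_id')->nullable()->change();

    $table->foreign('presupuesto_id')
          ->references('id')->on('presupuestos')
          ->onDelete('SET NULL');
});
```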

Hope that helps.

16 Mar
1 year ago

gregrobson left a reply on Why Do People Use DB Instead Of Model?

@cmdobueno - I must admit that in my early days I was more likely to use DB::insert() than the Eloquent equivalent! Eloquent is a large part of Laravel, which might put people off.

You can build incrementally though, so it's not like you have to tackle everything it does on day 1.

FWIW: I query the Laravel docs on a daily basis, despite being really familiar with it!

15 Mar
1 year ago

gregrobson left a reply on How Eloquent Determines The Foreign Key Of The Relationship?

The key bit is in the guessBelongsToRelation() code (same file).

protected function guessBelongsToRelation()
{
    list($one, $two, $caller) = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 3);

    return $caller['function'];
}

This gets the chain of calls that led to guessBelongsToRelation():

  • $one['function'] = guessBelongsToRelation
  • $two['function'] = belongsTo
  • $caller['function'] = post

Think of it as PHP using breadcrumbs to find its way backwards through the call stack! Clever stuff!
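You can see the same trick in miniature with plain PHP (hypothetical free functions mirroring the Post model example):

```php
// The innermost function inspects the backtrace to learn the name of the
// function two frames up - exactly how Eloquent guesses the relation name.
function guessRelationName(): string
{
    [$one, $two, $caller] = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 3);

    return $caller['function'];
}

function belongsTo(): string
{
    return guessRelationName();
}

function post(): string
{
    return belongsTo();
}

echo post(); // prints "post"
```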

gregrobson left a reply on Why Do People Use DB Instead Of Model?

Going back to @cmdobueno's original question... (emphasis mine)

I see DB being used to insert, update, and query, then tons of useless joins... why would one not just use a model and relations to do things properly? Even further it seems to so many people in their response/suggestions also use DB instead of explaining that instead of all these intensive joins, why not just use the power of releations?

tl;dr Laravel's relations are not equivalent to joins (yet represent the same concept).

This next bit is all from memory, hopefully I have made no errors!

Taking a simple eloquent example, when you do Post::find(123)->comments(); you actually get two queries:

SELECT * FROM posts WHERE id = 123;
SELECT * FROM comments WHERE post_id = 123;

If post_id is indexed, it's not really any different to:

SELECT *
FROM posts
INNER JOIN comments
    ON posts.id = comments.post_id
WHERE posts.id = 123;

When your eloquent queries use increasing numbers of related models and more fields you'll find a lot more SELECT queries. This is fine... until the row counts start to rise or indexes are no longer adequate.

Running separate queries has another weakness: each query executes individually, at a different time, against potentially different underlying data. If there's a 200ms gap between two SELECT statements, the second query might not retrieve data that aligns with the first (a comment without a post, for example). A single query represents a consistent (the 'C' of ACID) snapshot of the database's state. If you have a foreign key between post and comment, then every comment will have a post.

When you submit a query a lot of optimisation goes on under the hood:

  • "Plan caches" are checked and resources are allocated to each query.
  • JOIN and WHERE clauses are assessed: the optimiser works out the best candidates for reducing the data to be searched (e.g. a single value on a unique field like customer_id is a good start; searching WHERE 'order' = 'complete' when 90% of the orders are complete is not).
  • The DBMS tracks what you search for frequently and keeps the hottest data in RAM. Repeatedly SELECTing records on a table-by-table basis may not help it.

Every time you hit the database with a query, there's also a minimum round-trip time, plus general housekeeping before and after the query.

90% of the time this isn't a concern: the app has a low user count, a small amount of data, or low usage. If your data is sensibly indexed you'll be fine. If you're migrating data, that might be a headache; but if it can be done over a weekend, I'm all for letting something run for a couple of hours if it makes the migration code easier to understand.

Manual Querying

I might use DB::select() when:

  • I need syntax that Laravel doesn't support, like Common Table Expressions: if you need to query recursive data (e.g. joining comment.parent_id back to the comment table) in a hierarchical structure, you will typically reduce the number of queries by 90%!
  • Lots of joins/clauses/sub-query references. Sure, you can do it with the query builder; however, if the query builder PHP code is hitting 20+ lines with lots of logic, there's a real danger that either A) I will lose track of what I'm doing, or B) the query builder won't compile my code the way I expect. A SQL statement is just that: a statement, a fact that the DBMS will obey.
  • I need to leverage an optimisation from the DBMS. There are some weird edge cases with complex queries: sometimes you'll find a clause like WHERE (x = 1 AND y = 2) makes the DBMS use a special index on x and y, but WHERE ((x = 1) AND (y = 2)) doesn't invoke the index. Your query time might become 1000x slower as a result. (This is a rarity; I'm thinking of people at the 100GB+ mark!)

When things get RAW

Sometimes there is a need to hugely optimise. There's an interesting case from Heap (the analytics company), who deal with millions of INSERTs. At their scale, doing this (not in PHP, but the same scenario):

DB::insert("insert into users (id, name) values (1, 'Jeff')");
DB::insert("insert into users (id, name) values (2, 'Taylor')");
DB::insert("insert into users (id, name) values (3, 'Adam')");

was killing performance, as the DB does work under the hood on EVERY INSERT :o

Instead they did the equivalent of this (buffering 50 inserts at a time and issuing 1 INSERT instead of 50 individual queries):

insert into users (id, name) values (1, 'Jeff'), (2, 'Taylor'), (3, 'Adam');

The latency charts in the article speak for themselves!

Hopefully that helps explain the pros/cons! I would wager that 90% of people, 90% of the time, will never need to shift away from Eloquent. As I've indicated, only issues of scale, complexity, brevity and performance might require you to execute your queries in a more manual way.

06 Mar
1 year ago

gregrobson left a reply on Help A Developer Stuck In 2000

Build a very simple form with just an email address field. Get it to store that in a database table when you submit the form (don't bother with migrations yet).

...then get it to validate the email address (without Form Request Validation)

...then perhaps put that validation in a Form Request class.

...then get invalid email addresses to flash an error on the page when the email address is wrong, or no longer unique in the database.

...perhaps add a second field with different validation next.

The documentation is thorough, and although I've read through it many times to get a sense of what's available, you'll never keep it all in the front of your mind. I've recently started doing a lot of work with event listeners, jobs and queues, and I've hammered those documentation pages this past week.

If you build incrementally you'll have results more frequently and will be able to see your progress better.

gregrobson left a reply on Asynchronously Update A Model From A 'saved' Event

I know this is an old post, but it has helped me as I wanted to do the same. In the end I did the following in the job; it might help somebody else reading this.

My solution was to check the dirty state in the constructor. In this case $requiresSync is serialized as an attribute of the job class and is deserialized when the job is pulled off the queue.

Not graceful, but pretty clear.

public function __construct(Product $product)
{
    $this->product = $product;
    $this->requiresSync = $product->isDirty('title', 'sku', 'price');
}

public function handle()
{
    if (! $this->requiresSync) {
        return;
    }

    // Actions to handle the sync...
}

28 Feb
1 year ago

gregrobson left a reply on Guzzle And Problem In Port:80

On second thoughts - you might just have a firewall issue. Can you access a simple test page on that port?

gregrobson left a reply on Guzzle And Problem In Port:80

OMG! I had almost the exact same problem with using Pipedrive's API today! Strange coincidence!

In my case, certain URLs would end up with the scheme being deals and not https. I was only able to work this out as I had a middleware and could check the $request object before it was sent out.

In my case, the : used after the end of the domain name is non-standard and not part of the URI specification. I would have thought it would have worked after the hostname, though?

If you can attach a middleware to your Guzzle client then I would try doing a dd() on the request and check the scheme, domain, path and query string properties as they are being sent out.

There are some closed issues relating to this on the Guzzle issue board; they seem to hint at it being a PHP issue with how it parses strings.

25 Feb
1 year ago

gregrobson left a reply on Speed Up Query To Large Database Using Eloquent.

To add to @MikeHopley's answer: an example of a paginating API is Pipedrive's.

You will typically have a maximum number of results per request, and the API will advise where the next batch starts. That's normally just an offset, but some APIs will give a timestamp if the data is temporal in nature (e.g. server logs).
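The consuming loop usually looks like this (a sketch; $fetchPage stands in for whatever HTTP call your API client makes, and the response shape is a made-up example):

```php
// Pull every record from a paginated API: ask for a fixed-size batch,
// then resume from the offset the API hands back until nothing is left.
function fetchAll(callable $fetchPage, int $limit = 100): array
{
    $all = [];
    $offset = 0;

    do {
        // e.g. ['items' => [...], 'nextOffset' => 200] or ['nextOffset' => null]
        $page = $fetchPage($offset, $limit);
        $all = array_merge($all, $page['items']);
        $offset = $page['nextOffset'];
    } while ($offset !== null);

    return $all;
}

// Demo with a fake in-memory "API" of 250 records:
$data = range(1, 250);
$fetchPage = function (int $offset, int $limit) use ($data): array {
    return [
        'items'      => array_slice($data, $offset, $limit),
        'nextOffset' => $offset + $limit < count($data) ? $offset + $limit : null,
    ];
};

$all = fetchAll($fetchPage); // 250 items, fetched 100 at a time
```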

12 Feb
1 year ago

gregrobson left a reply on Large Scale Project Design

Build a very small section of your large app first. Learn from what works (and doesn't) then incrementally add more features (and more complex features).

If you try building it all out from the start you are likely to get half way through and realise something can be done in a better way and will want to rewrite the half you have done!

@martinbean - I'm inclined to agree with @CJJ: when you're coming to a framework for the first time with a large project, I would always favour an agile/incremental approach over architecting the whole application with limited knowledge/experience of its capabilities. (I know this through bitter experience personally!)

@ochirach - have a look at some of the tutorials; start with getting logged in/logged out and a few bits of routing first - forms and validation would be a good step 2! The community is always here to help if you get stuck :)

09 Feb
1 year ago

gregrobson left a reply on More Series About NUXT, VueX, Vux....


Somebody pointed me towards Nuxt.js for a new project. It looks very promising and keeps the boilerplate requirements low for projects. A series about this would be great.

05 Feb
1 year ago

gregrobson left a reply on PHPStrom Vs Sublime

Personally I only work on small-to-medium-sized projects where I tend to be one of only a few developers. I prefer Sublime's speed and use a few select plugins that are stable and boost my productivity.

I can see that in heavy enterprise environments with many developers, PHPStorm's abilities would be beneficial when navigating a large codebase (many namespaces, larger methods, refactoring work, etc.).

As part of your learning invest time in learning:

  • the keyboard shortcuts
  • the command palette
  • the goto bar (or equivalent in your app of choice)

Drill those features into your muscle memory! I can switch between different views, move files between views, jump to points on the page ... all without the mouse (like a Vim user!)

Typically I have no menu bar visible, just my two code editing panes. No minimap. No line numbers (they are in the status bar and I have CTRL+G to jump to a line).

Works pretty well for me.

26 Jan
1 year ago

gregrobson left a reply on Wordpress On Forge Upstream Sent Too Big Header While Reading Response Header From Upstream

Thanks @eddieace - I had a production server that wasn't playing ball with our payment provider. Other guides didn't say to add it in the PHP section! :-D

22 Jan
1 year ago

gregrobson left a reply on Cloud Based Database

An important factor for me is the RTO and RPO (recovery time objective and recovery point objective).

How much data could you afford to lose? How quickly would you need the database running again?

For small, low-usage applications I've normally just set up a script that exports, zips and sends a backup of the database to Amazon S3 for off-site recovery. Losing an hour of work wouldn't be the end of the world for users, and I could get things running again in a couple of hours.

If your system is mission-critical, Amazon's RDS is good if you lack the skills to manage replication (or don't want the stress!): you can have your database fail over to a hot spare should the primary fail (AWS will spin up a new replica when it detects failure). This does mean paying a bit extra for two database instances, but it gives you peace of mind.

Other factors include capacity, growth and typical workload. If you expect lots of growth, being able to scale up on demand (rather than buying a really expensive server at the start) is a real advantage of the cloud. If the database is only busy for short periods, you might save a lot of money by only increasing capacity for the occasional times you need it, rather than over-provisioning.

gregrobson left a reply on Cannot Run Sudo Commands?

Thanks for the advice @ejdelmonico - I think I might just get the provider to reinstall the VM: I don't have enough access to the host to boot into recovery mode. If that's not working then other parts of the install might be broken that I discover later!

Useful commands though, should I need them in the future!

19 Jan
1 year ago

gregrobson left a reply on Cannot Run Sudo Commands?

That’s not possible - it’s a custom VPS on another host.

Incidentally another Custom VPS on the same host machine has Longview working fine.

gregrobson left a reply on Cannot Run Sudo Commands?

Further info: if I run id at the terminal I get this back:

uid=1002(forge) gid=1002(forge) groups=1002(forge),27(sudo),33(www-data)

gregrobson started a new conversation Cannot Run Sudo Commands?

I've just tried to install Linode's Longview on a new Custom VPS set up through Forge. (I've done this about 10 times before without issue.) I attempted to run the command…

curl -s<PROVIDED KEY> | sudo bash

…and I get…

Sorry, user forge is not allowed to execute '/bin/bash' as root on <my host name>

I've tried sudo bash on its own and that doesn't work either. I seem to have lost my sudo/root powers.

Does anyone have an idea A) why this has happened, and B) whether I can recover my root access, without having root access?*

* I realise the answer is probably no on that front!
11 Jan
1 year ago

gregrobson left a reply on Installing Custom VPS Wont Work

If it's a custom VPS, do check that your firewall allows the server to contact Forge on ports 80 (HTTP) and 443 (HTTPS).

When the script runs, Forge will show "provisioning" until the script completes and makes the callback (with the sudo password and other key details).

(I only know this as I'm setting up a custom VPS today and it got stuck... guess who didn't open the firewall!)

06 Jan
1 year ago

gregrobson left a reply on Get Retrieved Value Of Database As Single Input

What version of Laravel are you using? In Laravel 5.1 pluck() was renamed to value().

// 5.0 and earlier
$userPhoneNumber = DB::table('users')->where('name', 'John')->pluck('phonenumber');

// 5.1 onwards
$userPhoneNumber = DB::table('users')->where('name', 'John')->value('phonenumber');

28 Dec
1 year ago

gregrobson left a reply on Is There A Way To Filter User Within A Distance Using Eloquent?

The correct approach would be to use MySQL’s Geospatial data types. All of the calculations above can be done in the database.

Once you have the right data types (geometries) you can use functions like ST_Distance_Sphere() and query for points that are within “x distance” of the origin user.

There are some good examples here
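A query along those lines might look like this (a sketch for MySQL 5.7+; the users.location POINT column and the 5 km radius are assumptions for illustration, and note that MySQL points are (longitude, latitude)):

```php
// Users within 5 km of the origin user; ST_Distance_Sphere returns metres.
$nearby = DB::select(
    'SELECT id, name
     FROM users
     WHERE ST_Distance_Sphere(location, POINT(?, ?)) <= ?',
    [$originLongitude, $originLatitude, 5000]
);
```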

20 Dec
1 year ago

gregrobson left a reply on Config::set() Database Connection Not Persisting.

@chillcaw You might need to force a reconnect for the new config setting to be used.

19 Dec
1 year ago

gregrobson left a reply on Ask About Laravel

Absolutely :-)

Check your /config/database.php file and look at the connections section. Although the example is set up for one connection per database system, you can have multiple. For example you could have two Postgres connections

'pgsql_clients' => [
    'driver' => 'pgsql',
    'host' => env('DB_HOST', ''),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
    'schema' => 'public',
    'sslmode' => 'prefer',
],

'pgsql_crm' => [
    'driver' => 'pgsql',
    'host' => env('DBCRM_HOST', ''),
    'port' => env('DBCRM_PORT', '5432'),
    'database' => env('DBCRM_DATABASE', 'forge'),
    'username' => env('DBCRM_USERNAME', 'forge'),
    'password' => env('DBCRM_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
    'schema' => 'public',
    'sslmode' => 'prefer',
],

Laravel uses whichever connection is named in the 'default' key at the top of the file.

All queries run against that default connection unless you specify otherwise - see the database connection section of the docs.
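To direct a query at one of the named connections (a sketch using the hypothetical 'pgsql_crm' connection above):

```php
// Runs against the default connection:
$users = DB::table('users')->get();

// Runs against the named connection:
$deals = DB::connection('pgsql_crm')->table('deals')->get();
```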

18 Dec
1 year ago

gregrobson left a reply on Any Way To Refresh Database Connection In Code?

DB::reconnect(); // reconnect the default connection
DB::connection('other')->reconnect(); // Any non-default connection according to your config

I would investigate the underlying issue further, though... is it a problem with the database server or with Apache? Can you isolate a particular cause? Reconnecting is only a sticking plaster.

gregrobson left a reply on Sending Mail On AWS

@itstrueimryan Good question: I don't know! I've always let someone else handle the details.

This guide is for Debian, I imagine most other distros have some similar guide for setting up DKIM.

gregrobson left a reply on Sending Mail On AWS

An EC2 instance is just a server - if it has sendmail running on it then yes, you should be able to send mail from the EC2 box. Linux boxes tend to have it installed by default, I believe.

If you wish to use SES, do make sure your domain is verified. Because it's very easy to fake a sender's domain, receiving servers like to verify that the source of an email is legitimate. Typically this involves adding a TXT (text) record to the sending domain's DNS.
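As an illustration only (hypothetical domain - the exact records SES asks you to create are shown in the AWS console), an SPF TXT record authorising SES to send on behalf of a domain looks like this:

```
example.com.    3600    IN    TXT    "v=spf1 include:amazonses.com ~all"
```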

Email services (SES, SendGrid, SparkPost) may give you very conservative limits initially. Large volumes of email originating from new domains/IP addresses can make receiving spam filters think those servers are spamming. Mail services like to "warm up" new domains/IPs by increasing the mail volume slowly to establish a good reputation. If you do all the verification, you can normally increase the volume quite quickly (within a few days/weeks). It's nothing against you! :-)

16 Dec
1 year ago

gregrobson left a reply on Search On The Last Two Months Records

Yes, that would be the way to do it.

If you have 1,000,000 products then you definitely need an index on the date/time column your where() clause filters against.

gregrobson left a reply on Is There A Way To Save Data In Database After Using Migrate:rollback Or Reset?

Hmmm, rolling back and resetting is definitely not the way to go!

I think I know what you're talking about - you're worried that if a new feature doesn't work you might have new records added, but you can't roll back, as that might delete tables that were added when the feature was introduced.

Some tips:

  1. If you can, use a test/staging server. Restore a backup of the production database to it - you can use this to make sure that migrations run correctly. (Table creation typically will, but adding foreign keys might work locally yet fail in production.)
  2. Perhaps introduce schema changes in a separate release from feature changes, so you know the schema changes have worked before the feature is available to users.
  3. If you have to deploy to production, can you use feature flags to prevent a button/menu item from appearing? Then you can toggle it on for yourself and a few colleagues before everyone else starts using it. This is good if you have 1000s of users: you can roll out to 5, 10, 25, 100 people etc., and fix bugs while the user count is small.

There's an argument that you should never have a down migration at all. Once a table or structure is changed, your application will change the underlying data in the database, so any reverse migration runs from a different state. e.g. you create a table (but it's incorrect due to a bug) and want to change it, but there's already data in it... the typical reverse of a CREATE TABLE is DROP TABLE, but that would lose data. Instead you might have to migrate up with an ALTER TABLE, or another CREATE TABLE, and copy the data from one table to the other.

Putting on my software engineering hat... Do you need more time to test changes before rollout? Are you spending too much time fixing small issues? Does the code quality need to improve? I get the impression there's a level of urgency, but that the current development process isn't adequate for it, or the demands on it are too great.

14 Dec
1 year ago

gregrobson left a reply on PostgreSql Relationships Between Tables In Different Schemas

If you output the contents of your phpinfo, is there a section labelled "pdo_pgsql"? If not, you might need to locate your php.ini file and uncomment the line where your pdo_pgsql driver is loaded.

13 Dec
1 year ago

gregrobson left a reply on 5.5.23 And 5.5.24 Changes

@zaster There is also a feed for new releases that contains all the bugfixes/features implemented in each release.

Very handy for any new releases in the future :-)

gregrobson left a reply on The SQL SELECT DISTINCT Statement

@Alderwiereld DISTINCT will return unique combinations of values. If you select three columns then DISTINCT will show unique combinations of the three columns (or unique "tuples" in database terminology).

If you want to use those unique values in another query then the easiest option is to put them in the FROM clause.

    -- Show the full details of only those products that have been ordered.
    SELECT products.*
    FROM
        -- Get all the distinct product_id's of ordered products
        -- (i.e. only show each product once, regardless of how many
        -- times it has been ordered).
        (SELECT DISTINCT product_id FROM orders) AS purchased_products
        -- Join back onto the products table for the full details.
        INNER JOIN products ON purchased_products.product_id = products.product_id

Obviously there are better ways than using DISTINCT for the query above, but it's the simplest way to demonstrate the concept.

gregrobson left a reply on PostgreSql Relationships Between Tables In Different Schemas

I don't think (although I might be wrong) that you can just set the $table value to include a different schema, e.g. $table = 'schema_alt.foos'.

What I have done in the past is alter my config to include a new connection - same database, different schema.

'pgsql' => [
    'driver' => 'pgsql',
    'host' => env('DB_HOST', ''),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
    'schema' => 'schema_main', // <----- Primary schema
    'sslmode' => 'prefer',
],

// Same database, different default schema
'pgsql_alt' => [
    'driver' => 'pgsql',
    'host' => env('DB_HOST', ''),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
    'schema' => 'schema_alt', // <----- Alternative Schema
    'sslmode' => 'prefer',
],

Then for models that need to reference the alternative schema:

class Foo extends Model
{
    // The table now resolves to "schema_alt.foos" as it uses the second connection.
    protected $connection = 'pgsql_alt';
}

Of course, if you have a lot of schemas this might become a bit of a hassle to manage! The query builder will always let you join across schemas with far less fuss.

gregrobson left a reply on Eager Loading Is Taking A While

I agree with @DarkRoast - even if the product column is indexed, large WHERE IN() clauses can prove problematic for the query planner.

Once you get to a large number of items within the IN(), the query engine will most likely do one of two things:

  1. Realise the IN() contains most of the values in the table, so it scans the whole table anyway and ignores the index.
  2. Try to seek each individual entry from the clause one at a time from the index.

If you're fetching more than 10 items as part of your IN(), a join is going to perform better.
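As a hedged sketch of the join alternative (table and column names assumed), instead of pulling IDs into PHP and feeding them back through whereIn():

```php
// One round trip, letting the database do the matching:
$rows = DB::table('order_items')
    ->join('products', 'products.id', '=', 'order_items.product_id')
    ->select('products.*')
    ->distinct()
    ->get();
```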

09 Dec
1 year ago

gregrobson left a reply on How Can I Filter Recipes Based On Their Average Rating?

It's almost midnight local time, so this isn't a full Eloquent solution (I'm not sure it could be done cleanly anyway), but this might help. (I'll keep notifications turned on!)

Here's a query to get the IDs of what matches your criteria (between a rating of 2 and 4 inclusive).

$recipeIds = \DB::table('recipes')
            ->join('ratings', 'recipes.id', '=', 'ratings.recipe_id')
            ->selectRaw('recipes.id, AVG(ratings.rating) AS average_rating')
            ->groupBy('recipes.id')
            ->havingRaw('AVG(ratings.rating) >= ?', [2])
            ->havingRaw('AVG(ratings.rating) <= ?', [4])
            ->pluck('recipes.id');
It generates this query:

    SELECT
        recipes.id,
        AVG(ratings.rating) AS average_rating
    FROM recipes INNER JOIN ratings
        ON recipes.id = ratings.recipe_id
    GROUP BY recipes.id
    HAVING AVG(ratings.rating) >= 2 AND AVG(ratings.rating) <= 4


HAVING is effectively a WHERE clause that's applied after the aggregate function. So initially the database lists all the recipe IDs with at least one rating (because it's an INNER JOIN) alongside the average of each recipe's ratings (that's the GROUP BY statement). Once it has those details it pulls down only those HAVING an average between the two values.

You should be able to use whereIn() to pull out the appropriate records - although whereIn() might be slow if you're matching a lot of recipes (e.g. 200 out of 3000).

I haven't tested it, but I've read it through twice. I'm 99% certain that's correct :-)

gregrobson left a reply on Pagination With Dinamic Data.

Have a look at forPageAfterId():

MyModel::forPageAfterId($perPage = 15, $lastId = 0, $column = 'id')

Say your first page loads (5 per page) and the IDs are 4, 6, 7, 9 and 12. You click next, but in the last few seconds another user deleted record 7. With offset pagination you would now miss the record after 12 (as it falls back onto page 1). Instead, use forPageAfterId():

$lastId = 12; // last value from the previous page
MyModel::forPageAfterId(5, $lastId)->get();

Even though there are only 4 items on page 1, you still see the first 5 after the id of 12.

This example paginates on the id column, but the principle is the same for other columns. In your code you need to track the last result of the current page (or the first, if navigating backwards) and, whenever you requery, add an extra filter that says "give me the first N records that meet the criteria, starting after the last value I saw".

gregrobson left a reply on Redis Vs Amazon SQS For Queue

> So far, I am favoring SQS due to its relative ease of use and robustness.

I think you answered your own question. As you say, Redis requires a different server and if the scale of what you are doing is not that great, it makes sense to keep it simple.

08 Dec
1 year ago

gregrobson left a reply on IDE For Laravel

As a side note: I've tried PhpStorm a couple of times, but how long it takes to get a simple workflow going annoys me. Sublime is fast and stable, and opens files near-instantly. Once you master the keyboard shortcuts you can really fly with it.

(I do however love JetBrains' DataGrip for database usage as I use MySQL and PostgreSQL.)

gregrobson left a reply on IDE For Laravel

I had issues with Sublime's Blade syntax highlighting a while ago. When Sublime gets a bit "iffy" I back up my preferences and keymap and make a note of my packages, then completely wipe Sublime from my system and re-install it (often without many of the packages I had before).

Last time I did that I found everything started working again.

gregrobson left a reply on When To Make The Switch To Bootstrap 4?

Considering it's in beta, I doubt there will be any major breaking changes to the CSS syntax. You might see the odd rendering glitch if you have complicated layouts, however.

I normally base the decision on when the production system goes live. If it's next week, I might hold off. If you're launching in two months' time, I would go ahead, as issues should be resolved before launch.