




15 Best Reply Awards

  • Member Since 3 Years Ago
  • 68 Lessons Completed
  • 5 Favorites

14th December, 2017

gregrobson left a reply on PostgreSql Relationships Between Tables In Different Schemas • 15 hours ago

If you output the contents of your phpinfo, is there a section labelled "pdo_pgsql"? If not, you might need to locate your php.ini file and uncomment the line where your pdo_pgsql driver is loaded.
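As a quick sanity check (a plain-PHP sketch, nothing Laravel-specific), you can also ask PHP directly whether the driver is loaded:

```php
// Check whether the pdo_pgsql extension is loaded in this PHP runtime,
// and list the PDO drivers that are actually available.
$hasPgsql = extension_loaded('pdo_pgsql');
var_dump($hasPgsql);

// If "pgsql" is missing from this list, enable the extension in php.ini.
print_r(PDO::getAvailableDrivers());
```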


13th December, 2017

gregrobson left a reply on 5.5.23 And 5.5.24 Changes • 1 day ago

@zaster There is also a feed for new releases that contains all the bugfixes/features implemented in each release.

Very handy for any new releases in the future :-)

gregrobson left a reply on The SQL SELECT DISTINCT Statement • 1 day ago

@Alderwiereld DISTINCT will return unique combinations of values. If you select three columns then DISTINCT will show unique combinations of those three columns (or unique "tuples" in database terminology).

If you want to use those unique values in another query then the easiest option is to put them in the FROM clause.

    -- Get all the distinct product_ids of ordered products
    -- (i.e. only show each product once, regardless of how many
    -- times it has been ordered).
    SELECT products.*
    FROM (SELECT DISTINCT product_id FROM orders) AS purchased_products
    -- Join back onto the products table to show the full details of only
    -- those products that have been ordered.
    INNER JOIN products ON purchased_products.product_id = products.product_id

Obviously there are better ways than using DISTINCT for the query above, but it's the simplest way to demonstrate the concept.

gregrobson left a reply on PostgreSql Relationships Between Tables In Different Schemas • 1 day ago

I don't think (although I might be wrong) that you can just set the $table value to include a different schema? e.g. $table = 'schema_alt.foos'.

What I have done in the past is alter my config to include a new connection - same database, different schema.

'pgsql' => [
    'driver' => 'pgsql',
    'host' => env('DB_HOST', ''),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
    'schema' => 'schema_main', // <----- Primary schema
    'sslmode' => 'prefer',
],

// Same database, different default schema
'pgsql_alt' => [
    'driver' => 'pgsql',
    'host' => env('DB_HOST', ''),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8',
    'prefix' => '',
    'schema' => 'schema_alt', // <----- Alternative schema
    'sslmode' => 'prefer',
],

Then for models that need to reference the alternative schema:

class Foo extends Model
{
    protected $connection = 'pgsql_alt';

    // the table will now resolve as "schema_alt.foos" because it uses the second connection.
}

Of course, if you have a lot of schemas this might become a bit of a hassle to manage! The query builder will always allow you to join across schemas with far less hassle.

gregrobson left a reply on Eager Loading Is Taking A While • 1 day ago

I agree with @DarkRoast - even if the product column is indexed, large WHERE IN() clauses can prove problematic for the query planner.

Once you get to a large number of items within the IN(), the query engine will most likely decide to do one of two things:

  1. Realise the IN() contains most of the values in the table - it will scan the whole table anyway and ignore the index.
  2. Try and seek each individual entry from that clause, one at a time, from the index.

If you're fetching more than 10 items as part of your IN(), a join is going to perform better.

9th December, 2017

gregrobson left a reply on How Can I Filter Recipes Based On Their Average Rating? • 5 days ago

It's almost midnight local time, so this isn't a full eloquent solution (I'm not sure it could be done cleanly anyway), but this might help (I'll keep notifications turned on!)

Here's a query to get the IDs of what matches your criteria (between a rating of 2 and 4 inclusive).

$recipeIds = \DB::table('recipes')
            ->join('ratings', 'recipes.id', '=', 'ratings.recipe_id')
            ->select('recipes.id')
            ->selectRaw('AVG(ratings.rating) AS average_rating')
            ->groupBy('recipes.id')
            ->havingRaw('AVG(ratings.rating) >= ?', [2])
            ->havingRaw('AVG(ratings.rating) <= ?', [4])
            ->pluck('recipes.id');

It generates this query:

    SELECT
        recipes.id,
        AVG(ratings.rating) AS average_rating
    FROM recipes INNER JOIN ratings
        ON recipes.id = ratings.recipe_id
    GROUP BY recipes.id
    HAVING AVG(ratings.rating) >= 2 AND AVG(ratings.rating) <= 4


HAVING is effectively a WHERE clause that's applied after the aggregate function. So initially the database lists all the recipe IDs with at least one rating (because it's an INNER JOIN) alongside the average of each recipe's ratings (that's the GROUP BY statement). Once it has those details it pulls down only those HAVING an average between the two values.
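To make that order of operations concrete, here's a plain-PHP simulation of GROUP BY followed by HAVING (the data is made up for illustration):

```php
// Ratings rows: the result of the INNER JOIN (recipe_id alongside each rating).
$ratings = [
    ['recipe_id' => 1, 'rating' => 5],
    ['recipe_id' => 1, 'rating' => 5],
    ['recipe_id' => 2, 'rating' => 3],
    ['recipe_id' => 2, 'rating' => 2],
    ['recipe_id' => 3, 'rating' => 1],
];

// Step 1: GROUP BY recipe_id, computing AVG(rating) per group.
$averages = [];
foreach ($ratings as $row) {
    $averages[$row['recipe_id']][] = $row['rating'];
}
$averages = array_map(function ($group) {
    return array_sum($group) / count($group);
}, $averages);

// Step 2: HAVING filters the aggregated rows, not the raw ones.
$matching = array_keys(array_filter($averages, function ($avg) {
    return $avg >= 2 && $avg <= 4;
}));

print_r($matching); // only recipe 2 survives: averages 5 and 1 fall outside [2, 4]
```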

You should be able to use whereIn() to pull out any appropriate records - although the performance of whereIn() might be slow if you have a lot of recipes to search through. (e.g. 200 out of 3000).

I haven't tested it, but I've read it through twice. I'm 99% certain that's correct :-)

gregrobson left a reply on Pagination With Dinamic Data. • 5 days ago

Have a look at forPageAfterId().

MyModel::forPageAfterId($perPage = 15, $lastId = 0, $column = 'id')

Say your first page loads (5 per page) and the IDs are 4, 6, 7, 9 and 12. You click next, but in the last few seconds another user deleted record 7. When you click next you would normally miss the record after 12 (as that now falls on page 1). Instead use forPageAfterId();

$lastId = 12; // last value from previous page
MyModel::forPageAfterId(5, $lastId)->get();

Even though there are only 4 items on page 1, you still see the first 5 after the id of 12.

This will only work if paginating on the id column, but the principle is the same: in your code, track the last result of the current page (or the first, if navigating backwards) and, whenever you re-query, add an extra filter that says "give me the first N records that meet the criteria, starting after the last id I saw".
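The principle can be sketched in plain PHP (hypothetical in-memory data; the real forPageAfterId() does the equivalent in SQL with WHERE id > ?, ORDER BY and LIMIT):

```php
// Keyset pagination: take the first $perPage rows whose id is greater
// than the last id seen on the previous page.
function pageAfterId(array $rows, int $perPage, int $lastId): array
{
    $after = array_filter($rows, function ($row) use ($lastId) {
        return $row['id'] > $lastId;
    });
    usort($after, function ($a, $b) {
        return $a['id'] <=> $b['id'];
    });

    return array_slice($after, 0, $perPage);
}

$rows = [['id' => 4], ['id' => 6], ['id' => 9], ['id' => 12], ['id' => 14], ['id' => 20]];
// Record 7 was deleted between requests, but page 2 is still correct:
print_r(pageAfterId($rows, 5, 12)); // ids 14 and 20
```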

gregrobson left a reply on Redis Vs Amazon SQS For Queue • 5 days ago

So far, I am favoring SQS due to its relative ease of use and robustness.

I think you answered your own question. As you say, Redis requires a different server and if the scale of what you are doing is not that great, it makes sense to keep it simple.

8th December, 2017

gregrobson left a reply on IDE For Laravel • 6 days ago

As a side note: I've tried PhpStorm a couple of times, but the sheer amount of setup it takes to get a simple workflow going annoys me. Sublime is fast, stable and opens files near-instantly. Once you master the keyboard shortcuts you can really fly with it.

(I do however love JetBrains' DataGrip for database usage as I use MySQL and PostgreSQL.)

gregrobson left a reply on IDE For Laravel • 6 days ago

I had issues with Sublime's Blade syntax highlighting a while ago. When Sublime gets a bit "iffy" I back up my preferences and keymap and make a note of my packages. I then completely wipe Sublime from my system and re-install it (often without reinstalling a lot of the packages I had before).

Last time I did that I found everything started working again.

gregrobson left a reply on When To Make The Switch To Bootstrap 4? • 6 days ago

Considering that it's in beta, I doubt there are going to be any major breaking changes to the CSS syntax. You might have the odd rendering glitch if you have complicated layouts however.

I normally base the decision on when the production system is going live. If it's next week, I might hold off. If you're launching in two months' time, I would go ahead, as issues should be resolved before launch.

7th December, 2017

gregrobson left a reply on Check If A Model Uses Soft Deleting • 1 week ago

Use @ctf0's code in an accessor attribute:

// In your model...
public function getSoftDeletingAttribute()
{
    // ... check if this model uses the soft deletes trait
    return in_array('Illuminate\Database\Eloquent\SoftDeletes', class_uses($this)) && ! $this->forceDeleting;
}

$myModel->soft_deleting; // true if the SoftDeletes trait is in use AND forceDeleting is off.

gregrobson left a reply on Month Changing • 1 week ago

Sounds like a really good "real world" scenario for practising your coding! :-)

6th December, 2017

gregrobson left a reply on Month Changing • 1 week ago

Building on @Cronix's suggestion, have some way to track when a payment was last processed for a given record. If processing fails entirely or mid-way it's good to be able to trace back who has/hasn't been processed, then you have options for re-running the previous day(s) transactions.

gregrobson left a reply on Month Changing • 1 week ago

@kennybjr87, a note of caution on the answer by @tykus

Blindly using addMonth() may have undesired consequences if you are looking to do something monthly like take rent (I'm guessing from your scenario).

// From the docs
$dt = Carbon::create(2012, 1, 31, 0);
echo $dt->toDateTimeString();            // 2012-01-31 00:00:00

echo $dt->addMonth();                    // Results in 2012-03-02 00:00:00
// Note that 31st January jumped to 2nd March - no February payment taken!

I've looked and this isn't documented on the docs website, but is in the source: addMonthsNoOverflow($value)

You can see from the source that if adding a month would result in the day of the month changing (in this case 31st to the 2nd) it will back up to the last day of the intended month. e.g. 31st January will shift to 28th February (29th in leap years), 31st May would shift to 30th June, etc.
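You can reproduce the overflow with plain PHP's DateTime, and sketch the clamping behaviour yourself (addMonthNoOverflow below is a made-up helper for illustration, not Carbon's method):

```php
// Plain DateTime overflows just like Carbon's addMonth():
$dt = new DateTime('2012-01-31');
echo $dt->modify('+1 month')->format('Y-m-d'); // 2012-03-02 (Feb 2012 has 29 days)

// A clamp-style alternative: never let the day-of-month overflow into the
// following month; clamp to the last day of the target month instead.
function addMonthNoOverflow(DateTime $dt): DateTime
{
    $d = clone $dt;
    $day = (int) $d->format('j');                 // original day of month
    $d->modify('first day of next month');
    $lastDay = (int) $d->format('t');             // days in the target month
    $d->modify('+' . (min($day, $lastDay) - 1) . ' days');

    return $d;
}

echo addMonthNoOverflow(new DateTime('2012-01-31'))->format('Y-m-d'); // 2012-02-29
```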

If this is used for taking a payment or similar: I've seen applications that take payments from your bank only allow you to choose a day of the month between 1 and 28 for payment processing. That way the day of the month never changes, and you are saved a whole heap of bother dealing with it!

If only our ancestors had made the calendar metric! :D

3rd December, 2017

gregrobson left a reply on Are Eloquent Relations Faster? • 1 week ago

Raw queries will be faster to compile - as the Laravel framework has to take the tables, joins, filters etc and write out the query for your needs. In the vast majority of cases this is going to take milliseconds and unless you have very high performance requirements it's not a big consideration.

The only caveat is that Eloquent will build queries based on the logic that's coded into it. For some complex queries (lots of joins, and WHERE constraints) it might build a query that "technically works" and brings back the right data, but that query is not optimised for the database engine. e.g. you could write a query with subqueries, but if you are using Postgres, you might be able to rewrite it using Common Table Expressions and this might be executed faster by the DBMS and still bring back the correct data.

gregrobson left a reply on How To Join Multiple Tables With Eloquent Relationships • 1 week ago

As I see it you have three choices:

  1. Live with the limits of eloquent and have to chain your query through the various models. This might perform slowly depending on the quantity of data. Eloquent is there to make most querying easy, but it won't always produce a simple equivalent in code.
  2. Write your query using the query builder.
  3. Do a DB::select() and put all your joins in a single manual query.

I would go with option 2 in this case.

2nd December, 2017

gregrobson left a reply on Laravel Validation • 1 week ago

In addition to modifying the HTML, it's also possible to send a request to Laravel directly using tools like curl or Postman.

It does lead to duplication, but there's no avoiding it I'm afraid.

29th November, 2017

gregrobson left a reply on Should We Concern About Choosing DMBS With Laravel? • 2 weeks ago

Personally, I have worked with smaller databases in MySQL and PostgreSQL, and a larger database in SQL Server (30GB+).

For a typical application (CMS, CRM, website backend) I know all three of those will work fine. All of them have strengths/weaknesses:

  • SQL Server has a nice GUI admin and good docs, but hosting it on AWS RDS costs more and advanced features cost more.
  • MySQL is "mostly" open source, loads of tools/packages support it, it's cheap to host on AWS RDS. Sometimes you'll get stuck on the syntax and will be searching Stack Overflow for how to get parts of a date/time value.
  • PostgreSQL is truly open source (you can fork it and do what you like). The documentation is ace and all the functions are really consistent - just check out the date/time/timezone functions, they are a DREAM to work with. However, if your system relies on high rates of updates, the MVCC system is weaker (you can manage it, but it's harder).

Personally, my first preference is currently PostgreSQL. It's a whole enterprise-level DBMS, is cross platform, is actively developed (v10 introduced loads of features), is super-reliable (the code base goes back over 30 years!) and, when I do step out of the query builder, I have all the cool stuff like Common Table Expressions and Window Functions (Google them and thank me later). Character encodings have never been the issue they are in MySQL: Postgres has been UTF-8 by default for years. You don't have to choose between table types like MySQL's InnoDB and MyISAM - Postgres made one table engine and made it great.

Also, in Postgres ANY DDL statement (add column, drop table, add index, etc.) can be rolled back and is transaction safe (apart from add/drop database/tablespace, but those are special). You can also use Foreign Data Wrappers to make external data sources (MySQL, Redis, filesystems) appear as queryable tables in a Postgres database!

Nobody owns Postgres - most of the consultancy companies pay a couple of their employees to work on features for their own needs, and these then go into the main Postgres codebase for everyone.

(Yeah, I might be a bit of a Postgres fan!)

gregrobson left a reply on Timezone Europe/Amsterdam Doesnt Insert Correct Time In Database • 2 weeks ago

Note that depending on database implementation timestamps might appear as UTC+offset. E.g. a local time of 15:34 for Europe/Amsterdam might appear as 14:34+01:00 in the database.

Can you select the value in the database as a UTC value? (I don't know which DBMS you are using.) In the above example you should just get the 14:34 back.

27th November, 2017

gregrobson left a reply on Remove Duplicated Rows • 2 weeks ago

What database system/version are you using? I know a technique but it won't work with MySQL!

24th November, 2017

gregrobson left a reply on Validator Uniqueness Vs DB Uniqueness Constraint • 2 weeks ago

@wilburpowery is absolutely right. Just to add a couple of other benefits...

  1. Any column defined as UNIQUE normally gets an index applied to it automatically (certainly the case for PostgreSQL). Any unique checks by the validator will be faster as a result.

  2. There can also be an edge case of simultaneous requests. Here's a somewhat silly example:

  • Your database only allows unique email addresses. Currently the email "[email protected]" does not exist in the database.
  • Two users submit requests at the same time to insert the email address of "[email protected]".
  • Both hit the validation at the same time - they both pass as "[email protected]" is not in the table (yet).
  • Both processes execute some logic which takes an arbitrary amount of time and then try to do an INSERT to the table (believing the value doesn't exist as per validation).

If there's no UNIQUE on the column both inserts will succeed. Your database no longer contains valid data and any future SELECTs might not return consistent results!!! :O

What you do here depends on circumstances. The time between validation and INSERT is probably only going to be milliseconds in 99.9% of circumstances. You could wrap the validation and INSERT in a transaction, but that seems overkill (and messy!) Holding transactions open for a longer period of time is not going to help database performance for 0.1% of cases!

Instead: we leave the UNIQUE constraint on the column. In the (very unlikely) event of two simultaneous requests with duplicate values, they both pass validation, but the second INSERT will fail with a DB error, as the DB has to process the INSERTs sequentially. Obviously a DB error isn't great, but you'll soon know why, and in most cases it may never happen.

Another point - what if two apps run off your database and you don't control the code for the other app?

Always treat your database as a "single source of truth" - it should never be allowed to come into a state that's inconsistent with the "truth" it represents.

21st November, 2017

gregrobson left a reply on Soft Delete On Columns With 0 Or 1 Instead Of Timestamp • 3 weeks ago

Excellent!

You will notice in the file you extended:

// Illuminate\Database\Eloquent\SoftDeletes.php, lines 19-22

    public static function bootSoftDeletes()
    {
        static::addGlobalScope(new SoftDeletingScope);
    }

This is where Laravel adds a scope for the queries on the model. (Laravel's internals just uses the query scopes that are mentioned in the main documentation.)

Again, if you extend SoftDeletingScope to MySoftDeletingScope you can override its behaviour... here are a few notes on some of the methods. I think you'll get the idea - amend all the applicable methods that add where() clauses to the queries.


// When the scope is added, by default it applies whereNull() on the deleted_at
// column... just change this to where($model->getQualifiedDeletedAtColumn(), 0);
    public function apply(Builder $builder, Model $model)
    {
        $builder->whereNull($model->getQualifiedDeletedAtColumn());
    }

// The extend() method calls addRestore(), addWithTrashed(), addWithoutTrashed(), addOnlyTrashed().
// e.g. addWithTrashed() becomes MyModel::withTrashed() when you use the query builder.
// Each of the four methods adds bits to the query it's constructing, so you just need to amend each query's sections.

// ... e.g. here withoutTrashed() will add whereNull() on the deleted_at column. Just amend this to your needs.
    protected function addWithoutTrashed(Builder $builder)
    {
        $builder->macro('withoutTrashed', function (Builder $builder) {
            $model = $builder->getModel();

            $builder->withoutGlobalScope($this)->whereNull(
                $model->getQualifiedDeletedAtColumn()
            );

            return $builder;
        });
    }

Once you change your new trait to use the new scope it should apply the different parameters for withTrashed(), withoutTrashed() etc.

gregrobson left a reply on Database Structure • 3 weeks ago

If you have a lot of fields that are diverging from a common set then a 1-to-1 relationship would be the best option.

Loose table definitions:

clients(id, type, middlename, email ...)
person(id, client_id, pass_number, pass_lastname, pass_name ...)
entity(id, client_id, entity_position, entity_name, entity_unp ...)

// All the person-type clients
... SELECT * FROM clients INNER JOIN person ON clients.id = person.client_id
// All the entity-type clients
... SELECT * FROM clients INNER JOIN entity ON clients.id = entity.client_id

The id fields on person and entity are optional, but I tend to put them in for future use. Unique keys will be required on client_id for both of those tables to enforce 1-to-1 and prevent 1-to-many.

20th November, 2017

gregrobson left a reply on Soft Delete On Columns With 0 Or 1 Instead Of Timestamp • 3 weeks ago

Yes, although it's not super-easy. Have a look at Illuminate\Database\Eloquent\SoftDeletes.php

If you extend this trait, you can override some of the behaviour: in particular runSoftDelete(), as this is what assigns a time to the DELETED_AT column. (Restoring is easy, as the DELETED_AT column is just set back to null.)

After you have extended it, add use MySoftDeletes; (or whatever you call it) to your model instead of the default trait.


  1. You need to be certain that this is absolutely right, you don't want to accidentally flag data as deleted!
  2. I don't know whether this implementation is of your choice, but why throw away extra information and make life harder for yourself? It's a little bit more data to store, but it might be handy in the future.
  3. An alternative might be to use a different "deleted_at" column (e.g. 'removed') and to use query scopes to filter the result set to ignore where removed = 1

19th November, 2017

gregrobson left a reply on Carreer Advice • 3 weeks ago

Many criticise PHP, although it has shaken off some of the stigma by becoming a lot faster since PHP 7. (WordPress and Laravel adoption have helped as well.)

Certainly in the UK you can get a decent salary as a PHP dev and the jobs are probably more common than .NET.

I agree with @mdecooman about the language being irrelevant. After you get comfortable with what you choose, pick a 2nd (perhaps JavaScript as it has a different approach with all the async parts) and start to learn about design philosophies (DRY, SOLID, Domain Driven Design etc) ... those skills are transferable to any language.

18th November, 2017

gregrobson left a reply on SQL Qurry • 3 weeks ago

If you already have the model, then adding an accessor attribute would be easiest:

// Allows you to use $size->id to return the prefixed value
public function getIdAttribute()
{
    return 'S' . $this->size_id;
}

gregrobson left a reply on Storing API Keys And Secrets In Database • 3 weeks ago

I'm not an expert, but obviously anything where the encryption can be reversed (unlike a password hash) could get compromised if the application key is discovered. There's just no avoiding that.

If someone accesses the database only that's fine, without the application key, they cannot decrypt the values. If they get both app key and database then the API keys are vulnerable.

Off the top of my head best practices include making sure that any API key given to your application should follow the "principle of least privilege".

  • If the API key is only to allow sending of emails - that's all the API key should have the permission to do: no user creation, template usage, account manipulation etc. I know you can set the abilities for SparkPost, I imagine the others are similar.
  • Most API keys can be restricted to a range of IP addresses. Your users should be provided with the IP addresses of the servers that will be sending requests to the service's API endpoints, so the keys are only valid for requests from your servers.
  • If users can cap usage, that's an idea as well - it stops anyone who compromises your server from burning through a customer's API quota (a less likely scenario).
  • The keys they provide should be unique to your service, in case you have to ask them to cancel them (see below).

In the (hopefully unlikely) event of a compromise - have a defined procedure of how you will deal with it. In this case it will most likely be an email to customers to tell them to cancel the keys they gave you to stop them being used.

If your database is compromised/inaccessible... how will you email them? Will you have an alternative location (backup) where you can get the current list of active customers?

This should reduce the scope of what people can do. In the above scenario, they would only be able send more emails by making more API calls from your server - that's going to be easy to spot and easy to halt. You then have a very direct route to notify users and get them to arrange new API keys.

15th November, 2017

gregrobson left a reply on Raw SQL To Laravel Query • 4 weeks ago

I see a lot of people who treat the query builder as a requirement for all queries, but it reaches a limit where the "niceness" of writing your query in PHP is undone: you lose sight of the query and the code becomes harder to maintain.

If you got it working with the query builder and had to change it, you would need to spend a lot of time debugging it, or keep a copy of the SQL equivalent as a comparison (toSql() is not going to give you decent indentation of the query).

Once you start using window functions it's rarely worth the effort.

My advice would be to use the repository pattern (Jeffrey has a tutorial on here about it) and to put the whole query into a DB::select($query, $bindings) call so you can format it properly. (If you need to use collection functions on the result, just wrap it: collect(DB::select('QUERY HERE')).)

Then if you need to test parts of the query, you can copy and paste them into your SSMS or other software and transfer them back in when they work.

    SELECT *
    FROM (
        SELECT
            a.*,
            ROW_NUMBER() OVER (
              ORDER BY
                CASE
                  WHEN (
                    indesign.status = 4
                    OR indesign.statusdate IS NULL
                  ) THEN getdate() + 2
                  ELSE indesign.statusdate
                END
              ASC) AS RowNum
        FROM sapakInAdminOrder a
        LEFT JOIN currency cu
          ON cu.id = a.currency
        LEFT JOIN moodboards m
          ON m.id = a.orderMoodboardID
        INNER JOIN Clients b
          ON a.clientID = b.id
        LEFT JOIN moodboards mc
          ON mc.id = b.moodboardID
        INNER JOIN Sapakim c
          ON b.sapakID = c.id
        LEFT JOIN Sapakim sm
          ON sm.id = c.managerid
        LEFT JOIN products p
          ON p.id = a.productKey
        LEFT JOIN (
            SELECT *
            FROM (
              SELECT
                *,
                ROW_NUMBER() OVER(PARTITION BY orderId ORDER BY id DESC) AS r
              FROM orderCommunication
            ) f
            WHERE r = 1
          ) chat
          ON chat.orderId = a.id
        LEFT JOIN (
            SELECT *
            FROM (
              SELECT
                *,
                ROW_NUMBER() OVER(PARTITION BY orderid ORDER BY id DESC) AS r
              FROM SapakimInAdminDesigns
            ) f
            WHERE r = 1
          ) indesign
          ON a.id = indesign.orderid
        WHERE
          (a.isDeleted IS NULL OR a.isDeleted != 1)
          AND c.inAdminManagerID = (
              SELECT id
              FROM sapakim
              WHERE sapakguid = 'test'
          )
          AND c.sapakguid = 'test'
          AND a.isFreeDesign = 0
          AND a.transactionID = -1
          AND (
            a.designerPaid IS NULL
            OR a.designerPaid = 0
          )
          AND (
            chat.sentToPrinter IS NULL
            AND chat.sentToManager IS NULL
            AND chat.sentToDesigner IS NULL
          )
    ) bb
    WHERE
      RowNum >= 1
      AND RowNum < 31

14th November, 2017

gregrobson left a reply on How To Use Database Engine's UUIDs With Eloquent? • 1 month ago

@martinbean - you learn something new every day!

11th November, 2017

gregrobson left a reply on UUID Usage • 1 month ago

UUIDs are more beneficial for really large-scale distributed systems, as they are very nearly guaranteed to be unique (you might wait hundreds of years before getting a duplicate, even if you produced thousands every hour).

I agree with @shez1983 - integers (4 bytes) or big integers (8 bytes) are smaller than UUIDs (16 bytes). This is not just a storage issue - bigger data types mean more memory required for database joins and larger indexes.

In addition, they are not sequential - depending on the database vendor this makes them a lot less efficient, as any primary key index or other index will be filled in a random order, causing the index to fragment.

Hash IDs are better for obfuscation (but are not cryptographically secure).
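For context, a version-4 UUID is just 16 random bytes with a few fixed version/variant bits - a minimal plain-PHP sketch:

```php
// Generate an RFC 4122 version-4 UUID from 16 random bytes.
function uuid4(): string
{
    $bytes = random_bytes(16);                       // 16 bytes, vs 4/8 for (big)int keys
    $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40); // set version 4
    $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80); // set RFC 4122 variant

    // Format as 8-4-4-4-12 hex groups.
    return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($bytes), 4));
}

echo uuid4(); // e.g. "3f2b8c1e-9d4a-4e7b-8f00-1a2b3c4d5e6f" - random, not sequential
```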

10th November, 2017

gregrobson left a reply on How To Use Database Engine's UUIDs With Eloquent? • 1 month ago

Glad we got it sorted in the end. Always nice to see other Postgres users on here :-)

gregrobson left a reply on Missing Required Client Configuration Options: Region While Trying To Implement S3 • 1 month ago

I haven't used tinker - but I believe it doesn't reload config/env environment variables on demand. That might be why you weren't seeing changes?

Incidentally - there's a package that will allow you to reload your session/state when using tinker.

gregrobson left a reply on How To Use Database Engine's UUIDs With Eloquent? • 1 month ago

Okay, if it's saving, then what is the output if you do dd($user); after you save? Perhaps looking at the instance will give something away.

gregrobson left a reply on Missing Required Client Configuration Options: Region While Trying To Implement S3 • 1 month ago

Ah okay.

// Your config (/config/filesystems.php) should contain
's3' => [
    'driver' => 's3',
    'key' => env('AWS_KEY'),
    'secret' => env('AWS_SECRET'),
    'region' => env('AWS_REGION'),
    'bucket' => env('AWS_BUCKET'),
],

// The environment file (/.env) should contain something like this
// (placeholder values shown):
AWS_KEY=your-key
AWS_SECRET=your-secret
AWS_REGION=eu-west-1
AWS_BUCKET=your-bucket

Before testing any changes run php artisan config:clear just in case you have any cached config values. You haven't mentioned anything about caching previously, but it's best to rule that out as well.

gregrobson left a reply on Missing Required Client Configuration Options: Region While Trying To Implement S3 • 1 month ago

Can you echo out the contents of config('filesystems.disks.s3') so it is possible to check that the right env() values are being inserted. (Feel free to censor anything confidential.)

I think @robrogers3 is thinking that it's best to confirm the config has loaded correctly before looking elsewhere at what the problem might be.

9th November, 2017

gregrobson left a reply on How To Use Database Engine's UUIDs With Eloquent? • 1 month ago

In your model is the incrementing property set to false?

class MyModel extends Model
{
    protected $primaryKey = 'uuid';

    public $incrementing = false;
}

Primary keys are assumed to be integers by default (and incrementing) so it's probably trying to cast the uuid string to an integer and returning null.

If that doesn't work, then you might need to explicitly cast the uuid property as well.

    protected $casts = [
        'uuid' => 'string',
    ];

6th November, 2017

gregrobson left a reply on Why Do Model Properties Work • 1 month ago

If you look at your model classes you will see they extend: Illuminate\Database\Eloquent\Model

// When you do
echo $cat->noise;
// PHP first checks for a real property with that name; if it does **not** exist,
// the magic getter is called. See Illuminate\Database\Eloquent\Model:
public function __get($key)
{
    return $this->getAttribute($key);
}
// So the model will return getAttribute('noise')

getAttribute($key) doesn't exist in the Eloquent Model class directly, but is imported via the trait Illuminate\Database\Eloquent\Concerns\HasAttributes. If you look in that file you will see:

public function getAttribute($key)
{
    if (! $key) {
        return;
    }

    // If the attribute exists in the attribute array or has a "get" mutator we will
    // get the attribute's value. Otherwise, we will proceed as if the developers
    // are asking for a relationship's value. This covers both types of values.
    if (array_key_exists($key, $this->attributes) ||
        $this->hasGetMutator($key)) {
        return $this->getAttributeValue($key);
    }

    // Here we will determine if the model base class itself contains this given key
    // since we don't want to treat any of those methods as relationships because
    // they are all intended as helper methods and none of these are relations.
    if (method_exists(self::class, $key)) {
        return;
    }

    return $this->getRelationValue($key);
}

So essentially, if a real property doesn't exist, the key is checked against $this->attributes. If it exists there, or has a mutator, then getAttributeValue() is called - this is where any casting or custom mutations are done. If all else fails it returns a relation value (if one exists), e.g. $comment->post might return the post that the comment belongsTo(), as it is a relation.

The same works in reverse, so $cat->noise = 'purr' will end up calling __set($key, $value) as __set('noise', 'purr').
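A stripped-down, framework-free class shows the same mechanism (this is an illustration, not Eloquent's actual implementation):

```php
// Minimal stand-in for Eloquent's attribute handling.
class TinyModel
{
    protected $attributes = [];

    public function __get($key)
    {
        // Only called when no accessible real property named $key exists.
        return $this->attributes[$key] ?? null;
    }

    public function __set($key, $value)
    {
        // Likewise, only called when no accessible real property exists.
        $this->attributes[$key] = $value;
    }
}

$cat = new TinyModel();
$cat->noise = 'purr'; // routed to __set('noise', 'purr')
echo $cat->noise;     // routed to __get('noise')
```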

It's worth reading through the model and the concerns that are imported. There are some useful public methods that are not in the Laravel Docs because they are not needed as frequently. However they have helped simplify my code from time to time.

5th November, 2017

gregrobson left a reply on Adding A Space.... • 1 month ago

Agreed @shez1983 - My expertise is SQL - when I saw the original question, I overlooked the Laravel best practice! Doh!

gregrobson left a reply on Adding A Space.... • 1 month ago

@jimmck While || is the SQL "standard"... - it's not supported in SQL Server. So it's not quite as standard as it could be!

gregrobson left a reply on Adding A Space.... • 1 month ago

Aha, I see: be careful with your single quotes, try:

  \View::share('sel_userid', User::selectRaw('first_name + \' \' + last_name as full_name, id')->where('status', 1)->pluck('full_name', 'id'));
// or 
  \View::share('sel_userid', User::selectRaw("first_name + ' ' + last_name as full_name, id")->where('status', 1)->pluck('full_name', 'id'));

Without escaping them, PHP sees 'first_name + ' and ' + last_name as full_name, id' as two strings, one after the other.

gregrobson left a reply on Adding A Space.... • 1 month ago

That should work... what error are you getting?

Personally, I would select both separately and just use...

{{ $first_name }} {{ $surname }} in your template. It's more flexible to make changes to your presentation in your presentation layer than in your database layer.

4th November, 2017

gregrobson left a reply on Selected="selected" Doesn't Work In Vue? • 1 month ago

I think you have overlooked how Vue's model binding works. I've forked a template on Codepen with an example.

N.B. When Vue starts up it will remove items like selected="selected" - the Vue model determines the properties - nothing else.

You will see in the JavaScript on line 743 I have put selected: 'GB', - when Vue has instantiated it will see "GB" and set the appropriate item as selected in the form element.

If you are using Vue then you should just have a template in HTML.

<select v-model="chosenComp">
    <option v-for="comp in competitions" v-bind:value="comp.slug">{{ comp.name }}</option>
</select>

...where your data resembles...

data: {
  chosenComp: "",
  competitions: [
    { slug: "game-of-life", name: "Game of Life" }
    // more competitions here
  ]
}
After the data has been updated, assign chosenComp = competitions[0].slug - v-model then gets updated and the first option will be highlighted (assuming you have at least one competition in the array).
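Putting the pieces together, a minimal Vue instance might look like this (a sketch assuming Vue 2 is loaded on the page; the competition data is made up):

```js
new Vue({
  el: '#app',
  data: {
    chosenComp: '',
    competitions: []
  },
  created: function () {
    // e.g. after fetching competitions via AJAX:
    this.competitions = [
      { slug: 'game-of-life', name: 'Game of Life' },
      { slug: 'tic-tac-toe', name: 'Tic Tac Toe' }
    ];

    // Assigning to the v-model property is all that is needed -
    // Vue will mark the matching <option> as selected.
    this.chosenComp = this.competitions[0].slug;
  }
});
```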

3rd November, 2017

gregrobson left a reply on Recommendation For Laravel Printed Book • 1 month ago

@mikestratton Laravel Up and Running is by @mattstauffer - he has written many blog posts about Laravel topics so I imagine it's a good book :-)

1st November, 2017

gregrobson left a reply on DB:raw,selectraw Are They The Same? • 1 month ago

They are almost the same; selectRaw() allows bindings. Looking at the source code:

    // Line 232, /Illuminate/Database/Query/Builder.php
    public function selectRaw($expression, array $bindings = [])
    {
        $this->addSelect(new Expression($expression));

        if ($bindings) {
            $this->addBinding($bindings, 'select');
        }

        return $this;
    }

    // Line 835 /Illuminate/Database/Connection.php
    public function raw($value)
    {
        return new Expression($value);
    }

Based on the method signatures:

// You can do bindings with selectRaw()
->selectRaw('complex_thing(column_name, ?)', [123]);

// but there's no way to do bindings with DB::raw()
->select(DB::raw("no_bindings_allowed('fixed', 'values', 42)"));

You could manually insert a value into the DB::raw() string above, but you need to validate it to make sure that code injection is not possible.
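For example, a short sketch of whitelisting user input before concatenating it into a raw expression (the orderByRaw() usage and column names are just for illustration):

```php
// $direction ultimately comes from user input, so whitelist it first -
// never concatenate unvalidated input into a raw SQL string.
$direction = strtolower($request->input('dir', 'asc'));

if (! in_array($direction, ['asc', 'desc'], true)) {
    $direction = 'asc';
}

$users = DB::table('users')
    ->orderByRaw('created_at ' . $direction)
    ->get();
```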

31st October, 2017

gregrobson left a reply on Payment Gatways • 1 month ago

If your client wishes to have multiple payment gateways then you can add the variables for each one in your .env file.

You seem to hint at managing them: I'm guessing that means setting API keys? If that's the case then you should definitely be encrypting any API keys that are being stored in the database.

gregrobson left a reply on Query Expection • 1 month ago

You have a space after the word 'slug'

// the query error says
where `slug ` = 'lorem'
// (note the space before the final backtick)

25th October, 2017

gregrobson left a reply on Crazy Domains Issue • 1 month ago

Have you tried checking what DNS records are actually visible on the internet (e.g. with dig or an online DNS lookup tool)?

Could you post the domain name, IP address of the server, etc.? It would be useful to anyone helping you solve the problem.

23rd October, 2017

gregrobson left a reply on Migration Question • 1 month ago

As far as I know, there is no "ready to roll" solution - possibly because this is not as easy as looping through each database and running the migration.

For point 1 - you might want to keep a "clean state" SQL script that can be run on the appropriate connection to create the tables up to the current moment and populate any defaults or rows that are required for a new installation.

Point 2, running a migration on all tenants: You might have a common table that has the details of each connection and what "version" the schema is. e.g. The current version is 1701 (I'm using the first two digits as major version changes and last two as minor).

A script can check each tenant and apply any migration files or raw scripts that are higher than the current version. e.g if the current is 1701, then 1702.sql, 1703.sql, 1800.sql and 1801.sql might be run against that tenant. Each one would update the data for the tenant so the same migration is not run again.
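A rough sketch of that loop (the table and column names are hypothetical; it assumes migration scripts named by version, e.g. 1702.sql, in a tenant-migrations directory):

```php
// For each tenant, apply every script newer than its recorded version,
// bumping the recorded version as each one succeeds.
foreach (DB::table('tenants')->get() as $tenant) {
    $pending = collect(glob(database_path('tenant-migrations/*.sql')))
        ->filter(function ($path) use ($tenant) {
            return (int) basename($path, '.sql') > $tenant->schema_version;
        })
        ->sort();

    foreach ($pending as $path) {
        DB::connection($tenant->connection)
            ->unprepared(file_get_contents($path));

        DB::table('tenants')
            ->where('id', $tenant->id)
            ->update(['schema_version' => (int) basename($path, '.sql')]);
    }
}
```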

I would advise making your own migration tool/artisan command. Depending on the size of your tenant databases you might run into the following:

  • Long-held table locks while doing DDL statements (e.g. adding/dropping columns)
  • High CPU and I/O when tables are being rewritten
  • Higher bandwidth and disk space usage: the DB engine might need large temporary files while making structural changes; in addition, it might write a lot of data to the write-ahead log, which might be replicated to another machine.

I think the above are reasons why there is no "fits all" tool.

The easiest way to resolve the above is to have a mechanism to only allow major migrations between certain hours, or be able to throttle migrations over a longer time span to not increase the overhead on the server.

When deploying you might want to separate deployments or have flags for activating features. After deploying a migration it might be hours (even days) before you have run through all of the tenants. Once you have validated they have all successfully updated you can activate/deploy the application code for the new feature.

I hope this helps. It's not a particularly hard problem, but the solution is often unique to the application being built. (You could have 1000s of small tenants, or just a hundred really big tenants!)

Stripe has an excellent blog post on how they migrate tables in their huge database.

21st October, 2017

gregrobson left a reply on Payment Using Stripe In Multi Tenancy • 1 month ago

If you're asking about each tenant having their own Stripe account that's unique to them, then Stripe Connect is the product you're looking for.

Each tenant/customer that signs up (e.g. a plumbing business) gets their own account. You can use Stripe's APIs to assign an account to each tenant automatically (seamlessly, in the background). You can then take fees when people transact through the platform (similar to how Uber deducts fees when you pay a taxi driver through their platform).
