Thanks jlrdw, makes sense. So in case anyone is researching the same thing, here is what I found (strangely, I haven't found a similar summary in the Nova docs):
I just used the standard Laravel make:auth command to create my login interface, password reset, users table structure, etc. Then I installed Nova, which seamlessly started using this Laravel auth. Nova just has its own login page at /nova/login, which looks slightly different, but whether I log in through that form or the standard make:auth /login, it is the same user session, so no duplication happens.
As I wanted simple user roles, I wrote a migration to add a "role" column to the users table, and then modified the gate() method in app/Providers/NovaServiceProvider to check whether the user has the "admin" role. This handles overall access to Nova, which is all I need, so there's no need to create Policy classes, etc.
Overall I am very happy with how nicely Nova blends with the existing Laravel setup, great job. Hope this helps someone, cheers
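For reference, a minimal sketch of what that gate() change can look like. The "role" column name and the "admin" value are from my own setup, so adjust them to yours:

```php
// app/Providers/NovaServiceProvider.php

use Illuminate\Support\Facades\Gate;

/**
 * Register the Nova gate.
 *
 * Only users whose "role" column equals "admin" may access Nova.
 */
protected function gate()
{
    Gate::define('viewNova', function ($user) {
        return $user->role === 'admin';
    });
}
```

Regular users still log in through the normal /login form as usual; they simply fail this gate if they try to open /nova.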
Hi, on our new project we want to use Nova, but I am still unsure whether it is a good idea to use it together with the basic authentication set up by php artisan make:auth. We mostly want just a few admin users who will have access to Nova to administer everything, and then regular users who need to be logged in for some actions in the front-end of the application, but with no access to Nova at all.
So I am wondering if I should run artisan make:auth first to handle standard users and then install Nova on top of that, or if Nova itself can replace all the middleware provided by make:auth, in which case having both would be redundant / bad practice. Obviously I want a single users table and to stick to best practices, but as the roles won't be mixed, I would like to simply say that admins can access Nova, regular users cannot, and not worry about it much anymore. What setup would be most appropriate? Thanks a lot
@cronix - thanks a lot for the explanation, that indeed makes sense, and I like the idea of having one server specifically for testing updates. The problem is that I set up the servers over quite a long timespan, maybe two years, so some servers are on Ubuntu 16, some on Ubuntu 18, and I'm not sure if some are still on 14 as well.
So syncing them all now to run the same environment, which I could then mirror into a test server, looks pretty difficult. Or do you reckon it is something a user with just general server knowledge can do?
Ok, thanks. Is there anyone else who could give their opinion? I was hoping I could just hand this off to cron, schedule it once per week at some early morning hour, and not worry about it, but I see all the benefits of doing it manually.
I have several servers to worry about, so I am looking for an efficient way to avoid killing half of my day doing this. Also, we usually have the staging environment on the same server; it seemed like a good idea to make sure both environments run the same versions of nginx, MySQL, etc. So testing on a staging server unfortunately isn't an option here.
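For the cron-style approach, Ubuntu ships the unattended-upgrades package, which can install security updates automatically on a schedule. A minimal sketch (the package name and config path are the standard Ubuntu ones, but verify against your release):

```
# Install and do the initial setup once:
#   sudo apt install unattended-upgrades
#   sudo dpkg-reconfigure --priority=low unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Note that by default this only applies security updates, and kernel updates still need a reboot to take effect, so the *** System restart required *** notice can remain until you reboot manually.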
@AUDUNRU - Hey Audunru, thanks. I am just wondering: at the bottom of the info text when I log in, it says *** System restart required ***. I always thought this was related to the updates; if not, then what can be the reason for this restart?
Hi, every time I log in to my Forge server, I am greeted with a message like "150 packages can be updated, 1 update is a security update", so I am wondering how to address this.
On Digital Ocean, they recommend a fairly involved procedure: shutting the server down, making a snapshot while it is powered off, then switching it back on and running sudo apt update and sudo apt upgrade.
I've been reading about how often people usually do this, and it seems they often do it several times a week, which would be quite annoying if always done this way. Plus we also have production data there, so we obviously don't want outages that often. So what I am generally wondering is:
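For reference, the in-place variant of that procedure (skipping the snapshot) boils down to something like the following. This is just a sketch of the standard apt workflow, not Forge-specific advice:

```
# Refresh the package lists, then apply available upgrades
sudo apt update
sudo apt upgrade        # or: sudo apt dist-upgrade for held-back packages

# Kernel and core-library updates only take effect after a reboot;
# Ubuntu flags this with a marker file
if [ -f /var/run/reboot-required ]; then
    sudo reboot
fi
```

The snapshot step in the DigitalOcean guide exists so you can roll back if an upgrade breaks something, which is why people recommend testing upgrades elsewhere first.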
Thanks a lot
I would love to know this as well. I am using a single droplet for several WordPress sites, and it seems like a security issue: if one site gets hacked, the intruders have access to all the other sites on the server. I am using Digital Ocean, and they have a tutorial on separating users here: https://www.digitalocean.com/community/tutorials/how-to-host-multiple-websites-securely-with-nginx-and-php-fpm-on-ubuntu-14-04 , but since Forge uses a single "forge" user to connect to the server, I am worried I will lose access to any sites I move under a different user.
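For anyone looking at the same approach, the core of that DigitalOcean tutorial is giving each site its own php-fpm pool running as its own Unix user, roughly like this (the site name, user, PHP version, and socket path below are placeholders for illustration):

```
; /etc/php/7.2/fpm/pool.d/site1.conf
[site1]
user = site1
group = site1
listen = /run/php/php7.2-fpm-site1.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

Each site's nginx server block then points fastcgi_pass at its own socket, so a compromised site runs as a user that cannot read the other sites' files. How this interacts with Forge's single "forge" user is exactly the open question here.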
Hey Dunsti, thanks for pointing me in the right direction. Dusk seems to have a similar feature using an .env.dusk config file, so I just pointed the Dusk tests there and use DatabaseMigrations in each test, so the database is only populated during the test and emptied again afterwards. That will do just fine :)
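For anyone else landing here, the setup described above looks roughly like this. The database name is a placeholder, and the rest of the env file should mirror your normal .env:

```
# .env.dusk.local -- Dusk swaps this in for .env while the tests run
APP_ENV=local
DB_DATABASE=dusk_testing
```

```php
<?php

namespace Tests\Browser;

use Illuminate\Foundation\Testing\DatabaseMigrations;
use Laravel\Dusk\Browser;
use Tests\DuskTestCase;

class ExampleTest extends DuskTestCase
{
    // Migrates the (separate) Dusk database before each test
    use DatabaseMigrations;

    public function testHomepageLoads()
    {
        $this->browse(function (Browser $browser) {
            $browser->visit('/')->assertSee('Laravel');
        });
    }
}
```

Because Dusk drives a real browser against a real HTTP server, DatabaseTransactions cannot work (the server process would not see the test's uncommitted data), which is why a dedicated database plus DatabaseMigrations is the usual answer.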
Hi guys, I just started writing my first tests in Laravel 5.6, but I am confused about the best setup so the tests don't mess up the live database. I would like the tests to be runnable both locally and in production, so I can test the real application after I deploy a change. Can this be done?
I use both Dusk and PHPUnit, and Dusk doesn't seem to support DatabaseTransactions, which would otherwise be my preferred option. If I use RefreshDatabase or DatabaseMigrations, they will, AFAIK, reset the DB; and if I don't use them, test data will appear in the live tables. So my latest assumption is that Dusk tests cannot be run on a live site. Is that so, or am I missing something?
I've read something about having a separate database for Dusk tests, and I could also mock some of the functionality, but I would prefer a general solution where tests run against whatever data is there at the moment and the site returns to its pre-test state once testing is finished.
Or is there a better/recommended setup that I am missing? Thanks a lot.
@pbdev I would love to know this as well. As far as I can tell, the problem is in the putFileAs() method of vendor/laravel/framework/src/Illuminate/Filesystem/FilesystemAdapter.php, which stream-copies the file from the temp folder to its final location.
I'm not sure if there is any method to assign the MIME type again once the file has been copied, but apparently if an image with the MIME type "application/octet-stream" is served as the src="..." of an image tag, FF, Chrome, and Safari all render it fine.
EDIT: I found here: http://stackoverflow.com/questions/29298313/images-stored-on-amazon-aws-s3-not-rendered-in-internet-explorer that IE might have problems, but that was probably because those images were missing an extension. I just ran some tests and the images loaded correctly in FF, Chrome, Safari, and IE 9, 10, and 11.
So I believe that as long as you serve the image URLs as img src="..." attributes, it should be fine. Still, it would be nice to have this fixed in the future.
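Until that changes, one possible workaround is to set the MIME type explicitly at upload time: putFileAs() accepts an options array that is passed through to Flysystem, and the AWS S3 adapter maps a "mimetype" option to the object's ContentType. This is a sketch only; I haven't verified the exact option key against every Laravel 5.x release, so check your adapter version:

```php
use Illuminate\Support\Facades\Storage;

// Hypothetical controller snippet: store an upload on S3
// while preserving its detected MIME type
$file = $request->file('image');

Storage::disk('s3')->putFileAs(
    'images',
    $file,
    $file->hashName(),
    ['mimetype' => $file->getMimeType()] // mapped to S3 ContentType by the adapter
);
```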