I even subscribed to it, but I probably didn't read that part ^^ I will do better!
But it would actually be great, if this is a known problem, for Laravel itself to offer an implementation so that the problem no longer exists.
I have also encountered other problems related to heavy load: it is nice to check with Cache::has whether something is in the cache, but in the next moment, when you fetch it with Cache::get, it may no longer exist. This forced us to adapt our code to make it safe under high load.
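The has()/get() race described above can usually be avoided by reading the key only once. A minimal sketch, assuming Laravel's Cache facade; recomputeStats() is a hypothetical helper standing in for whatever rebuilds the cached value:

```php
<?php

use Illuminate\Support\Facades\Cache;

// Racy pattern: the key can expire or be evicted between these calls.
// if (Cache::has('stats')) { $stats = Cache::get('stats'); }

// Safer: a single get() with a null check...
$stats = Cache::get('stats');
if ($stats === null) {
    $stats = recomputeStats(); // hypothetical recompute helper
}

// ...or remember(), which recomputes and stores the value on a miss
// in one call (here with a 300-second TTL).
$stats = Cache::remember('stats', 300, fn () => recomputeStats());
```

Note that this collapses the check and the read into one operation, so the cache entry can no longer disappear "between" them.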
What an incredibly fast answer! Thank you, that looks very good; we will try it out!
Started a new Conversation Best Practice: FirstOrCreate On High Load System
We currently have an interesting problem on a system with higher access rates: we provide an API call that outputs information based on the given parameters. At the beginning of the corresponding controller we have this call:
$user = \App\Models\User::firstOrCreate([ 'email' => auth()->user()->email ]);
Now we have already had several situations where we found a message in the logfile saying the email address could not be added because it already exists (the email field is set to unique). In every case, the identical API call was sent three times in a row, and then the error occurred. I am convinced that within a fraction of a second the following happens: the record from the first call has not yet been committed, so the second call does not find it and tries to create it. In the meantime, the database transaction creating the record has completed.
My idea now would be to no longer use firstOrCreate, but to run a query (first) first. If the result is empty, I would create the record with create and, if necessary, add a short artificial pause with sleep to be sure the data was written. Is that practical? The alternative would be to create the records in advance, but I actually wanted to create them just in time to keep the amount of data down to what is necessary.
What is a good way to proceed here?
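For what it's worth, a sleep() will only shrink the race window, not close it. One common workaround is to let the unique index on email arbitrate the race: attempt the firstOrCreate, and if a concurrent request wins the insert, catch the resulting QueryException and re-read the row. A sketch of that pattern, assuming the column really has a unique index:

```php
<?php

use Illuminate\Database\QueryException;

try {
    $user = \App\Models\User::firstOrCreate([
        'email' => auth()->user()->email,
    ]);
} catch (QueryException $e) {
    // A concurrent request inserted the row between our SELECT and
    // INSERT; the unique index rejected our INSERT, so the row must
    // exist now -- fetch it instead of failing the request.
    $user = \App\Models\User::where('email', auth()->user()->email)
        ->firstOrFail();
}
```

Newer Laravel versions also ship a createOrFirst() method that implements essentially this insert-first-then-catch strategy; if your version has it, that is the simpler choice.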
Started a new Conversation Anyone With Tips About WSL2 With VS Code And Docker Setup (LAMP With Docker)?
Over the last few days I used my free time to work on some topics that should make my development more efficient. Now I've reached the point of "How can I set up Docker for use with VS Code?". So far I have read the documentation, and I can set up a running container with a code folder in it and also get "php artisan serve" running. Next I want to improve the setup and make the environment more like the production system. That means I want to create separate containers for php, redis and mysql with docker-compose. I'm a bit stuck at this point and may not see the forest for the trees. I'm experienced with Docker and also run some servers in production with it; what's missing is the experience of how to connect it with VS Code.
So my question is: does anybody have WSL2 running with Docker, and on top of that docker-compose with a LAMP setup like the one I want? Can anyone give me tips on how to get it running or how to integrate it with VS Code? In my case I want to run an (API) backend and develop against it.
My sample docker-compose.yml:
version: "3"
services:
  proxy:
    image: nginx:alpine
    restart: "no"
    volumes:
      - ../../core/setup/preset/proxy/etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ../../core/setup/preset/proxy/srv/www:/srv/www
      - ./data/proxy/var/log/nginx:/var/log/nginx
    env_file:
      - ./config/docker.env
    depends_on:
      - backend
      - management
      - administration
    networks:
      - webservices
      - default
  redis:
    image: redis
    restart: "no"
    sysctls:
      - net.core.somaxconn=512
  database:
    image: mysql:5.7
    restart: "no"
    env_file:
      - ./config/database.env
    volumes:
      - ./data/database/var/lib/mysql:/var/lib/mysql
  backend:
    build:
      context: /docker/core/setup/files/php72-apache-vdb
      dockerfile: Dockerfile
    restart: "no"
    #ports:
    #  - "8080:80"
    working_dir: /var/www/html
    volumes:
      - ./data/backend/srv/www:/var/www/html
      - /docker/core/setup/preset/backend-apache/etc/crontab:/etc/crontab
      - /docker/core/setup/preset/backend-apache/run.sh:/run.sh
      - /docker/core/setup/preset/backend-apache/etc/apache2/envvars:/etc/apache2/envvars
      - /docker/core/setup/preset/backend-apache/etc/apache2/apache2.conf:/etc/apache2/apache2.conf
      - /docker/core/setup/preset/backend-apache/etc/apache2/sites-enabled/000-default.conf:/etc/apache2/sites-enabled/000-default.conf
    depends_on:
      - database
      - redis
    env_file:
      - ./config/backend.env
  management:
    build:
      context: /docker/core/setup/files/nginx-vuejs
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
    working_dir: /app
    restart: always
    volumes:
      - ./data/management/app:/app
      - ./data/management/var/log/nginx:/var/log/nginx
    command: ["nginx"]
  administration:
    build:
      context: /docker/core/setup/files/nginx-vuejs
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
    working_dir: /app
    restart: always
    volumes:
      - ./data/administration/app:/app
      - ./data/administration/var/log/nginx:/var/log/nginx
    command: ["nginx"]
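In case it helps anyone with the VS Code side: the Remote - Containers extension can attach directly to one service of an existing docker-compose project. A minimal .devcontainer/devcontainer.json sketch; the service name "backend" and the workspace path come from the compose file above, the extension ID is just an example:

```json
{
  "name": "laravel-backend",
  "dockerComposeFile": ["../docker-compose.yml"],
  "service": "backend",
  "workspaceFolder": "/var/www/html",
  "extensions": ["bmewburn.vscode-intelephense-client"]
}
```

With this in place, "Reopen in Container" starts the compose project and opens VS Code inside the backend container, so the editor, terminal, and debugger all see the same environment as the running PHP service.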
@nexxai Currently we are primarily concerned with reliability rather than with a potentially high load, so our setup should be okay for now. But I'll take the constructive feedback with me: if the workload increases, we will adjust the setup.
@martinbean Am I being too cautious; are my fears unfounded? Or are there challenges I would have to consider when the same queue runs on two servers simultaneously?
Thanks for your very comprehensive and detailed answer! Regarding your suggestion not to run the queue on the same server: our application is based on different microservices, and the number of instances is scaled according to the required availability, performance, and resilience. Accordingly, I think a running worker in each instance should be okay. As long as no instance fails, the work should be distributed somehow; and if one instance does fail, the others are there to pick up the work.
I will look into switching the queue backend from MySQL to Redis. Independently of that, it would still be interesting for me to know: if multiple instances of the same microservice are running, each with one worker, is it possible that an unwanted double execution could occur?
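If the concern is duplicate dispatches rather than duplicate pickups, newer Laravel versions (8+) can enforce uniqueness at dispatch time via the ShouldBeUnique contract, provided all instances share one cache (e.g. Redis). A sketch with a hypothetical SyncUserData job:

```php
<?php

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;

class SyncUserData implements ShouldQueue, ShouldBeUnique
{
    use Queueable;

    public function __construct(public int $userId)
    {
    }

    // Only one SyncUserData job per user may sit on the queue at a
    // time; further dispatches with the same uniqueId are dropped
    // until the queued job finishes.
    public function uniqueId(): string
    {
        return (string) $this->userId;
    }
}
```

The uniqueness lock lives in the shared cache, so it holds across all servers and workers, not just within one instance.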
Started a new Conversation Are There Any Pitfalls With Same Queues On Different Servers?
I have a situation where a Laravel installation is running on two servers simultaneously and the load is distributed by a load balancer. At the moment I only have queues running on one server, because I'm concerned if this might lead to suboptimal behaviour.
My primary concern is that if I run the same queue on both servers at the same time, the same job could be started on both servers. Actually, this shouldn't happen, because one server will always be slightly faster and will reserve the job for itself, right?
Apart from that, what other challenges should I be aware of in such a scenario? If I have understood it correctly, my concerns should actually be unfounded, right?
My queue is handled by a MySQL database.
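For context on why double pickup shouldn't normally happen: the database queue driver reserves a job inside a transaction with a pessimistic lock on the jobs row, so only one worker can claim it, regardless of which server that worker runs on. The one realistic source of double execution is a job that runs longer than the driver's retry_after window, after which the queue assumes the worker died and releases the job to someone else. A sketch of the relevant fragment of config/queue.php; the values here are illustrative, not recommendations:

```php
// config/queue.php (fragment): with the database driver, a job still
// running when retry_after expires is treated as stalled and handed
// to another worker -- across servers, that looks like a duplicate
// execution. Keep retry_after longer than your slowest job.
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'default',
    'retry_after' => 90, // seconds; raise this above your longest job's runtime
],
```

So with two servers, the checklist is mostly: same codebase deployed to both, retry_after larger than any job's runtime, and restarting the workers on both servers after each deploy.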