
AffData

Reverb started returning 500 Internal Server Error

Everything had been working well for almost a year; then suddenly, every 1-2 hours, it would go down until I restarted the daemon, after which we'd get a few more hours before it went down again.

What's interesting is that the server itself is barely under load (we use a dedicated server for Reverb); it has plenty of resources left.

It all started with Pusher error: cURL error 35: OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to domain:443, so I went ahead and enabled the Cloudflare proxy, and now every now and then we get batches of these 500 server errors.

What changed is our traffic: we're getting 10x the traffic we used to, so load may be the reason.

Log from the queue server, which can't seem to send events to the Reverb server until the daemon is restarted:

. {"exception":"[object] (Illuminate\\Broadcasting\\BroadcastException(code: 0): Pusher error: <html>
May 27 12:32:37 PM <head><title>500 Internal Server Error</title></head>
May 27 12:32:37 PM <body>
May 27 12:32:37 PM <center><h1>500 Internal Server Error</h1></center>
May 27 12:32:37 PM <hr><center>nginx</center>
May 27 12:32:37 PM </body>
May 27 12:32:37 PM </html>

I noticed there's a fix in the changelog which says: [1.x] Adds support for reconnecting to Redis if disconnected by server by @joedixon in https://github.com/laravel/reverb/pull/281

So I upgraded to the latest version, 1.5.0, but it didn't help.

Any ideas?

LaryAI

Certainly! Here’s a structured approach to diagnosing and resolving your issue with Laravel Reverb returning 500 Internal Server Errors, especially under increased traffic:


1. Root Cause Analysis

From your description, the issue started after a significant increase in traffic and is accompanied by:

  • Pusher error: cURL error 35: OpenSSL SSL_connect: SSL_ERROR_SYSCALL
  • 500 Internal Server Error responses from nginx
  • The need to restart the Reverb daemon to temporarily resolve the issue

This suggests a resource exhaustion or connection handling issue (a quick check is sketched after this list), possibly related to:

  • Too many open connections (file descriptors)
  • Exhausted Redis connections
  • Nginx or PHP-FPM limits
  • Cloudflare proxy interference
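A quick way to see which of these you're hitting is to count what the Reverb process is actually holding when the errors start. A minimal sketch, assuming the daemon's command line contains reverb:start and that it listens on port 8080 (adjust both to your setup):

# PID of the Reverb daemon (the pattern is an assumption)
REVERB_PID=$(pgrep -f 'reverb:start' | head -n1)

# Open file descriptors held by the process (roughly one per WebSocket connection)
ls /proc/$REVERB_PID/fd | wc -l

# Established TCP connections on the Reverb port (8080 is an assumption)
ss -Htn state established '( sport = :8080 )' | wc -l

If the descriptor count sits near the limit from ulimit -n (checked below), that's the bottleneck.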

2. Immediate Steps

A. Check System Resource Limits

Check the number of open files allowed for your Reverb process:

ulimit -n

If this is low (e.g., 1024), increase it in your systemd service or shell:

ulimit -n 65535

Or, for a systemd service, add to your service file:

[Service]
LimitNOFILE=65535
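After a systemctl daemon-reload and a service restart, it's worth confirming the limit the running daemon actually inherited, since ulimit -n in your shell doesn't tell you what systemd gave the process. A sketch, assuming the same pgrep pattern as above:

# The "Max open files" row shows the soft and hard limits in effect
grep 'open files' /proc/$(pgrep -f 'reverb:start' | head -n1)/limits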

B. Monitor Redis Connections

If Reverb uses Redis for broadcasting, monitor the number of connections:

redis-cli info clients

If you see a high number of connections, you may need to:

  • Increase maxclients in your Redis config (see the sketch after this list)
  • Ensure Reverb is properly closing connections (the reconnect fix you mentioned is relevant, but if you’re on 1.5.0, it should be included)
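A sketch of checking the ceiling against current usage; the new value is illustrative:

# Current limit vs. currently connected clients
redis-cli config get maxclients
redis-cli info clients | grep connected_clients

To raise the ceiling, set maxclients 20000 (or whatever fits) in redis.conf and restart Redis; note that Redis itself also needs a high enough open-files limit for the setting to take effect.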

C. Check Nginx and PHP-FPM Limits

Check your nginx error logs for more details:

tail -f /var/log/nginx/error.log

Increase worker connections in nginx:

worker_connections 4096;

And in PHP-FPM, increase pm.max_children and related settings.
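As a sketch of where these knobs live (paths and values are illustrative and vary by distro):

# /etc/nginx/nginx.conf
worker_rlimit_nofile 20000;   # top level: fds each worker process may open

events {
    worker_connections 4096;  # inside the events block
}

; /etc/php/8.3/fpm/pool.d/www.conf (path is an assumption)
pm.max_children = 50          ; cap on concurrent PHP-FPM workers

Run nginx -t after editing to validate the configuration before reloading.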

D. Cloudflare Proxy

Cloudflare can sometimes block or rate-limit websocket or API traffic. Try temporarily disabling the Cloudflare proxy (orange cloud) for your Reverb subdomain and see if the issue persists.
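You can compare behavior with and without the proxy without touching DNS by pinning the hostname to your origin IP in curl. A sketch with placeholders (reverb.example.com and 203.0.113.10 stand in for your hostname and origin IP):

# Bypass Cloudflare and hit the origin directly
curl -v --resolve reverb.example.com:443:203.0.113.10 https://reverb.example.com/

# Through Cloudflare, for comparison
curl -v https://reverb.example.com/

If the direct request stays healthy while the proxied one returns 500s, the proxy is part of the picture.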


3. Reverb Configuration

Ensure your Reverb server is configured for high concurrency. In your config/reverb.php (or wherever your config is):

'max_connections' => 10000, // or higher, depending on your server

Also, make sure you’re running the Reverb daemon with enough workers:

php artisan reverb:start --workers=8

Adjust the number of workers based on your CPU cores.


4. Update and Restart

Since you already upgraded to 1.5.0, make sure you:

  • Cleared any old cache:
    php artisan config:clear
    php artisan cache:clear
    
  • Restarted all related services (Reverb, Redis, nginx, PHP-FPM); see the sketch below
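A sketch of that restart round, assuming systemd units; the unit names are assumptions (PHP-FPM in particular is often versioned, e.g. php8.3-fpm on Debian/Ubuntu):

# Find the actual unit names first
systemctl list-units --type=service | grep -Ei 'reverb|redis|nginx|fpm'

# Then restart them (names below are assumptions)
sudo systemctl restart reverb redis-server nginx php8.3-fpm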

5. Long-Term Monitoring

Set up monitoring for:

  • System resource usage (htop, top)
  • Redis connections
  • Nginx/PHP-FPM logs
  • Reverb logs

This will help you spot trends before the server goes down.
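A low-tech way to do this while waiting for the next incident is a periodic snapshot of the key numbers. A sketch, reusing the pgrep pattern and port from earlier; the [t] in the pattern keeps pgrep -f from matching any shell whose command line contains the literal string:

# Append a snapshot every 60 seconds
while true; do
  date
  echo "reverb fds: $(ls /proc/$(pgrep -f 'reverb:star[t]' | head -n1)/fd | wc -l)"
  redis-cli info clients | grep connected_clients
  ss -Htn state established '( sport = :8080 )' | wc -l
  sleep 60
done >> /tmp/reverb-monitor.log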


6. Summary Checklist

  • Increase open file limits (ulimit)
  • Monitor and increase Redis max clients if needed
  • Tune nginx and PHP-FPM for higher concurrency
  • Test with Cloudflare proxy off
  • Ensure Reverb config allows enough connections/workers
  • Monitor logs and resources

If the problem persists after all of the above, please share:

  • Output of ulimit -n
  • Redis info clients
  • Relevant nginx and Reverb logs
  • Your Reverb and nginx configuration files

This will help further diagnose the issue.


Let me know if you need help with any specific configuration!

AffData

The AI gave a good indication. For anyone coming here in the future, here's what helped:

  1. Increase the limits for nginx:
worker_rlimit_nofile 10000;
worker_connections 10000;
  2. Increase the limit for supervisor in /usr/lib/systemd/system/supervisor.service (apply it as sketched below):
LimitNOFILE=20000
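The systemd change only takes effect after a daemon reload and a restart, and supervisor's children only inherit the new limit when they're respawned. A sketch of applying and verifying it:

sudo systemctl daemon-reload
sudo systemctl restart supervisor

# Confirm supervisord picked up the new limit
grep 'open files' /proc/$(pgrep -x supervisord | head -n1)/limits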

One caveat: the proposed max_connections option doesn't exist in Reverb's config, nor anywhere in the code of the Reverb GitHub repo.
