
Ligonsker's avatar

How to test my Soketi server?

I'm not an expert on servers, so my setup below might be wrong. I set up three Ubuntu 20.04 Server virtual machines as follows (and I'm not sure that's enough for the test either, because I'm also using php artisan serve, which runs PHP's built-in development server, so it may not be suitable for load testing):

  1. 192.168.10.10 - Soketi Server - 2 Cores, 2GB RAM
  2. 192.168.10.11 - Laravel Project Server - 2 Cores, 2GB RAM
  3. 192.168.10.12 - Test Server - 1 Core, 1GB RAM

The Soketi server is the one that handles the WebSocket connections on 192.168.10.10.

The Laravel server has an endpoint at 192.168.10.11/websocket which opens a connection using Laravel Echo, so every time you hit this endpoint a new WebSocket connection is created.

I tried using wrk. I installed it on the Test server and then ran

# using 2 threads, 400 connections, for 30 seconds
sudo wrk -t2 -c400 -d30s http://192.168.10.11/websocket

And it does run: I can see the Laravel server receiving all these requests, but the number of connections shown in the Soketi metrics stays at 0. So I'm not sure it acts like a real user. (I even tried adding a keep-alive header for the test:

sudo wrk -t2 -c400 -d30s -H "Connection: Keep-Alive" http://192.168.10.11/websocket

But that didn't do anything either.)

So I am not sure whether it benchmarks the Soketi server itself or just the Laravel server.

I also tried the siege package, but it behaves the same as wrk.

Do I have any options left? Also, do I need to configure anything else for this, or is my current setup enough for basic measurements?

(For example, every browser tab I open on the 192.168.10.11/websocket endpoint shows up as a new connection in the Soketi metrics at 192.168.10.10:9601/metrics.)

0 likes
22 replies
Ligonsker's avatar

@Sinnbeck Gonna try and update! Ty! Btw, do you have any idea why wrk is meant to run concurrently, yet the Soketi metrics show no connections, unlike when a new tab is opened?

Ligonsker's avatar

@Sinnbeck Update: I am getting an error:

worker error, id: 1 TypeError [Error]: Cannot read properties of undefined (reading 'undefined')

With this basic yml config file:

config:
  target: "http://192.168.10.11"
  phases:
    - duration: 60
      arrivalRate: 5
      name: Warm up
    - duration: 120
      arrivalRate: 5
      rampTo: 50
      name: Ramp up load
    - duration: 600
      arrivalRate: 50
      name: Sustained load
  scenarios:    
      flow:
        - get:
              url: "/"

Any idea why? The indentation looks correct to me.

Sinnbeck's avatar

@Ligonsker not quite sure. I might give it a shot tomorrow when I at a computer. I'll let you know if I can get it to work

1 like
Ligonsker's avatar

@Sinnbeck Ty. Btw I have an update, I managed to make it work with the following config:

config:
  target: "http://192.168.10.11"
  phases:
    - duration: 60
      arrivalRate: 5
      name: Warm up
    - duration: 120
      arrivalRate: 5
      rampTo: 50
      name: Ramp up load
    - duration: 600
      arrivalRate: 50
      name: Sustained load
scenarios:
  - name: "Test"
    flow:
      - get:
          url: "/"
      - think: 5

And my PHP server does get these connections, but it's not increasing the soketi_connected gauge.

Example output from my PHP server:

[Thu Mar 24 19:25:04 2022] 192.168.10.11:53004 Accepted
[Thu Mar 24 19:25:04 2022] 192.168.10.11:53004 Closing
[Thu Mar 24 19:25:04 2022] 192.168.10.11:53006 Accepted
[Thu Mar 24 19:25:04 2022] 192.168.10.11:53006 Closing
[Thu Mar 24 19:25:04 2022] 192.168.10.11:53008 Accepted
[Thu Mar 24 19:25:04 2022] 192.168.10.11:53008 Closing

So it simply performs the GET part but never does the WebSocket handshake. I saw they also have a dedicated WebSocket test; I thought that since this endpoint creates a connection it would work in my case, but I was wrong.

Ligonsker's avatar

@Sinnbeck I changed to:

config:
  target: "ws://192.168.10.10:6001/key"
  phases:
    - duration: 20
      arrivalRate: 10
scenarios:
  - engine: "ws"
    flow:
      - think: 600 # do nothing for 10m and disconnect

Then I got error 404.

So I tried without the key:

config:
  target: "ws://192.168.10.10:6001"
  phases:
    - duration: 20
      arrivalRate: 10
scenarios:
  - engine: "ws"
    flow:
      - think: 600 # do nothing for 10m and disconnect

But I get error 200:

errors.Unexpected server response: 200: ........................................ 200
vusers.created: ................................................................ 200
vusers.created_by_name.0: ...................................................... 200
vusers.failed: ................................................................. 200

So it does find the endpoint, but it doesn't get the response it expects (a WebSocket upgrade should answer 101 Switching Protocols, not 200). Maybe ws://192.168.10.10:6001/app-key is not the correct format for the key, and it's being treated as a plain endpoint?

Could it be that since Soketi is a wrapper, it does not work with Artillery?

Sinnbeck's avatar

@Ligonsker I will try it on my Docker setup that uses a proxy to bind it to port 80 and see if that makes a difference :)

1 like
Sinnbeck's avatar
Best Answer

Just an idea: open the browser on a page with Echo. Open developer tools -> Network and set the filter to WS. Now refresh, copy the exact URL that shows up, and use it in Artillery.

1 like
Ligonsker's avatar

@Sinnbeck Oh my god you are a genius!

I copied:

ws://192.168.10.10:6001/app/my-key

and woah! The metrics started exploding:

# HELP soketi_connected The number of currently connected sockets.
# TYPE soketi_connected gauge
soketi_connected{app_id="app-id",port="6001"} 143 // and going up!

So you were right in the first place; I was just missing /app in the URL.

Btw, what did they mean by 100K requests per second? Simply sending 100K WebSocket messages, which in my case means dispatching the broadcast event 100K times per second (in total, across all connected users)?

Sinnbeck's avatar

@Ligonsker happy to help. Where do you see 100k requests? In the docs or the guide?

1 like
Sinnbeck's avatar

@Ligonsker They first test with the C++ version, and then note that it might be unfair to compare C++ to Node. So they rerun the same test with the JS version (the thing Soketi is built upon) and get around 8.5x the speed of Fastify:

This test pitted a native C++ server against a scripted Node.js one, so of course the outcome was given. Well, not really. If I run the same test using µWebSockets.js for Node.js, the numbers are a stable 75k req/sec (for cleartext). That’s still 8.5x that of Node.js with Fastify and just shy of 75% of µWebSockets itself.

1 like
Ligonsker's avatar

@Sinnbeck Yes, but how did they perform 100K requests? What requests? What payload? Thanks to you I managed to do the first part of the test, which is creating 500 concurrent connections. But for the requests part (75K, not 100K, in my case), how do I do that? Is it also done with Artillery, or is that a separate part?

Ligonsker's avatar

@Sinnbeck But when they say 75K HTTP requests per second, do they necessarily mean 75K users? Or perhaps fewer users who each do a few actions per second, where some actions may even trigger other HTTP requests? Nonetheless, it's still plenty of requests. Of course I don't have my own app right now, let alone 100K requests :D, but I'm experimenting because it helps me learn how it all works and how it integrates with the code. I like to see the whole picture, not just the coding part.

For example, where I work (and we have an expert there :D), we had a problem uploading files where tens of thousands of rows needed to be written to the database, and it took way too long, many minutes. Then we realized it was because a "heavy" Eloquent query was being used to write a large number of rows. When we switched to an efficient Query Builder / raw SQL query, it drastically reduced the time and compute, and in turn the costs.

So it made me want to understand how the entire system works, not just the coding part
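Either reading works arithmetically. As a rough sketch (the 500-connection figure is from the earlier Artillery run; the even split across connections is my assumption, not something the benchmark states):

```javascript
// Back-of-envelope: 75k requests/sec does not have to mean 75k users.
// Spread evenly over the 500 concurrent connections from the test,
// each connection only needs this many messages per second:
const targetPerSec = 75_000;
const connections = 500;
console.log(targetPerSec / connections); // → 150
```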

Sinnbeck's avatar

@Ligonsker My guess is that their test is a "see how awesome our product is" benchmark :) And the 100K, I would guess, is users sending requests at the same time (if I understand it correctly). That's at least how it works for regular HTTP requests. I'm still kinda new to websockets, so I could of course be wrong :)

1 like
Ligonsker's avatar

@Sinnbeck So in my case it's not a good test scenario, because I have two separate VMs, one for the WebSockets and one for the PHP server. If I wanted to do a similar test, would I need to install Soketi on the same VM as the PHP server, then run both the Artillery test that opens 200 concurrent WebSocket connections and a load test that bombards the PHP server with tens of thousands of HTTP requests per second, and watch the VM's stats?

Sinnbeck's avatar

@Ligonsker I think you can test it like that.

First, check the "Testing a real life sample" section in the guide. That should help you test listeners.

And then try to bombard the server with messages from Laravel broadcasting. I don't think you can benchmark this as such, but at least you can see how it responds.
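As an alternative to broadcasting from Laravel, Artillery's ws engine can also push messages over the sockets itself with a send step. A sketch, assuming the same app key as before and a protocol-level pusher:ping payload, which a Pusher-compatible server like Soketi should answer with a pong:

```yaml
config:
  target: "ws://192.168.10.10:6001/app/my-key"
  phases:
    - duration: 60
      arrivalRate: 10
scenarios:
  - engine: "ws"
    flow:
      - send: '{"event":"pusher:ping","data":{}}'
      - think: 30
```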

1 like
