
kleninmaxim:

Improving the process of storing and updating data on the server with a large number of simultaneous processes.

I have hundreds of processes on one server. In the future, I want 1,000 processes, or even 10,000. Each process is connected to a websocket; it receives data and updates a MySQL table with that data. The table has three fields in total: name, body, updated_at. name is the name of the process, body is the incoming data in JSON format, and updated_at is the time of the last update.

On average, data arrives about every 0.1 seconds; occasionally the interval reaches 1 second, and for a couple of processes the websocket data arrives at intervals of more than 100 seconds. Each process updates the body and updated_at fields, matching rows by the name field. One process corresponds to one name value.

The problem is that already with 100 processes, an error occurs several times a day and all processes have to be restarted: PHP Fatal error: Uncaught PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away. The MySQL settings are: 1) max_allowed_packet = 500M, 2) [PDO::ATTR_PERSISTENT => true].

Please tell me if there is a way to improve this structure, or what tool would be better suited (for example, not MySQL but something else) for storing, updating, and reading data from hundreds or thousands of processes, so that it is much faster and more reliable. The server in use has 2 vCPUs and 2 GB of memory. The server is not powerful, but I think that if we improve the process described above, it will be possible to run 1000 processes even on such a weak machine.

Here is a simplified version of the code; about 100 such PHP scripts are running:

<?php

use WebSocket\Client;

require dirname(__DIR__) . '/vendor/autoload.php';

if (!isset($argv[1])) die('Give right arguments!' . PHP_EOL);

$parameter = $argv[1];

// connection to socket
$client = new Client(
	'wss://websocket.com:9999/' . $parameter,
	['timeout' => 200]
);

// connection to db
try {

	$connect = new PDO(
		'mysql:host=' . MYSQL_HOST . ';port=' . MYSQL_PORT . ';dbname=' . MYSQL_DB,
		MYSQL_USER,
		MYSQL_PASSWORD,
		[PDO::ATTR_PERSISTENT => true]
	);

	$connect->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

} catch (PDOException $e) {

	echo '[ERROR]' . $e->getMessage() . PHP_EOL;

	throw new Exception();

}

while (true) {

	$body = json_decode($client->receive(), true); // blocks until the websocket delivers the next message

	$sth = $connect->prepare(
		/** @lang sql */
		"UPDATE `table` SET `body` = :body, `updated_at` = :updated_at WHERE `name` = :name"
	);

	$sth->execute([
		'name' => $parameter,
		'body' => json_encode($body),
		'updated_at' => date('Y-m-d H:i:s')
	]);

}
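
One thing worth noting about the loop above: the statement is re-prepared for every incoming message, and a dropped persistent connection is never recovered, so a single "server has gone away" error kills the worker. Below is a minimal sketch (an editorial illustration, not the original code) of the same loop that prepares the statement once and reconnects when MySQL reports error 2006. It reuses $client, $parameter and the MYSQL_* constants from the script above; connectDb() is a made-up helper.

<?php

// Sketch: prepare once, reconnect and retry on "MySQL server has gone away".
function connectDb(): PDO
{
	$pdo = new PDO(
		'mysql:host=' . MYSQL_HOST . ';port=' . MYSQL_PORT . ';dbname=' . MYSQL_DB,
		MYSQL_USER,
		MYSQL_PASSWORD
	);
	$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

	return $pdo;
}

$sql = "UPDATE `table` SET `body` = :body, `updated_at` = :updated_at WHERE `name` = :name";

$connect = connectDb();
$sth = $connect->prepare($sql);

while (true) {

	$body = json_decode($client->receive(), true); // blocks until data arrives

	$params = [
		'name' => $parameter,
		'body' => json_encode($body),
		'updated_at' => date('Y-m-d H:i:s')
	];

	try {
		$sth->execute($params);
	} catch (PDOException $e) {
		// Error 2006 ("MySQL server has gone away"): reconnect, re-prepare, retry once.
		if (strpos($e->getMessage(), 'gone away') !== false) {
			$connect = connectDb();
			$sth = $connect->prepare($sql);
			$sth->execute($params);
		} else {
			throw $e;
		}
	}
}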
Tray2:

Sounds to me like you're running a DDoS against your database server, so it's no wonder it goes down. Either put some throttling on the incoming connections or look at clustering several MySQL servers.

kleninmaxim:

@Tray2 Thanks for the answer. Yes, it does look painful for MySQL, but limiting the connections is not suitable. Of course, I could set up several MySQL servers, increase the server capacity, and so on, but I would like to optimize further so that it works on low-capacity hardware. I don't insist on MySQL for this; if there is a better and more reliable tool, it can be used. The main thing is that it can withstand many processes.

Sinnbeck:

Maybe show the code that stores the data?

kleninmaxim:

@Sinnbeck It's just an ordinary query, and there are several hundred of them; each one is executed when data arrives on one of the socket connections. That is, the first socket is waiting, the second socket is waiting, the third is waiting. Data may arrive on them at the same time or not; when it comes over a socket, a MySQL UPDATE query is executed. The rows were inserted beforehand, so they only need to be updated.

$sth = self::$connect->prepare(
    "UPDATE `table` SET `body` = :body, `updated_at` = :updated_at WHERE `name` = :name"
);

$sth->execute([
    'name' => $name,
    'body' => json_encode($body),
    'updated_at' => date('Y-m-d H:i:s')
]);
kleninmaxim:

@Tray2 My SQL for this table is as follows (only the table and column names differ):

$sth = self::$connect->prepare(/** @lang sql */ 
            "CREATE TABLE IF NOT EXISTS `table` (
                `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
                `name` VARCHAR(45) NOT NULL,
                `body` JSON NULL,
                `status` TINYINT NULL DEFAULT 0,
                `updated_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
                PRIMARY KEY (`id`),
                UNIQUE INDEX `id_UNIQUE` (`id` ASC),
                UNIQUE INDEX `name_UNIQUE` (`name` ASC)
            );
        ");

The query SELECT * FROM table; takes 0.094 sec to execute (the table has 110 rows).

I ran that query remotely; the queries in the scripts are executed locally.

Snapey:

You could easily have a slow insert process if you need to check for duplicates, or if you have complex indexes or no indexes at all.

I would certainly focus on optimising the sql write.

Also consider adding the data to an in-memory queue, with a separate process responsible for persisting the data, which could use batch inserts of, say, 100 records in a single request.
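
A rough sketch of that idea, assuming the phpredis extension is available: the websocket workers push each update onto a Redis list instead of writing to MySQL directly, and one separate persister process drains the list and writes a single multi-row upsert. The 'updates' list name and the $connect PDO handle are assumptions for illustration.

<?php

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// --- in each websocket worker, instead of the UPDATE query ---
$redis->rPush('updates', json_encode([
	'name'       => $parameter,
	'body'       => $body,
	'updated_at' => date('Y-m-d H:i:s')
]));

// --- in a single persister process, running its own loop ---
while (true) {

	$batch = [];
	while (count($batch) < 100 && ($item = $redis->lPop('updates')) !== false) {
		$batch[] = json_decode($item, true);
	}

	if ($batch !== []) {
		// One statement instead of up to 100 separate UPDATEs; the UNIQUE index
		// on `name` makes ON DUPLICATE KEY UPDATE behave as an update by name.
		$placeholders = implode(',', array_fill(0, count($batch), '(?, ?, ?)'));

		$sql = "INSERT INTO `table` (`name`, `body`, `updated_at`) VALUES $placeholders
			ON DUPLICATE KEY UPDATE `body` = VALUES(`body`), `updated_at` = VALUES(`updated_at`)";

		$params = [];
		foreach ($batch as $row) {
			$params[] = $row['name'];
			$params[] = json_encode($row['body']);
			$params[] = $row['updated_at'];
		}

		$connect->prepare($sql)->execute($params);
	}

	sleep(3); // flush interval and batch size are tunable
}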

kleninmaxim:

@Snapey Thanks for the answer. I don't know how to optimize a simple UPDATE and SELECT (see the code above the previous answer; in another process, in addition to the UPDATE, a simple SELECT ... LIMIT 1 is run every second).

Perhaps a good option is saving to an in-memory queue, but I don't want a serious delay in updating the table: for example, data arrives via the socket, waits in memory for the rest, and after about 3 seconds the MySQL table is updated, but with a single query.

Perhaps there is a way to store the same thing in Redis or some other tool?
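
If only the latest state per process is needed, one option (a sketch, again assuming phpredis) is to keep that state in a Redis hash, so each worker simply overwrites its own field and readers query Redis instead of MySQL. The 'process_state' key is an invented name for illustration.

<?php

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// In the websocket loop, instead of the MySQL UPDATE:
$redis->hSet('process_state', $parameter, json_encode([
	'body'       => $body,
	'updated_at' => date('Y-m-d H:i:s')
]));

// Any consumer can read one process, or all of them at once:
$one = json_decode($redis->hGet('process_state', $parameter), true);
$all = $redis->hGetAll('process_state');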
