
m7vm7v
Level 51

Unexpected Performance Anomalies Across Dev, QA, and Prod Environments

I'm encountering unexpected performance issues when running tests in three different environments (Dev, QA, and Prod). I expected the production server to be the fastest, followed by QA, then the Dev machine. However, my tests show the opposite: QA is about 10 times slower than Dev, and Prod is twice as slow as QA. I'm running a series of tests via php artisan tinker covering database queries, PHP computation, and Laravel route dispatches. The servers have different configurations:

  • Dev: Vagrant (Ubuntu 24, PHP 8.3, Laravel 11, 2GB RAM, local MySQL)
  • QA: AWS t2.micro (Ubuntu 22, PHP 8.3, Laravel 11, 1 CPU, 1GB RAM) with RDS db.t3.micro
  • Prod: AWS t3a.medium (Ubuntu 22, PHP 8.1, Laravel 9, 2 CPUs, 2GB RAM) with RDS db.t4g.small

The benchmark function is identical across environments, but the results are perplexing. I would appreciate any advice on what might be causing these discrepancies, and why Prod and QA are performing so much slower than Dev.

function testPerformance($callback, $tries = 10) {
    $totalTime = 0;

    for ($i = 0; $i < $tries; $i++) {
        $startTime = microtime(true);

        $callback();

        $endTime = microtime(true);
        $executionTime = $endTime - $startTime;
        $totalTime += $executionTime;
    }

    return $totalTime / $tries;
}

testPerformance(fn () => DB::table('users')->where('id', 2)->get(), 100000)

Dev:  0.00013900178909302
QA:   0.0010569172644615, 0.001080708463192
Prod: 0.0019645609879494, 0.0019414114141464

testPerformance(fn () => array_sum(range(1, 100000)), 100000)

Dev:  0.00032799326896667
QA:   0.0011941353178024, 0.0011884169006348
Prod: 0.0027074522924423, 0.0027898060584068

testPerformance(fn () => Route::dispatch(Request::create('/login', 'GET')), 10000)

Dev:  0.00050271918773651
QA:   0.0059862355232239, 0.0061600740671158
Prod: 0.0088438971757889, 0.0086923565387726

Would you say these tests aren't meaningful as 'performance' tests? Could the servers be misconfigured (I provision them with Ansible playbooks, so they should be set up identically)? Any advice would be much appreciated! Many thanks.

LaryAI
Level 58

The performance discrepancies you're experiencing across different environments can be attributed to several factors. Here are some potential causes and solutions to investigate:

1. Hardware and Instance Types

  • Dev: Vagrant (local machine) with 2GB RAM and local MySQL.
  • QA: AWS t2.micro with 1 CPU, 1GB RAM, and RDS db.t3.micro.
  • Prod: AWS t3a.medium with 2 CPUs, 2GB RAM, and RDS db.t4g.small.

The differences in hardware and instance types significantly affect results. Both t2 and t3a instances are burstable: a t2.micro earns CPU credits slowly and is throttled to a baseline of roughly 10% of one core once its credits are exhausted, while t3a instances default to "unlimited" mode and bill for extra bursting instead. A local Vagrant VM on a developer workstation typically has a much faster CPU per core than either.

2. Database Latency

  • Dev: Local MySQL database.
  • QA and Prod: AWS RDS instances.

Network latency between your application server and the RDS instance can introduce delays. Ensure that the RDS instances are in the same region as your application servers to minimize latency.
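
To separate network latency from query cost, you can reuse the testPerformance() helper from the question. A minimal sketch to run in tinker on each environment:

// "SELECT 1" does almost no work server-side, so its average time is
// mostly the network round trip between the app server and MySQL/RDS.
$roundTrip = testPerformance(fn () => DB::select('SELECT 1'), 1000);
$fullQuery = testPerformance(fn () => DB::table('users')->where('id', 2)->get(), 1000);

printf("round trip: %.6fs, full query: %.6fs\n", $roundTrip, $fullQuery);

If the two averages are close, most of the measured "query time" is network round trip rather than database work.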

3. PHP and Laravel Versions

  • Dev and QA: PHP 8.3, Laravel 11.
  • Prod: PHP 8.1, Laravel 9.

Different PHP and Laravel versions can have performance implications. PHP 8.3 and Laravel 11 might have optimizations that are not present in PHP 8.1 and Laravel 9.
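
Before comparing numbers, it's worth confirming from inside each environment exactly which versions are running; both values are available at runtime:

// Print runtime versions; app()->version() returns the Laravel framework version.
printf("PHP %s, Laravel %s\n", PHP_VERSION, app()->version());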

4. Configuration Differences

Even though you are using Ansible playbooks, there might be subtle configuration differences (a quick check is sketched after this list). Ensure that:

  • All environments have the same PHP configurations (e.g., opcache, memory_limit).
  • Database configurations (e.g., connection pooling, query caching) are consistent.
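
Because php artisan tinker runs PHP on the command line, opcache only applies to these benchmarks if opcache.enable_cli is on (it is off by default). A quick sketch to dump the most relevant settings on each environment (extend the setting list as needed):

// ini_get() returns false when a setting (or the extension providing it) is absent.
foreach (['opcache.enable', 'opcache.enable_cli', 'opcache.jit', 'memory_limit', 'xdebug.mode'] as $key) {
    $value = ini_get($key);
    printf("%-20s %s\n", $key, $value === false ? '(not set)' : $value);
}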

5. Resource Limits and Throttling

AWS burstable instances, especially small ones like the t2.micro, are throttled to their CPU baseline once burst credits run out. Check the CPUCreditBalance and CPUCreditUsage metrics in CloudWatch while the benchmarks are running.
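
You can also read the metric programmatically. The sketch below assumes the AWS SDK for PHP (aws/aws-sdk-php) is installed; the region and instance ID are placeholders to adjust:

use Aws\CloudWatch\CloudWatchClient;

// Fetch the recent CPU credit balance for one instance.
$cloudWatch = new CloudWatchClient([
    'region'  => 'us-east-1',   // assumption: use your instances' region
    'version' => 'latest',
]);

$result = $cloudWatch->getMetricStatistics([
    'Namespace'  => 'AWS/EC2',
    'MetricName' => 'CPUCreditBalance',
    'Dimensions' => [['Name' => 'InstanceId', 'Value' => 'i-0123456789abcdef0']], // placeholder
    'StartTime'  => strtotime('-6 hours'),
    'EndTime'    => time(),
    'Period'     => 300,        // one datapoint per 5 minutes
    'Statistics' => ['Average'],
]);

foreach ($result['Datapoints'] as $point) {
    printf("%s  %.1f credits\n", $point['Timestamp']->format('H:i'), $point['Average']);
}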

6. Benchmarking Methodology

The benchmarking function you provided is a good start, but consider the following improvements:

  • Warm-up Runs: Perform a few warm-up runs before measuring to account for JIT compilation and caching.
  • Isolate Tests: Run each test in isolation to avoid interference from other processes.

Improved Benchmarking Function

Here's an improved version of your benchmarking function:

// Same as the original, but with warm-up runs so the first measured
// iteration doesn't pay one-time costs (autoloading, opcache, connections).
function testPerformance($callback, $tries = 10) {
    $totalTime = 0;

    // Warm-up runs (not measured)
    for ($i = 0; $i < 5; $i++) {
        $callback();
    }

    for ($i = 0; $i < $tries; $i++) {
        $startTime = microtime(true);

        $callback();

        $endTime = microtime(true);
        $executionTime = $endTime - $startTime;
        $totalTime += $executionTime;
    }

    return $totalTime / $tries;
}

Example Usage

echo testPerformance(fn () => DB::table('users')->where('id', 2)->get(), 100000);
echo testPerformance(fn () => array_sum(range(1, 100000)), 100000);
echo testPerformance(fn () => Route::dispatch(Request::create('/login', 'GET')), 10000);
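
One further refinement worth considering (a sketch; the helper name is made up here): hrtime(), available since PHP 7.3, is a monotonic nanosecond clock with better resolution than microtime(true), and reporting the median instead of the mean keeps a single outlier run (a GC pause, a burst-credit dip) from skewing the result.

// Variant: monotonic nanosecond timing plus a median instead of a mean.
function testPerformanceMedian(callable $callback, int $tries = 10): float {
    $times = [];

    for ($i = 0; $i < $tries; $i++) {
        $start = hrtime(true);                     // nanoseconds, monotonic
        $callback();
        $times[] = (hrtime(true) - $start) / 1e9;  // convert to seconds
    }

    sort($times);
    return $times[intdiv(count($times), 2)];       // median run time
}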

Additional Recommendations

  1. Profiling Tools: Use a profiler such as Blackfire or Xdebug to see where the time actually goes, but make sure Xdebug is disabled while benchmarking, since it slows everything down.
  2. Database Indexes: Ensure your queries are optimized and appropriate indexes are in place; users.id is presumably the primary key, so the sample query should already be indexed.
  3. Caching: Use Laravel's built-in caches in QA and Prod (php artisan config:cache and php artisan route:cache) and consider caching hot queries.

By systematically addressing these areas, you should be able to identify and mitigate the performance issues across your environments.
