
Conversation

@TimWolla (Member)

For:

<?php

function plus1($x) {
	return $x + 1;
}

$array = array_fill(0, 100, 1);

$count = 0;
for ($i = 0; $i < 100_000; $i++) {
	$count += count(array_map(plus1(...), $array));
}

var_dump($count);

With this change, the script above runs ~1.11× faster:

Benchmark 1: /tmp/test/before -d opcache.enable_cli=1 /tmp/test/test6.php
  Time (mean ± σ):     172.2 ms ±   0.5 ms    [User: 167.8 ms, System: 4.2 ms]
  Range (min … max):   171.6 ms … 173.1 ms    17 runs

Benchmark 2: /tmp/test/after -d opcache.enable_cli=1 /tmp/test/test6.php
  Time (mean ± σ):     155.1 ms ±   1.3 ms    [User: 150.6 ms, System: 4.2 ms]
  Range (min … max):   154.2 ms … 159.3 ms    18 runs

Summary
  /tmp/test/after -d opcache.enable_cli=1 /tmp/test/test6.php ran
    1.11 ± 0.01 times faster than /tmp/test/before -d opcache.enable_cli=1 /tmp/test/test6.php

With the tracing JIT enabled, the speedup grows to ~1.77×:

Benchmark 1: /tmp/test/before -d opcache.enable_cli=1 -d opcache.jit=tracing /tmp/test/test6.php
  Time (mean ± σ):     166.9 ms ±   0.6 ms    [User: 162.7 ms, System: 4.1 ms]
  Range (min … max):   166.1 ms … 167.9 ms    17 runs

Benchmark 2: /tmp/test/after -d opcache.enable_cli=1 -d opcache.jit=tracing /tmp/test/test6.php
  Time (mean ± σ):      94.5 ms ±   2.7 ms    [User: 90.4 ms, System: 3.9 ms]
  Range (min … max):    92.5 ms … 103.1 ms    31 runs

Summary
  /tmp/test/after -d opcache.enable_cli=1 -d opcache.jit=tracing /tmp/test/test6.php ran
    1.77 ± 0.05 times faster than /tmp/test/before -d opcache.enable_cli=1 -d opcache.jit=tracing /tmp/test/test6.php
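
Conceptually, the optimization rewrites the array_map() call with a first-class callable into a plain loop over the input array at compile time. A rough userland equivalent of the loop body above (illustrative only; the actual change emits opcodes rather than PHP source) is:

$result = [];
foreach ($array as $key => $val) {
	$result[$key] = plus1($val);
}
$count += count($result);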

@bwoebi (Member) commented Jan 14, 2026

I like this!
Though, in ZEND_TYPE_ASSERT I'd avoid fetching the function name / the zend_internal_function in the non-error path and instead store the expected type directly (e.g. in the lower 16 bits of extended_value, with the operand number in the upper 16 bits).
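
For illustration, the packing could look like this (a userland sketch only; the actual code would live in the engine's C sources, and the field widths are just the ones suggested above):

// Operand number in the upper 16 bits, expected type in the lower 16 bits.
function pack_extended_value(int $operandNum, int $expectedType): int {
	return ($operandNum << 16) | ($expectedType & 0xFFFF);
}

// Only unpacked on the error path, when the message has to be built.
function unpack_extended_value(int $extendedValue): array {
	return [
		'operand_num'   => $extendedValue >> 16,
		'expected_type' => $extendedValue & 0xFFFF,
	];
}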


I assume something similar is also possible for closures passed directly as an argument, like array_map(fn($x) => $x + 1, $array), inlining the closure body itself (as long as it doesn't create new variables)? That is probably a much more common scenario.

@bwoebi (Member) commented Jan 14, 2026

On that note, I wonder whether it would make sense to expose this as an API: add a function pointer to zend_internal_function, and whenever a function with that pointer is encountered during compilation, call it with the AST of its arguments so it can emit opcodes itself (or return false to fall back to normal compilation). That avoids centralizing this in the compiler (the file is big enough :-P) and would make it extensible; extensions could play around with this too.
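
Purely as a userland analogy of that protocol (everything here is hypothetical, names included; the real hook would be a C function pointer on zend_internal_function):

// A registry of per-function "compile handlers". A handler may rewrite the
// call from its argument ASTs, or decline by returning false, in which case
// the normal compilation path is used.
$compileHandlers = [
	'array_map' => function (array $argAsts) {
		if (count($argAsts) !== 2) {
			return false; // decline anything but the simple two-argument form
		}
		return ['kind' => 'inlined_foreach', 'args' => $argAsts];
	},
];

function compileCall(string $name, array $argAsts, array $handlers): array {
	if (isset($handlers[$name]) && ($result = $handlers[$name]($argAsts)) !== false) {
		return $result; // the handler produced its own representation
	}
	return ['kind' => 'normal_call', 'name' => $name, 'args' => $argAsts];
}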

@TimWolla (Member Author)

I wonder whether it would make sense to expose this as an API …

It probably would for all the reasons that you mentioned.

@TimWolla (Member Author) commented Jan 14, 2026

I assume something similar is also possible for closures passed directly as an argument, like array_map(fn($x) => $x + 1, $array), inlining the closure body itself (as long as it doesn't create new variables)? That is probably a much more common scenario.

I assume getting scoping right gets complicated quickly. Even preserving the Closure and compiling it as:

$c = fn ($x) => $x + 1;
foreach ($array as $key => $val) $result[$key] = $c($val);

is not obviously safe to me (e.g. with regard to variable capturing and scoping).
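
As a hypothetical illustration of the kind of issue I mean (not taken from this PR):

<?php
$x = 'important';
$array = [1, 2, 3];

// If the closure body were spliced directly into the caller's scope,
// its parameter $x would collide with the outer $x, so the compiler
// would have to rename or otherwise isolate the closure's variables
// before inlining.
$result = array_map(fn ($x) => $x + 1, $array);

var_dump($x); // must still print string(9) "important"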


I've opted to support only CALLABLE_CONVERT (first-class callable syntax) for now, since that should already be pretty useful once PFA (partial function application) lands, and it doesn't come with the concerns above.
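
For example (illustrative only; using the plus1() function from the description above), the first form below is handled while the second is not:

// Covered: first-class callable syntax (compiled via CALLABLE_CONVERT).
$a = array_map(plus1(...), $array);

// Not covered (yet): a closure passed inline.
$b = array_map(fn ($x) => $x + 1, $array);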

@TimWolla (Member Author)

Though, in ZEND_TYPE_ASSERT I'd avoid fetching the function name / the zend_internal_function in the non-error path and instead store the expected type directly (e.g. in the lower 16 bits of extended_value, with the operand number in the upper 16 bits).

Unclear whether this did anything for performance, but done.

@staabm (Contributor) commented Jan 14, 2026

Will this also work for

array_map(
  function($ar) { return $ar + 1; }, 
  $array
)

?

@TimWolla (Member Author)

Will this also work for

No(t with this PR). This is what Bob raised in the second part of his first comment.

