We are now ready to write the benchmark code. Take a look at Example 13-27.
Example 13-27. factorial_benchmark.pl

use strict;
use Benchmark;
use Book::Factorial ();

my $top = 100;

timethese(300_000, {
    recursive_perl => sub { Book::Factorial::factorial_recursive_perl($top) },
    iterative_perl => sub { Book::Factorial::factorial_iterative_perl($top) },
    recursive_c    => sub { Book::Factorial::factorial_recursive_c($top)    },
    iterative_c    => sub { Book::Factorial::factorial_iterative_c($top)    },
});
As you can see, this looks just like normal Perl code. The Book::Factorial module is loaded (assuming that you have installed it system-wide) and its functions are used in the test.
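As a reminder, the pure-Perl implementations being timed here were developed earlier in the chapter. They look roughly like the following sketch (shown for orientation only; it is not a verbatim copy of the module, and the *_c variants are the XS/C functions, which are not reproduced here):

package Book::Factorial;
use strict;

# Recursive pure-Perl factorial: multiply n by factorial(n-1) down to 1.
sub factorial_recursive_perl {
    my $n = shift;
    return 1 if $n < 2;
    return $n * factorial_recursive_perl($n - 1);
}

# Iterative pure-Perl factorial: a simple accumulating loop.
sub factorial_iterative_perl {
    my $n = shift;
    my $result = 1;
    $result *= $_ for 2 .. $n;
    return $result;
}

1;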
We showed and analyzed these results at the beginning of our discussion, but we repeat them here for completeness:
panic% ./factorial_benchmark.pl
Benchmark: timing 300000 iterations of iterative_c, iterative_perl, recursive_c, recursive_perl...
   iterative_c:  0 wallclock secs ( 0.47 usr +  0.00 sys =  0.47 CPU)
   recursive_c:  2 wallclock secs ( 1.15 usr +  0.00 sys =  1.15 CPU)
iterative_perl: 28 wallclock secs (26.34 usr +  0.00 sys = 26.34 CPU)
recursive_perl: 75 wallclock secs (74.64 usr +  0.11 sys = 74.75 CPU)
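If you would rather have Benchmark compute the comparison for you, the same test can be run through its cmpthese() function, which reports each variant's rate and a chart of relative percentage differences. A minimal sketch (cmpthese() is not exported by default, so it must be requested explicitly; the actual numbers will vary by machine):

use strict;
use Benchmark qw(cmpthese);
use Book::Factorial ();

my $top = 100;

# Same subroutines as in Example 13-27, but the report is a comparison
# chart (iterations per second plus percentage differences) rather than
# raw wallclock/CPU timings.
cmpthese(300_000, {
    recursive_perl => sub { Book::Factorial::factorial_recursive_perl($top) },
    iterative_perl => sub { Book::Factorial::factorial_iterative_perl($top) },
    recursive_c    => sub { Book::Factorial::factorial_recursive_c($top)    },
    iterative_c    => sub { Book::Factorial::factorial_iterative_c($top)    },
});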
If you want to run the benchmark after the module has been built and tested but before it is installed, use the blib pragma from within the build directory; it adds the staging directories blib/lib and blib/arch to @INC so the uninstalled module can be found:
panic% perl -Mblib factorial_benchmark.pl
 