Advent 2023: Forms – Matthew Weier O’Phinney

The first thing I was tasked with after I moved full time to the Zend Framework team (17 years ago! Yikes!) was to create a forms library.
Like all the work I did for ZF in the early days, I first created a working group, gathered requirements, and prioritized features.
There were a lot of requests:

  • Ability to normalize values
  • Ability to validate values
  • Ability to get validation error messages
  • Ability to render html forms, and have customizable markup
  • Ability to do nested values
  • Ability to handle optional values
  • Ability to report missing values

and quite a lot more.
But those are some of the things that stuck out that I can remember off the top of my head.

Zend_Form was considered a big enough new feature that we actually bumped the version from 1.0 to 1.5 to call it out.

And, honestly, in hindsight, it was a mistake.

A mistake?

Considering the timeframe when I was developing Zend_Form, it was actually a good effort, and it’s still one of those features that folks tell me sold them on the framework.
But within a year or two, I was able to see some of the drawbacks.

I first realized the issues when we started integrating the Dojo Toolkit with ZF.
We ended up having to create, first, a suite of Dojo-specific form elements, and, second, a whole bunch of Dojo-specific decorators, which were what we used to render form elements.
While the library gave us this flexibility, I saw a few issues:

  • Duplication.
    We had multiple versions of the same form elements, and it was actually possible to get the wrong version for your form context.
    And with duplication comes increased maintenance: any time we fixed an issue in one element, we had to check to see if the same issue existed with the Dojo versions, and fix them there as well.
  • Javascript.
    One of the reasons for integrating Dojo was to allow doing fun things like client-side validation; this allowed giving early feedback, without a round-trip to the server.
    But this also meant that we had validation logic duplicated between the server-side and client-side logic.
    And more interestingly: the form might be sent as a request by javascript, instead of a standard form request, which meant that we only needed to validate it, and then serialize the validation status and messages.
    Basically, all the rendering aspects of the form were irrelevant in this scenario.
    Which brings me to…
  • APIs.
    Around this time, APIs started trending.
    It would be a few years before REST became popular and commonly understood by developers, but folks were starting to see that we’d be needing them for the nascent mobile application markets, and that they were going to be a useful way to conduct business-to-business transactions.
    Once you start having APIs in the mix, a library centered on web forms becomes less interesting.

By the time we started planning for version 2 of ZF, we realized we’d need to reconsider how we did forms.
The first step we took was splitting the validation aspect from the form aspect, and created Zend\InputFilter to address the first, and Zend\Form to address the second.
Input filters encapsulated how to filter, normalize, and validate incoming data.
Forms composed an input filter, and then provided hints for the view layer to allow rendering the elements.
This separation helped a fair bit: you could re-use input filters for handling API or JS requests easily, while the form layer helped with rendering html forms.
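To make that split concrete, here is a minimal sketch of using a ZF2-era input filter on its own, with no rendering involved. It is written from memory of the Zend\InputFilter API, so treat the exact class and method names as an approximation rather than an excerpt from the framework docs.

<?php

use Zend\Filter\StringTrim;
use Zend\InputFilter\Input;
use Zend\InputFilter\InputFilter;
use Zend\Validator\EmailAddress;

// Describe how a single field is filtered and validated; no HTML concerns here.
$email = new Input('email');
$email->getFilterChain()->attach(new StringTrim());
$email->getValidatorChain()->attach(new EmailAddress());

$inputFilter = new InputFilter();
$inputFilter->add($email);

// The same definition can back an HTML form POST, a JS request, or an API call.
$inputFilter->setData($_POST);
if (! $inputFilter->isValid()) {
    $messages = $inputFilter->getMessages(); // e.g. serialize these as JSON for an API client
}
$clean = $inputFilter->getValues();

A Zend\Form instance would then compose an input filter like this one and add only the hints the view layer needs to render elements.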

But I still feel we didn’t get it right:

  • Our validation component and our input filter component were each stateful.
    When you performed validation, each would store the values, validation status, and validation messages as part of the state.
    This makes re-use within the same request more difficult (it was not uncommon to use the same validator with multiple elements, and this now required multiple instances), makes testing more difficult, and makes it harder to tell whether the instance represents the definition or the results of validation.
  • The longer I’ve worked in web development, the more I’ve realized that while the html generation aspects of these form libraries are useful for prototyping, they inevitably cannot be used for the final production code.
    Designers, user experience experts, and accessibility developers will each want different features represented, and these will never fall into the defaults the framework provides.
    Even if the framework provides customization features, the end result is more programming effort.
    It’s almost always better to code the html markup in your templates, and then feed state (e.g., element IDs

Truncated by Planet PHP, read more at the original (another 7939 bytes)

Cutting through the static – Larry Garfield

Cutting through the static

Static methods and properties have a storied and controversial history in PHP. Some love them, some hate them, some love having something to fight about (naturally).

In practice, I find them useful in very narrow situations. They’re not common, but they do exist. Today, I want to go over some guidelines on when PHP developers should, and shouldn’t, use statics.

In full transparency, I will say that the views expressed here are not universal within the PHP community. They do, however, represent what I believe to be the substantial majority opinion, especially among those who are well-versed in automated testing.

Continue reading this post on PeakD.

Larry
29 November 2023 – 4:28pm

Automating the backslash prefixing for native PHP function calls – Raphael Stolt

After reading the blog post Why does a backslash prefix improve PHP function call performance by Jeroen Deviaene, I was looking for a way to automate it for the codebase of the Lean Package Validator, to shave off some milliseconds for its CLI. The PHP Coding Standards Fixer has a rule named native_function_invocation which does exactly that.
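As a hypothetical illustration (not taken from the Lean Package Validator codebase), this is the kind of change the rule makes:

<?php

namespace App; // hypothetical namespace, purely for illustration

$byteCounts = [512, 2048];

// Before the fixer: an unqualified call, so the engine first looks for
// App\array_sum() before falling back to the global function.
$total = array_sum($byteCounts);

// After the fixer (native_function_invocation): the backslash resolves the call
// directly to the global function, skipping the namespace lookup and allowing
// compile-time optimisations for some internal functions.
$total = \array_sum($byteCounts);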

Configuring the PHP Coding Standards Fixer

.php-cs-fixer.php
<?php

use PhpCsFixer\Config;
use PhpCsFixer\Finder;

$finder = Finder::create()
    ->in([__DIR__, __DIR__ . DIRECTORY_SEPARATOR . 'tests']);

$rules = [
    'psr_autoloading' => false,
    '@PSR2' => true,
    'phpdoc_order' => true,
    'ordered_imports' => true,
    'native_function_invocation' => [
        'include' => ['@internal'],
        'exclude' => ['file_put_contents'],
    ],
];

$cacheDir = \getenv('HOME') ? \getenv('HOME') : __DIR__;

$config = new Config();

return $config->setRules($rules)
    ->setFinder($finder)
    ->setCacheFile($cacheDir . '/.php-cs-fixer.cache');

To make this rule executable, I needed to add the --allow-risky=yes option to the PHP Coding Standards Fixer calls in the two dedicated Composer scripts shown next.

composer.json
"scripts": { "lpv:test": "phpunit", "lpv:test-with-coverage": "export XDEBUG_MODE=coverage && phpunit --coverage-html coverage-reports", "lpv:cs-fix": "php-cs-fixer --allow-risky=yes fix . -vv || true", "lpv:cs-lint": "php-cs-fixer fix --diff --stop-on-violation --verbose --dry-run --allow-risky=yes", "lpv:configure-commit-template": "git config --add commit.template .gitmessage", "lpv:application-version-guard": "php bin/application-version --verify-tag-match=bin", "lpv:application-phar-version-guard": "php bin/application-version --verify-tag-match=phar", "lpv:static-analyse": "phpstan analyse --configuration phpstan.neon.dist", "lpv:validate-gitattributes": "bin/lean-package-validator validate"
},

After running the lpv:cs-fix Composer script for the first time, the tests of the system under test started failing due to file_put_contents being prefixed with a backslash, so I had to exclude it, as shown in the PHP Coding Standards Fixer configuration above.

Using JSX on the server as a template engine – Evert Pot

The React/Next.js ecosystem is spinning out of control in terms of magic and complexity.
The stack has failed to stay focused and simple, and it’s my belief
that software stacks that are too complex and magical must eventually fail,
because as sensibilities around software design change they will be unable to
adapt to those changes without cannibalizing their existing userbase.

So while React/Next.js may be relegated to the enterprise and legacy systems in
a few years, they completely transformed front-end development and created ripple
effects in many other technologies. One of many great ideas stemming from this
stack is JSX. I think JSX has a chance to stay relevant and useful beyond
React/Next.

One of its use cases is server-side templating. I’ve been using JSX as a
template engine to replace template engines like EJS and
Handlebars, and more than once people were surprised this was possible
without bulky frameworks such as Next.js.

So in this article I’m digging into what JSX is, where it comes from and how one
might go about using it as a simple server-side html template engine.

What is JSX?

JSX is an extension to the Javascript language, and was introduced with React.
It usually has a .jsx extension and it needs to be compiled to Javascript.
Most build tools people already use, like ESbuild, Babel, Vite, etc. all
support this natively or through a plugin.
Typescript also natively supports it, so if you use Typescript you can just start
using it without adding another tool.

It looks like this:

const foo = <div>
  <h1>Hello world!</h1>
  <p>Sup</p>
</div>;

As you can see here, some html is directly embedded into Javascript, without
quotes. It’s all syntax. It lets you use the full power of Javascript, such
as variables and loops:

const user = 'Evert';
const todos = [
  'Wash clothes',
  'Do dishes',
];

const foo = <div>
  <h1>Hello {user}</h1>
  <ul>
    {todos.map( todo => <li>{todo}</li>)}
  </ul>
</div>;

It has a convention to treat tags that start with a lowercase character such
as <h1> as output, but if the tag starts with an uppercase character,
it’s a component, which usually is represented by a function:

function HelloWorldComponent(props) { 

Truncated by Planet PHP, read more at the original (another 32023 bytes)

Announcing Crell/Serde 1.0.0 – Larry Garfield

Announcing Crell/Serde 1.0.0

Submitted by Larry on 9 November 2023 – 7:39pm

I am pleased to announce that the trio of libraries I built while at TYPO3 have now reached a fully stable release. In particular, Crell/Serde is now the most robust, powerful, and performant serialization library available for PHP today!

Serde is inspired by the Rust library of the same name, and driven almost entirely by PHP Attributes, with entirely pure-function object-oriented code. It’s easy to configure, easy to use, and rock solid.
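For a rough idea of what using it looks like, here is a minimal sketch based on Serde's documented SerdeCommon entry point; check the library's README for the exact API.

<?php

use Crell\Serde\SerdeCommon;

// A plain PHP object; Serde reads its properties (and any Serde attributes)
// to work out how to serialize and deserialize it.
class Point
{
    public function __construct(
        public readonly int $x = 0,
        public readonly int $y = 0,
    ) {}
}

$serde = new SerdeCommon();

$json  = $serde->serialize(new Point(3, 4), format: 'json'); // {"x":3,"y":4}
$point = $serde->deserialize($json, from: 'json', to: Point::class);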

Xdebug Update: October 2023 – Derick Rethans

Xdebug Update: October 2023

In this monthly update I explain what happened with Xdebug development in the past month. These are normally published on the first Tuesday on or after the 5th of each month.

Patreon and GitHub supporters will get it earlier, around the first of each month.

You can become a patron or support me through GitHub Sponsors. I am currently 35% towards my $2,500 per month goal, which is set to allow continued maintenance of Xdebug.

If you are leading a team or company, then it is also possible to support Xdebug through a subscription.

In the last month, I spent around 32 hours on Xdebug, with 25 hours funded.

Towards Xdebug 3.3

In last month’s update I explained that I was investigating whether Xdebug can make use of PHP’s Observer API. It turns out that it can be used to intercept function calls, but it only intercepts include or require calls if the included file contains code, and not just class definitions. As Xdebug treats include and friends as actual function calls, I can unfortunately not rely solely on the Observer API.

In the wake of checking out the Observer API, I also thought I should have a look at some performance improvements. For example, I noticed that Xdebug would always collect local variables with each function call. This is only really needed when showing local variables, in stack traces, or through the step debugger.

Another optimisation that I worked on was how function breakpoints are checked. These breakpoints trigger when a function gets called, or returned from. This is not a feature that many people use often, but Xdebug would always do some work to be able to compare the configured breakpoints against a normalised function name reference.

These two optimisations together resulted in a 20% reduction in CPU instructions (roughly equivalent to execution time) with the front page of WordPress’ demo site.

The third optimisation that I worked on is related to file/line breakpoints. Xdebug would evaluate whether an IDE has set a line breakpoint on the current line. For this, it had to loop over all the existing breakpoints and compare them. Each additional breakpoint would be checked after every statement, meaning that the number of breakpoints affected the running time of the script.

My optimisation alleviates this by moving the check on whether line breakpoints exist for a function or method to the function call itself. If no breakpoints are set in the whole function, then Xdebug skips the check for line breakpoints after each statement. This shifts the factor of performance loss for having line breakpoints from the number of statements to the number of function calls. This shift results in a roughly 25% performance boost with only four line breakpoints enabled.

After attending IPC and speaking to fellow Xdebug users, a question came up about long running scripts. Right now, Xdebug’s step debugger can only be activated when the script starts or by calling xdebug_connect_to_client(). Breakpoints can also only be configured when Xdebug is waiting for a command to continue a script (after a step, an existing breakpoint, or at the start of the script). While a script is running, you can not interrupt the execution to break, or add new breakpoints.
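For context, the existing escape hatch looks roughly like this. This is a hedged sketch: fetch_next_job() and process_job() are placeholder helpers, and only the xdebug_* calls are real Xdebug functions.

<?php

// A long-running worker that opts into step debugging mid-run.
while ($job = fetch_next_job()) {
    if (file_exists('/tmp/debug-this-worker')) {
        // Ask Xdebug to connect back to the IDE now, rather than at script start.
        if (\function_exists('xdebug_connect_to_client') && \xdebug_connect_to_client()) {
            \xdebug_break(); // pause here so the IDE can add further breakpoints
        }
    }

    process_job($job);
}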

This led me to experiment with a control socket, currently only available on Linux. Through this socket you can ask Xdebug for information, or request a breakpoint so that you can then use your IDE to add more breakpoints, or inspect the current state.

At the moment, I have implemented the “show me some information” feature, which allows me to show the running PHP scripts, with PID, memory usage (in kb), running time, and Xdebug version. The xdebug command line tool allo

Truncated by Planet PHP, read more at the original (another 2513 bytes)