Technical debt is over-used – Larry Garfield

Submitted by Larry on 22 May 2023 – 6:26pm

The term “technical debt” gets thrown around a lot. Way too much, in fact. Part of that is because it has become a euphemism for “code I don’t like” or “code that predates me.” While there are reasons to dislike such code (both good and bad), that’s not what the term “technical debt” was invented to refer to.

So what does it mean? There are several different kinds of “problematic code,” all of which come from different places.

Continue reading this post on PeakD.

Xdebug at 21 – Derick Rethans

Today Xdebug turned 21.

Over these last twenty-one years, Xdebug grew from a little hack to make sure PHP wouldn’t segfault on infinite recursion, into a tool that is used by tens of thousands, if not hundreds of thousands, of PHP developers.

This has mostly been my own work, with very few external contributions. That makes sense, as it is hard both to understand the PHP engine well enough and to program in C, especially because lots of PHP’s internals are not actually documented.

Xdebug has been developed on a mostly voluntary basis, more recently supplemented by some sponsorship via Patreon and GitHub Sponsors, and through funding from Pro and Business supporters.

Apart from a brief period in early 2020, when I was rewriting Xdebug for version 3, this amounts to funding for about 25 hours a month, and it is steadily declining.

Twenty-five hours a month is about the minimum needed to maintain Xdebug for newer versions of PHP, including support for new features, as well as triaging and fixing bugs.

If you have been following my monthly reports, you probably have noticed that there is less activity, including in creating the reports and work logs.

But there are plenty of things that should be done, and several that would make Xdebug even more powerful in streamlining debugging and improving your applications. These are also things I would like to work on.

Current features that (in my opinion) need improvement are:

Profiling

The profiler is old code, and fairly messy. It is only possible to start the profiler for the whole of a request, not for only a part of it.

There are also bugs with cycle detection (function A calls function B, which calls function A again) that need investigating.
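
To make the cycle case concrete, it is plain mutual recursion; a tiny, purely illustrative PHP sketch with made-up function names:

    <?php
    // Purely illustrative: the call pattern that trips up cycle detection.
    // a() calls b(), which calls a() again, until the counter runs out.
    function a(int $n): int
    {
        return $n > 0 ? b($n - 1) : 0;
    }

    function b(int $n): int
    {
        return $n > 0 ? a($n - 1) : 0;
    }

    a(10); // profiling this request produces a -> b -> a cycles in the call graph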

It should be rewritten, which is luckily easier to do now that Xdebug has its new modes architecture.

Code Coverage

The current code coverage feature keeps static information about which functions have lines and paths in the same data structure as the dynamic data collection that is recorded when the script runs. This causes problems.

I have a fix, but it slows down coverage collection by 50%; that slowdown needs addressing before I can merge it.
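
For context, the dynamic collection happens through the userland coverage API. A minimal sketch, assuming xdebug.mode=coverage is enabled for the request:

    <?php
    // Minimal sketch of collecting code coverage with Xdebug 3
    // (assumes xdebug.mode=coverage is set for this request).
    function greet(string $name): string
    {
        if ($name === '') {
            return 'Hello, stranger'; // never reached below, reported as -1
        }
        return "Hello, $name";
    }

    // XDEBUG_CC_UNUSED and XDEBUG_CC_DEAD_CODE request the static parts:
    // which lines are executable at all, and which are dead code.
    xdebug_start_code_coverage(XDEBUG_CC_UNUSED | XDEBUG_CC_DEAD_CODE);

    greet('world');

    // Array of file => [line => status]: 1 executed, -1 executable but not
    // executed, -2 dead code.
    var_dump(xdebug_get_code_coverage());
    xdebug_stop_code_coverage();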

Among the new features that I would like to add to Xdebug are:

Native Path Mappings

Currently Xdebug does not map paths between local files and remote and/or generated files. Some frameworks rewrite developer-written classes into new versions with additional functionality, but under a different file name.

Similarly, Xdebug cannot translate between local and remote paths, which causes confusion, such as in this PhpStorm ticket.

Introducing native path mapping would address both these issues.

An implementation could also make it possible to debug generated PHP files, say from templates. Although PhpStorm has some capabilities for this for Twig and Blade templates, other template systems are not supported.

A native implementation in Xdebug could also make what PhpStorm already does work better and faster.

Time Travel Debugging

My tongue-in-cheek April Fools’ post hinted at this already.

Time Travel Debugging would allow somebody who runs a PHP request (either you as a developer, or a customer running your on-premise application) to record its whole execution, including all intermediate states (variable contents, etc.) and source files.

With all of this available in one file, a wrapper could then play it back as if it were running a live request, through already existing debugging front-ends, such as the one in PhpStorm, the PHP Debug Adapter for Visual Studio Code,

Truncated by Planet PHP, read more at the original (another 1745 bytes)

Does OAuth2 have a usability problem? (yes!) – Evert Pot

I read an interesting thread on Hacker News in response to a post:
“Why is OAuth still hard in 2023”. The post and comments bring up a lot
of real issues with OAuth. The article ends with a pitch for the
author’s product Nango, which advertises support for OAuth2 flows for
90+ APIs, justifying the product’s existence.

We don’t need 90 browsers to open 90 websites, so why
is this the case with OAuth2? In a similar vein, the popular passport.js
project has 538(!) modules for authenticating with various services,
most of which likely use OAuth2. All of these are NPM packages.

Anyway, I’ve been wanting to write this article for a while. It’s not
a direct response to the Nango article, but it’s a similar take with
a different solution.

My perspective

I’ve been working on an OAuth2 server for a few years now, and last
year I released an open source OAuth2 client.

Since I released the client, I’ve gotten several new features and requests,
all contributed by users of the library. A few of note are:

  • Allowing client_id and client_secret to be sent in request bodies
    instead of the Authorization header (illustrated in the sketch after
    this list).
  • Allow ‘extra parameters’ to be sent with some OAuth2 flows. Many servers,
    including Auth0, require these.
  • Allow users to add their own HTTP headers, for the same reason.
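
To illustrate the first item above: the same token request can carry the
client credentials in two different places. A rough sketch using plain curl
in PHP, where the endpoint and credentials are placeholders:

    <?php
    // Sketch: two ways a client can authenticate a client_credentials token
    // request. The endpoint and credentials below are placeholders.
    $tokenEndpoint = 'https://auth.example.org/oauth2/token';

    // Variant 1: credentials in the HTTP Basic Authorization header
    // (the default recommended by RFC 6749 for confidential clients).
    $ch = curl_init($tokenEndpoint);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERPWD        => 'my-client-id:my-client-secret',
        CURLOPT_POSTFIELDS     => http_build_query(['grant_type' => 'client_credentials']),
    ]);
    $response = curl_exec($ch);
    curl_close($ch);

    // Variant 2: credentials in the request body, which some servers insist on.
    $ch = curl_init($tokenEndpoint);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS     => http_build_query([
            'grant_type'    => 'client_credentials',
            'client_id'     => 'my-client-id',
            'client_secret' => 'my-client-secret',
        ]),
    ]);
    $response = curl_exec($ch);
    curl_close($ch);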

What these have in common is that there are a lot of different OAuth2 servers
that want things in a slightly different/specific way.

I kind of expected this. It wasn’t going to be enough to just implement
OAuth2. This library will only really work once people start trying it with
different servers, run into mild incompatibilities, and workarounds for those
get added to the library.

Although I think OAuth2 is pretty well defined, the full breadth of specs and
implementations makes it so that it’s not enough (as an API developer) to just
tell your users: “We use OAuth2”.

For the typical case, you might have to tell them something like this:

  • We use OAuth2.
  • We use the authorization_code flow.
  • Your client_id is X.
  • Our ‘token endpoint’ is Y.
  • Our ‘authorization endpoint’ is Z.
  • We require PKCE.
  • Requests to the “token” endpoint require credentials to be sent in a body.
  • Any custom non-standard extensions.
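
Translated into code, that hand-off usually becomes a configuration blob the
API consumer has to assemble by hand. A hypothetical sketch, in which every
key and value is made up for illustration and does not refer to any
particular library:

    <?php
    // Hypothetical client configuration assembled from the list above.
    // It only shows how much a consumer needs to know before any generic
    // OAuth2 tooling can work against this particular API.
    $config = [
        'grant_type'             => 'authorization_code',
        'client_id'              => 'X',
        'token_endpoint'         => 'https://api.example.org/oauth2/token',     // "Y"
        'authorization_endpoint' => 'https://api.example.org/oauth2/authorize', // "Z"
        'use_pkce'               => true,
        // some servers want credentials in the body instead of the Authorization header
        'credentials_in_body'    => true,
        // plus any custom, non-standard extension parameters
        'extra_parameters'       => ['audience' => 'https://api.example.org'],
    ];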

To some extent this is by design. The OAuth2 spec calls itself: “The OAuth 2.0
Authorization Framework”. It’s not saying it is the protocol, but rather it’s
a set of really good building blocks to implement your own authentication.

But for users who want to use generic OAuth2 tooling this is not ideal.
Not only because of the amount of information that needs to be shared, but also
because it requires users of your API to be familiar with all these terms.

A side-effect of this is that API vendors that use OAuth2 will be more likely
to roll their own SDKs, so they can insulate users from these implementation
details. It also creates a market for products like Nango and Passport.js.

Another result is that I see many people invent their own authentication flows
with JWT and refresh tokens from scratch, even though OAuth2 would be a good fit.
Most people only need a small part of OAuth2, but to understand which small
part you need, you’ll need to wade through and understand a dozen IETF RFC
documents, some of which are still drafts.

Sidenote: OpenID Connect is another dimension on top of this. OpenID Connect builds on
OAuth2 and adds many features and another set of dense technical specs that are
(in my opinion) even harder to read.

OAuth2 as a framework is really good and very successful. But it’s not as good
at being a generic protocol that people can write generic code for.

Solving the setup issue

There’s a nice OAuth2 feature called “OAuth 2.0 Authorization Server Metadata”,
defined in RFC8414. This is a JSON document sitting at a predictable URL:
https://your-server/.well-known/oauth-authorization-server, and can tell
clients:

  • W

Truncated by Planet PHP, read more at the original (another 8947 bytes)

Create a production infrastructure for dockerized PHP Apps on GCP [Tutorial Part 10] – Pascal Landau

In the tenth part of this tutorial series on developing PHP on Docker we will
create a production infrastructure for a dockerized PHP application on GCP using multiple
VMs and managed services for redis and mysql.

What will you learn?
We’ll modify the setup introduced in the previous tutorial Deploy dockerized PHP Apps to production on GCP via docker compose as a POC and create an individual VM for each of our docker services. For the PHP application containers we’ll keep using Compute Instance VMs, and for mysql and redis we’ll use GCP managed products.

You’ll learn how to create the corresponding infrastructure on GCP via the UI as well as through the gcloud cli.

All code samples are publicly available in my
Docker PHP Tutorial repository on GitHub.
You can find the branch with the final result of this tutorial at
part-10-create-production-infrastructure-php-app-gcp.

CAUTION: With this codebase it is no longer possible to deploy! Please refer to the next
part,
Deploy dockerized PHP Apps to production – using multiple VMs and managed mysql and redis instances from GCP,
for the code to enable deployments again.

All published parts of the Docker PHP Tutorial are collected under a dedicated page at
Docker PHP Tutorial. The previous part was
Deploy dockerized PHP Apps to production on GCP via docker compose as a POC.

If you want to follow along, please subscribe to the RSS feed
or via email to get automatic notifications when the next part comes out 🙂



Introduction

In Deploy dockerized PHP Apps to production on GCP via docker compose as a POC
we created a single Compute Instance VM, provisioned it with docker compose and
ran our full docker compose setup on it. In other words: all containers ran on the same VM
(which had to be reachable from the internet).

docker compose (POC) based infrastructure on GCP

Truncated by Planet PHP, read more at the original (another 48489 bytes)

Using GCP Redis Memorystore instances (create/connect/delete) – Pascal Landau

In this blog post I’ll summarize my experience with GCP Redis Memorystore instances. Memorystore
is the managed in-memory datastore solution from Google Cloud Platform and was mentioned in
Deploy dockerized PHP Apps to production on GCP via docker compose as a POC
as the “better” way to deal with in-memory datastores in a dockerized application (compared to
running an in-memory datastore via docker).

What will you learn?
I’ll explain the basic steps to create a fresh Redis instance, show different ways to connect to it (locally “from your laptop” via SSH tunnel and from a VM within GCP) and finally how to delete the instance. Every process is done through the Cloud Console UI and recorded as a short video as a visual aid. As in the GCP “primer” tutorial, this article ends with the commands to achieve the same things also via the gcloud CLI.



Setup Memorystore

GCP Cloud Console Memorystore UI

The managed solution for in-memory datastores from GCP is called
Memorystore and provides multiple datastore technologies –
including redis. In the Cloud Console UI it is
managed via the Memorystore UI that allows us to
create and manage instances.



Create a new redis instance

To get started, we need to enable the following APIs:

Creating a new instance from the
Create a redis instance UI
is pretty straightforward and well documented in the
GCP Redis Guide: Creating and managing Redis instances.

We’ll use the following settings:

  • Tier Selection: For testing purposes, I recommend choosing the “Basic” option (this will
    also disable the “Read Replicas”)
  • Capacity: Enter “1”
  • Set up connection > Network: Select the network that the VMs are located in – default in
    my case
  • Additional Conf

Truncated by Planet PHP, read more at the original (another 18097 bytes)

So, I had a heart attack and bypass surgery – Brian Moon

Hey y’all. So, I had a heart attack and bypass surgery. Here is the full story.

Friday (3/31) and Saturday (4/1) nights I had chest pain that I thought was acid reflux. On Sunday morning, the pain returned. I checked my blood pressure. It was 170/103. I took some anxiety meds, gas x, and aspirin. An hour later it was still high.

So we went to the ER. Blood work showed troponin in my blood. “When heart muscles become damaged, troponin is sent into the bloodstream. As heart damage increases, greater amounts of troponin are released in the blood.” I was then admitted for observation and further testing.

On Monday (4/3) morning they performed an echocardiogram. There were some abnormalities. They then performed a heart catheter. I had 90% blockage in at least one artery. And blockages in several others. I was immediately transferred to UAB hospital.

Later that day I met with a surgeon. After discussing it with him, we decided to do bypass surgery. The long term success rate with this surgery at my age is better than the alternatives. Surgery was booked for Thursday, April 6.

On Tuesday (4/4) and Wednesday (4/5) I just hung out at the hospital. I could have another heart attack at any moment. Going home was not an option. Friends and family visited. I had some hard conversations with family about what to do “just in case”. Those conversations don’t faze me. And the family I spoke to were very practical about it as well.

Early Thursday morning, before dawn, I broke down a little bit. The reality that I might not wake up was hitting me. I knew it was not likely. These procedures are done every day. My doctor would probably do several that day alone. Still, it could have happened. It’s normal for me to have these emotional outbursts alone. The first time I remember it happening was with my great-grandmother’s death when I was 15. It’s been the same with all my other grandparents’ deaths as well. It’s just how I deal with it.

Then it was time to go. The family that was there followed us down to the waiting room. Once I was in pre-op and settled, they said I could have one person come back. They let two come back. The nurse said we seemed like solid people. I don’t remember a lot about that time. I do remember Deedra deciding to read my chart. Haha. The staff walking by was confused. Then I was off. While they were still rolling me in, I started to feel woozy. And then black.

I wake up very confused with some voices I know and others I don’t know. I understand their instructions but don’t know how to follow them. I need to breathe. Ok. There is something in my mouth. Oh, it’s the ventilator of course. They can’t remove it until I breathe. There are two people I know there. My ex-wife, and mother of my six children, Robin, coaching me on what to do. Good, I need that right now. And Amy, my platonic life partner, speaking to me softly and encouraging me to breathe. Man, I really need that approach too. If you had asked me what two people I would want in that moment, I would probably not have chosen either of them. And yet, they were the perfect combination at that time. Some (I assume) nurse said “good job” and out came the ventilator. Based on when I knew I went into the OR and how long the surgery took, I would say this was around 2pm. People say they visited with me in recovery. I believe them. Still, I really don’t recall much until 5AM on Friday morning.

Friday was confusing. It’s like my mind was still trying to figure out what happened to my body.

Between Friday and Tuesday (4/11) I had good times and bad times. I eventually got to go home. That is where I am now. My strength has slowly been recovering. The last device attached to my body came off yesterday. It will be several weeks of very limited activity. Mostly I just can’t lift things or drive. Slowly that will be allowed more and more. Then once all restrictions are removed, I can start building up my strength again.

Using GCP MySQL Cloud SQL instances (create/connect/delete) – Pascal Landau

In this blog post I’ll summarize my experience with GCP MySQL Cloud SQL instances. Cloud SQL
is the managed relational database solution from Google Cloud Platform and was mentioned in
Deploy dockerized PHP Apps to production on GCP via docker compose as a POC
as the “better” way to deal with databases in a dockerized application (compared to running a
database via docker).

What will you learn?
I’ll explain the basic steps to create a fresh MySQL instance, show different ways to connect to it (Cloud Shell, locally “from your laptop” and from a VM within GCP) and finally how to delete the instance. Every process is done through the Cloud Console UI and recorded as a short video as a visual aid. As in the GCP “primer” tutorial, this article ends with the commands to achieve the same things also via the gcloud CLI.



Setup Cloud SQL

GCP Cloud Console Cloud SQL UI

The managed solution for relational databases from GCP is called
Cloud SQL and provides multiple database technologies –
including mysql. In the Cloud Console UI it is managed
via the SQL UI that allows us to create and manage
instances.



Create a new mysql instance

To get started, we need to enable the following APIs:

Creating a new instance from the
Create a MySQL instance UI
is pretty straightforward and well documented in the
GCP MySQL Guide: Create instances,
though there are some configuration options under “Customize your instance” that I want to mention:

  • Machine type > Machine type: For testing purposes, I recommend choosing a “Shared core”
    option here (e.g. 1 vCPU, 0.614 GB) to keep the costs to a minimum
  • Connections > Instance IP assignment: For now we’ll go with a “Pu

Truncated by Planet PHP, read more at the original (another 23533 bytes)