Does OAuth2 have a usability problem? (yes!) – Evert Pot

I read an interesting thread on Hacker News in response to a post:
“Why is OAuth still hard in 2023”. The post and comments bring up a lot
of real issues with OAuth. The article ends with a pitch for
the author’s product Nango, which advertises support for
OAuth2 flows for 90+ APIs, justifying the existence
of the product.

We don’t need 90 browsers to open 90 websites, so why
is this the case with OAuth2? In a similar vein, the popular passport.js
project has 538(!) modules for authenticating with various services,
most of which likely use OAuth2. All of these are NPM packages.

Anyway, I’ve been wanting to write this article for a while. It’s not
a direct response to the Nango article, but it’s a similar take with
a different solution.

My perspective

I’ve been working on an OAuth2 server for a few years now, and last
year I released an open source OAuth2 client.

Since I released the client, I’ve gotten several feature requests and
contributions from users of the library. A few of note (illustrated with an example after the list):

  • Allowing client_id and client_secret to be sent in request bodies
    instead of the Authorization header.
  • Allowing ‘extra parameters’ to be sent with some OAuth2 flows. Many servers,
    including Auth0, require these.
  • Allowing users to add their own HTTP headers, for the same reason.
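
To make this concrete, here is a rough sketch of such a token request (the endpoint, client credentials, custom header and the extra “audience” parameter are all made-up placeholders, not tied to any particular server):

# Credentials sent in the request body instead of an Authorization header,
# plus an extra parameter and a custom HTTP header that some servers expect.
curl -X POST https://auth.example.org/token \
  -H "X-Custom-Header: some-value" \
  -d grant_type=client_credentials \
  -d client_id=my-client-id \
  -d client_secret=my-client-secret \
  -d audience=https://api.example.org/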

What these have in common is that there are a lot of different OAuth2 servers
that each want things in a slightly different, specific way.

I kind of expected this. It was never going to be enough to just implement
OAuth2. The library will only really work once people start trying it with different
servers, run into mild incompatibilities, and workarounds for those get added.

Although I think OAuth2 is pretty well defined, the full breadth of specs and
implementations makes it so that it’s not enough (as an API developer) to just
tell your users: “We use OAuth2”.

For the typical case, you might have to tell them something like this (a concrete sketch follows the list):

  • We use OAuth2.
  • We use the authorization_code flow.
  • Your client_id is X.
  • Our ‘token endpoint’ is Y.
  • Our ‘authorization endpoint’ is Z.
  • We require PKCE.
  • Requests to the “token” endpoint require credentials to be sent in a body.
  • Any custom non-standard extensions.
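
As a rough sketch of how those pieces fit together (all URLs and values below are placeholders, not a real API): the user is first sent to the authorization endpoint with the client_id and a PKCE challenge, and the returned code is then exchanged at the token endpoint with the credentials in the request body.

# Step 1: send the user to the authorization endpoint (Z) with the client_id (X) and PKCE
https://auth.example.org/authorize?response_type=code&client_id=X&redirect_uri=https://app.example.org/callback&code_challenge=CHALLENGE&code_challenge_method=S256&state=RANDOM

# Step 2: exchange the returned code at the token endpoint (Y),
# with credentials in the request body rather than an Authorization header
curl -X POST https://auth.example.org/token \
  -d grant_type=authorization_code \
  -d code=RETURNED_CODE \
  -d redirect_uri=https://app.example.org/callback \
  -d client_id=X \
  -d client_secret=SECRET \
  -d code_verifier=VERIFIER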

To some extent this is by design. The OAuth2 spec calls itself: “The OAuth 2.0
Authorization Framework”. It’s not saying it is the protocol, but rather it’s
a set of really good building blocks to implement your own authentication.

But for users who want to use generic OAuth2 tooling this is not ideal, not
only because of the amount of information that needs to be shared, but also
because it requires users of your API to be familiar with all these terms.

A side-effect of this is that API vendors that use OAuth2 are more likely to
roll their own SDKs, so they can insulate users from these implementation details.
It also creates a market for products like Nango and Passport.js.

Another result is that I see many people inventing their own authentication flows
with JWT and refresh tokens from scratch, even though OAuth2 would be a good fit.
Most people only need a small part of OAuth2, but to understand which small
part you need, you’ll have to wade through and understand a dozen IETF RFC
documents, some of which are still drafts.

Sidenote: OpenID Connect adds another dimension on top of this. It builds on
OAuth2 and adds many features, along with another set of dense technical specs
that are (in my opinion) even harder to read.

OAuth2 as a framework is really good and very successful. But it’s not as good
at being a generic protocol that people can write generic code for.

Solving the setup issue

There’s a nice OAuth2 feature called “OAuth 2.0 Authorization Server Metadata”,
defined in RFC 8414. This is a JSON document sitting at a predictable URL,
https://your-server/.well-known/oauth-authorization-server, that can tell
clients:

  • W
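
To give an idea of the shape of such a document (field names as defined in RFC 8414; the values are hypothetical):

# Fetch the metadata document from its well-known location
curl https://auth.example.org/.well-known/oauth-authorization-server
# {
#   "issuer": "https://auth.example.org",
#   "authorization_endpoint": "https://auth.example.org/authorize",
#   "token_endpoint": "https://auth.example.org/token",
#   "grant_types_supported": ["authorization_code", "refresh_token"],
#   "token_endpoint_auth_methods_supported": ["client_secret_post"],
#   "code_challenge_methods_supported": ["S256"]
# }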

Truncated by Planet PHP, read more at the original (another 8947 bytes)

Create a production infrastructure for dockerized PHP Apps on GCP [Tutorial Part 10] – Pascal Landau

In the tenth part of this tutorial series on developing PHP on Docker we will
create a production infrastructure for a dockerized PHP application on GCP using multiple
VMs and managed services for redis and mysql.

What will you learn?
We’ll modify the setup introduced in the previous tutorial Deploy dockerized PHP Apps to production on GCP via docker compose as a POC and create an individual VM for each of our docker services. For the PHP application containers we’ll keep using Compute Instance VMs, and for mysql and redis we’ll use GCP managed products.

You’ll learn how to create the corresponding infrastructure on GCP via the UI as well as through the gcloud CLI.
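
As a small taste of the CLI side (the instance name, zone, machine type and image below are illustrative placeholders, not the values used later in the tutorial), creating one of the per-service VMs boils down to something like:

# Hypothetical example: create a small Compute Instance VM for one docker service
gcloud compute instances create php-fpm-vm \
  --zone=europe-west1-b \
  --machine-type=e2-small \
  --image-family=debian-11 \
  --image-project=debian-cloud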

All code samples are publicly available in my
Docker PHP Tutorial repository on Github.
You’ll find the branch with the final result of this tutorial at
part-10-create-production-infrastructure-php-app-gcp.

CAUTION: With this codebase it is no longer possible to deploy! Please refer to the next
part
Deploy dockerized PHP Apps to production – using multiple VMs and managed mysql and redis instances from GCP
for the code to enable deployments again.

All published parts of the Docker PHP Tutorial are collected under a dedicated page at
Docker PHP Tutorial. The previous part was
Deploy dockerized PHP Apps to production on GCP via docker compose as a POC.

If you want to follow along, please subscribe to the RSS feed
or via email to get automatic notifications when the next part comes out 🙂



Table of contents



Introduction

In Deploy dockerized PHP Apps to production on GCP via docker compose as a POC
we created a single Compute Instance VM, provisioned it with docker compose and
ran our full docker compose setup on it. In other words: All containers ran on the same VM
(that had to be reachable from the internet).

docker compose (POC) based infrastructure on GCP

Truncated by Planet PHP, read more at the original (another 48489 bytes)

Using GCP Redis Memorystore instances (create/connect/delete) – Pascal Landau

In this blog post I’ll summarize my experience with GCP Redis Memorystore instances. Memorystore
is the managed in-memory datastore solution from Google Cloud Platform and was mentioned in
Deploy dockerized PHP Apps to production on GCP via docker compose as a POC
as the “better” way to deal with in-memory datastores in a dockerized application (compared to
running an in-memory datastore via docker).

What will you learn?
I’ll explain the basic steps to create a fresh Redis instance, show different ways to connect to it (locally “from your laptop” via SSH tunnel and from a VM within GCP) and finally how to delete the instance. Every process is done through the Cloud Console UI and recorded as a short video as a visual aid. As in the GCP “primer” tutorial, this article ends with the commands to achieve the same things also via the gcloud CLI.
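
As a preview of the “from your laptop” case: since a Memorystore instance is only reachable from inside its VPC network, the usual trick is an SSH tunnel through a VM in that network (the VM name, zone and instance IP below are placeholders):

# Forward local port 6379 to the Memorystore instance through a VM in the same network
gcloud compute ssh tunnel-vm --zone=europe-west1-b -- -N -L 6379:10.0.0.3:6379

# In a second terminal, talk to Redis as if it were running locally
redis-cli -h 127.0.0.1 -p 6379 ping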



Table of contents



Setup Memorystore

GCP Cloud Console Memorystore UI

The managed solution for in-memory datastores from GCP is called
Memorystore and provides multiple datastore technologies –
including redis. In the Cloud Console UI it is
managed via the Memorystore UI that allows us to
create and manage instances.



Create a new redis instance

To get started, we need to enable the following APIs:

Creating a new instance from the
Create a redis instance UI
is pretty straightforward and well documented in the
GCP Redis Guide: Creating and managing Redis instances.
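
For reference, the gcloud equivalent of this UI flow (with an illustrative instance name and region; the flags mirror the settings listed below) looks roughly like:

# Hypothetical example: a 1 GB "Basic" tier instance attached to the default network
gcloud redis instances create my-redis-instance \
  --region=europe-west1 \
  --tier=basic \
  --size=1 \
  --network=default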

We’ll use the following settings:

  • Tier Selection: For testing purposes, I recommend choosing the “Basic” option (this will
    also disable the “Read Replicas”)
  • Capacity: Enter “1”
  • Set up connection > Network: Select the network that the VMs are located in – default in
    my case
  • Additional Conf

Truncated by Planet PHP, read more at the original (another 18097 bytes)

So, I had a heart attack and bypass surgery – Brian Moon

Hey y’all. So, I had a heart attack and bypass surgery. Here is the full story.

Friday (3/31) and Saturday (4/1) nights I had chest pain that I thought was acid reflux. On Sunday morning, the pain returned. I checked my blood pressure. It was 170/103. I took some anxiety meds, gas x, and aspirin. An hour later it was still high.

So we went to the ER. Blood work showed troponin in my blood. “When heart muscles become damaged, troponin is sent into the bloodstream. As heart damage increases, greater amounts of troponin are released in the blood.” I was then admitted for observation and further testing.

On Monday (4/3) morning they performed an echocardiogram. There were some abnormalities. They then performed a heart catheter. I had 90% blockage in at least one artery. And blockages in several others. I was immediately transferred to UAB hospital.

Later that day I met with a surgeon. After discussing it with him, we decided to do bypass surgery. The long term success rate with this surgery at my age is better than the alternatives. Surgery was booked for Thursday, April 6.

On Tuesday (4/4) and Wednesday (4/5) I just hung out at the hospital. I could have another heart attack at any moment. Going home was not an option. Friends and family visited. I had some hard conversations with family about what to do “just in case”. Those conversations don’t faze me. And the family I spoke to were very practical about it as well.

Early Thursday morning, before dawn, I broke down a little bit. The reality that I might not wake up was hitting me. I knew it was not likely. These procedures are done every day. My doctor would probably do several that day alone. Still, it could have happened. It’s normal for me to have these emotional outbursts alone. The first time I remember it happening was with my great-grandmother’s death when I was 15. It’s been the same with all my other grandparents’ deaths as well. It’s just how I deal with it.

Then it was time to go. The family that was there followed us down to the waiting room. Once I was in pre-op and settled, they said I could have one person come back. They let two come back. The nurse said we seemed like solid people. I don’t remember a lot about that time. I do remember Deedra deciding to read my chart. Haha. The staff walking by was confused. Then I was off. While they were still rolling me in, I started to feel woozy. And then black.

I wake up very confused with some voices I know and others I don’t know. I understand their instructions but don’t know how to follow them. I need to breathe. Ok. There is something in my mouth. Oh it’s the ventilator of course. They can’t remove it until I breathe. There are two people I know there. My ex-wife, and mother of my six children, Robin, coaching me on what to do. Good, I need that right now. And Amy, my platonic life partner, speaking to me softly and encouraging me to breathe. Man, I really need that approach too. If you had asked me what two people I would want in that moment, I would probably not have chosen either of them. And yet, they were the perfect combination at that time. Some (I assume) nurse said “good job” and out came the ventilator. Based on when I knew I went into the OR and how long the surgery took, I would say this is around 2pm. People say they visited with me in recovery. I believe them. Still, I really don’t recall much until 5AM on Friday morning.

Friday was confusing. It’s like my mind was still trying to figure out what happened to my body.

Between Friday and Tuesday (4/11) I had good times and bad times. I eventually got to go home. That is where I am now. My strength has slowly been recovering. The last device attached to my body came off yesterday. It will be several weeks of very limited activity. Mostly I just can’t lift things or drive. Slowly that will be allowed more and more. Then once all restrictions are removed, I can start building up my strength again.

Using GCP MySQL Cloud SQL instances (create/connect/delete) – Pascal Landau

In this blog post I’ll summarize my experience with GCP MySQL Cloud SQL instances. Cloud SQL
is the managed relational database solution from Google Cloud Platform and was mentioned in
Deploy dockerized PHP Apps to production on GCP via docker compose as a POC
as the “better” way to deal with databases in a dockerized application (compared to running a
database via docker).

What will you learn?
I’ll explain the basic steps to create a fresh MySQL instance, show different ways to connect to it (Cloud Shell, locally “from your laptop” and from a VM within GCP) and finally how to delete the instance. Every process is done through the Cloud Console UI and recorded as a short video as a visual aid. As in the GCP “primer” tutorial, this article ends with the commands to achieve the same things also via the gcloud CLI.
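
As a preview of the Cloud Shell route (the instance name is a placeholder): once the instance exists, a single gcloud command opens a mysql prompt against it.

# Hypothetical example: connect to the instance from Cloud Shell as the root user
gcloud sql connect my-mysql-instance --user=root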



Table of contents



Setup Cloud SQL

GCP Cloud Console Cloud SQL UI

The managed solution for relational databases from GCP is called
Cloud SQL and provides multiple database technologies –
including mysql. In the Cloud Console UI it is managed
via the SQL UI that allows us to create and manage
instances.



Create a new mysql instance

To get started, we need to enable the following APIs:

Creating a new instance from the
Create a MySQL instance UI
is pretty straightforward and well documented in the
GCP MySQL Guide: Create instances,
though there are some configuration options under “Customize your instance” that I want to mention:

  • Machine type > Machine type: For testing purposes, I recommend choosing a “Shared core”
    option here (e.g. 1 vCPU, 0.614 GB) to keep the costs to a minimum
  • Connections > Instance IP assignment: For now we’ll go with a “Pu

Truncated by Planet PHP, read more at the original (another 23533 bytes)

Setting up Git Bash / MINGW / MSYS2 on Windows – Pascal Landau

In this article I’ll document my process for setting up Git Bash / MINGW /
MSYS2 on Windows
including some additional configuration (e.g. installing make and applying
some customizations via .bashrc).



Table of contents



Introduction

When I was learning git I started with the fantastic
Git for Windows package, which is maintained in the
git-for-windows/git Github repository and comes with
Git Bash, a shell that offers a
Unix-terminal like experience. It uses
MINGW and MSYS2 under the hood
and provides not only git but also a bunch of other common Linux utilities like

bash
sed
awk
ls
cp
rm
...

I believe the main “shell” is actually powered by MINGW64 as
that’s what will be shown by default:

Git Bash / MINGW shell

Thus, I will refer to the tool as MINGW shell or Git Bash throughout this article.

I have been using MINGW for almost 10 years now, and it is still my go-to shell for Windows. I
could just never warm up to WSL, because the file sharing performance between WSL and native
Windows files was (is?) horrible – but that’s a different story.



How to install and update Git Bash / MINGW / MSYS2 via Git for Windows

You can find the latest Git for Windows installation package directly at the homepage of
https://gitforwindows.org/. Older releases can be found on
Github in the
Releases section of the git-for-windows/git repository.

Follow the instructions in the
How to Install Git Bash on Windows article on git-tower.com
to get a guided tour through the setup process.

After the installation is finished, I usually create a desktop icon and assign the shortcut
CTRL + ALT + B (for “bash”) so that I can open a new shell session conveniently via keyboard.

Git Bash desktop icon and shortcut



Update MINGW

To update Git for Windows, you can simply run

git update-git-for-windows

See also the
Git for Windows FAQ under “How do I update Git for Windows upon new releases?”

Git for Windows comes with a tool to check for updates and offer to install them. Whether or not you enabled auto-updates during installation, you can manually run git update-git-for-windows.

Truncated by Planet PHP, read more at the original (another 19960 bytes)

Xdebug Update: March 2023 – Derick Rethans

Xdebug Update: March 2023

In this monthly update I explain what happened with Xdebug development in the past two months. These are normally published on the first Tuesday on or after the 5th of each month.

Patreon and GitHub supporters will get it earlier, around the first of each month.

You can become a patron or support me through GitHub Sponsors. I am currently 34% (7% less than two months ago) towards my $2,500 per month goal, which is set to allow continued maintenance of Xdebug.

If you are leading a team or company, then it is also possible to support Xdebug through a subscription.

In the last month, I spent 18 hours on Xdebug, with 22 hours funded. Sponsorships through GitHub sponsors have now also drastically declined. Unless this is reversed, I would find it hard to keep putting in the effort to make sure Xdebug continues to be updated for newer PHP versions. It certainly makes me think hard about where to direct my efforts.

This is also why I have not been as diligent with these update reports, or as active in resolving issues and bugs.

Xdebug Videos

I have published one new video in the last two months:

I have continued writing scripts for videos about Xdebug 3.2’s features, and am also intending to make a video about “Running Xdebug in Production”, and the updated “xdebug.client_discovery_header” feature (from Xdebug 3.1).

Let me know what you’d like to see!

You can find all previous videos on my YouTube channel.