SDKS-3794: Add sdk-utilities package #202
Conversation
🦋 Changeset detected

Latest commit: 0765858

The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
View your CI Pipeline Execution ↗ for commit 0765858.
Deployed 8d6d9f6 to https://ForgeRock.github.io/ping-javascript-sdk/pr-202/8d6d9f68f34571dd110afff720ec50ce6b8633f6 branch gh-pages in ForgeRock/ping-javascript-sdk
Codecov Report

All modified and coverable lines are covered by tests ✅
Additional details and impacted files

```
@@ Coverage Diff @@
##             main     #202      +/-   ##
==========================================
+ Coverage   45.18%   48.09%    +2.90%
==========================================
  Files          33       33
  Lines        1547     1547
  Branches      186      154      -32
==========================================
+ Hits          699      744      +45
+ Misses        848      803      -45
==========================================
```
```json
  ],
  "commit": false,
  "fixed": [["@forgerock/*"]],
  "privatePackages": false,
```

We removed this because, when it's true, if a package is `private: true` then changesets ignores it, so you can't add a changeset to it.
For now, we don't want that. Private packages do not get published anyway.
Consider also setting "private": true to prevent accidental publication.
from: https://docs.npmjs.com/cli/v11/configuring-npm/package-json
```
@@ -0,0 +1,6 @@
---
'@forgerock/davinci-client': minor
```

Because this affected both packages, we thought it made sense to include both here.
ryanbas21
left a comment
Just comments / discussion items
```ts
/**
 * Specifies the type of OAuth flow to invoke.
 */
export enum ResponseType {
```

I think this is a good opportunity to refactor this to a union type:

```ts
type ResponseType = 'code' | 'token';
```

We may have to update any code that is using the enum as `ResponseType.token` or `.code` to use the string literal. We should get type errors if this is the case.
I was thinking of getting rid of this enum altogether. The authorize util only uses the 'code' type, so having this type doesn't seem beneficial unless it is moved up to some shared type used by several utils.
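For reference, a minimal sketch of the union-type alternative discussed in this thread. The helper function and its parameter are hypothetical, just to show a call site using the string literal instead of the enum member:

```typescript
// Union type replacing the ResponseType enum ('code' and 'token' are the
// values mentioned in this thread).
type ResponseType = 'code' | 'token';

// Hypothetical call site: consumers pass the string literal directly
// instead of ResponseType.code, and anything else is a compile error.
function buildResponseTypeParam(responseType: ResponseType): string {
  return `response_type=${responseType}`;
}

console.log(buildResponseTypeParam('code')); // response_type=code
```

Passing any other string, e.g. `buildResponseTypeParam('id_token')`, fails type-checking, which surfaces exactly the call sites that need updating.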
```ts
  timeout?: number;
}

interface ConfigOptions {
```

This is my personal take, so @cerebrl may differ in his opinion.
I feel like Config is a separate package. This may expand beyond the scope of this PR, so that is totally fine; I just want to jot my thoughts here.
Config, in my eyes, would make sense as a standalone package. It has types that are used very heavily by many other types. Creating its own standalone package (especially with two different types) could help us organize imports and avoid circular references within packages.
I also don't view Configs as utilities, personally. So while this may expand beyond the scope of this PR, I'm open to hearing thoughts on this.
My thought would be packages/configs (roughly), and we could export two types of configs if we wanted, or just one. Those configs have types, which could have an export of /types for consistency, or even more granular per-config-type exports.
This way, when we are importing across packages, we have a standalone location for config values and types.
```ts
  // middleware?: RequestMiddleware[];
  // realmPath?: string;
  redirectUri?: string;
  scope?: string;
```

On a separate topic, our Config type here is very overloaded. I say this because we are defining config for many different types of applications in one config: OAuth, central login, etc.
What I am about to say will differ from other platforms, so it would require a team discussion.
I agree with the concept of a single Config type; however, it can be a union of possible configurations.
An OAuth config would require different optional properties than a purely central login configuration.
This config type does not represent that well because everything is optional; we aren't enforcing required properties in config variations where they really are required.
Now, this isn't a high-priority feature, but having this type breakdown could provide more intellisense to a developer when they are trying to write a config.
"When do I need x property" is something that may be confusing, and I think a more well-constructed type could be of benefit here.
This would change things that 'inherit' from ConfigOptions, which we would also narrow down further, but it's worth noting this would have a ripple effect on other types and break them.
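A rough sketch of the kind of union being described. All variant and property names here are hypothetical, purely to illustrate how a discriminated union enforces required properties per configuration:

```typescript
// Hypothetical config variants: each one requires the properties that
// actually matter for that flow, instead of making everything optional.
interface OAuthConfig {
  kind: 'oauth';
  clientId: string;     // required for OAuth flows
  redirectUri: string;  // required for OAuth flows
  scope?: string;
}

interface CentralLoginConfig {
  kind: 'central-login';
  serverUrl: string;    // required for central login
}

type SdkConfig = OAuthConfig | CentralLoginConfig;

// Narrowing on the discriminant gives intellisense for the right variant.
function primaryId(config: SdkConfig): string {
  return config.kind === 'oauth' ? config.clientId : config.serverUrl;
}
```

With this shape, omitting `clientId` from an `OAuthConfig` literal is a compile error, while a `CentralLoginConfig` never asks for it, which is the "when do I need x property" intellisense benefit mentioned above.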
I would agree that config can be its own package. Although I've only seen it used by davinci-client package and sdk-utilities package so far, it would seem beneficial to be able to split it up into various types since not all properties are always needed. If this is the approach we are going to take what do you think about renaming the type used here in the utilities package to AuthUrlConfigOptions?
This would require rethinking the AuthUrlConfigOptions so we know what config options it needs; I assume it would probably need mostly what I imagine would be an OAuth config type.
But there's probably a few things to consider if we were to take this approach.
I've already commented out the properties that I don't think are necessary for this util. Lmk if you think it would be safer to keep all the config properties.
```ts
 * Helper class for generating verifier, challenge and state strings used for
 * Proof Key for Code Exchange (PKCE).
 */
abstract class PKCE {
```

As part of this work, do we think we want to keep PKCE as a class?
There is nothing inherently bad about this class; I'm just curious. This could also be done in a new PR/ticket later on.
I don't have a strong preference either way.
One rule that I don't think I explicitly communicated is that, especially in the lower-level utilities and effects modules, classes are not an allowed pattern. Utilities can only be side-effect free, stateless functions.
Let's be really critical about how these get implemented in the new SDK.
```json
  }
],
"nx": {
  "addTypecheckTarget": false
```

Curious about this; I'll have to look it up. It's confusing to me that this is false but we do still run typecheck.
I'm almost curious if we should just remove it.
```diff
   "declarationMap": true,
-  "skipLibCheck": true
+  "skipLibCheck": true,
+  "baseUrl": "."
```
hmm, why did we add this back? Was this me? I think we don't need it.
If it adds back automatically with nx then that's fine, just not sure about it.
We added it because I was getting a build error when importing utils into the davinci-client package
```
nx run @forgerock/davinci-client:build
Compiling TypeScript files for project "@forgerock/davinci-client"...
error TS5090: Non-relative paths are not allowed when 'baseUrl' is not set. Did you forget a leading './'?
```
Ah, ok, fair enough; I forgot about that.
ryanbas21
left a comment
Also, can we add an empty CHANGELOG.md and link it to the root changelog (https://github.com/ForgeRock/ping-javascript-sdk/blob/main/CHANGELOG.md)?
cerebrl
left a comment
Part one of my review :)
```json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "moduleResolution": "Bundler",
```

@ryanbas21 I can't remember where we landed on this, but I think I remember that we wanted to avoid the use of "Bundler" here, yeah?

We do. Ultimately, I think I have to pair with AJ on this aspect of the PR to see the issues and see if we can remove it.
@ancheetah I tested the sdk-utilities package; it seems we can move this to NodeNext module/moduleResolution for the `tsconfig.lib` file.
It seems that the davinci-client is still reliant on the SDK for the AsyncConfigOptions.
This would be something we would move into a Config package, I'd imagine.
@ryanbas21 I switched to nodenext but should imports still be from .js?
e.g. `import { CustomPathConfig } from './url.utils.types.js';`
> @ryanbas21 I switched to nodenext but should imports still be from .js? e.g. `import { CustomPathConfig } from './url.utils.types.js';`
Yes, this is a requirement for Node module resolution when using 16+
```ts
 * Returns the base URL including protocol, hostname and any non-standard port.
 * The returned URL does not include a trailing slash.
 */
function getBaseUrl(url: URL): string {
```

I'm not sure about this function's value. I feel like this whole function could be replaced with this:

```ts
const baseUrl = new URL(fullUrl).origin;
```
Neat trick! One thing that I think URL.origin does differently than our utility is return the string "null" if the protocol/scheme is not one of the special schemes listed here. Whereas our utility always returns the protocol regardless of what it is. However, this is probably an edge case and I'm still for using URL.origin. Just wondering if we need to do anything special to handle the "null" string case.
Interesting point. I would think that since this baseUrl is exclusively used for constructing API calls to a server, the scheme would always be http or https. So, all the schemes listed there should be more than enough for our use.
@ryanbas21 Do you think I'm possibly missing something?
I did not know that about URL.origin. I would agree that I don't see other schemes really as an issue.
The [port](https://developer.mozilla.org/en-US/docs/Web/API/URL/port) is only included if it's not the default for the protocol.
Worth noting, but I also don't see this as an issue.
One thing to note here is that these utilities won't need to support the legacy SDK, so they don't need to be perfectly backwards compatible. We can see this as an opportunity to improve these utilities, or get rid of them entirely if they have a similar analogue that is simpler and native.
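For reference, a quick sketch of the `URL.origin` behaviors discussed in this thread; the `custom-scheme` URL is just an illustrative non-special scheme, not anything from the SDK:

```typescript
// origin keeps scheme + host, and only includes non-default ports:
console.log(new URL('https://domain.com:8443/path?x=1').origin); // https://domain.com:8443
console.log(new URL('https://domain.com:443/path').origin);      // https://domain.com

// Non-special schemes get an opaque origin, serialized as the string "null":
console.log(new URL('custom-scheme://domain.com/path').origin);  // null
```

So the "null" case only appears for schemes outside the special set (http, https, ws, wss, ftp, file), which should not occur for server API calls.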
```ts
  return `${getBaseUrl(url)}${newPath}`;
}

function parseQuery(fullUrl: string): { [name: string]: string } {
```
I think this could be re-implemented with more native APIs:

```ts
function convertParamsToObject(params) {
  const paramApi = new URLSearchParams(params);
  return Object.fromEntries(paramApi);
}
```

This function and its tests were originally written to accept the full url. One test that seems to fail using the URLSearchParams API is passing in a url with a trailing `?` but no params (e.g. `http://domain.com?`). This should return an empty object, but instead the API reads `http://domain.com?` as a param key.
I believe this is because it will strip a leading `?` but not a trailing `?`. We should also be cautious here because this API does not parse encoded urls, so we may get unexpected results.
https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams#no_url_parsing
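A small sketch of the failure mode described above, contrasted with parsing through `URL` first:

```typescript
// Feeding the full URL straight into URLSearchParams treats the whole
// string as the query, so the URL itself becomes a param key:
const direct = new URLSearchParams('http://domain.com?');
console.log(Object.fromEntries(direct)); // { 'http://domain.com?': '' }

// Going through URL first isolates the (empty) query correctly:
const viaUrl = new URL('http://domain.com?').searchParams;
console.log(Object.fromEntries(viaUrl)); // {}
```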
```ts
export function convertParamsToObject(fullUrl: string) {
  const params = new URL(fullUrl).searchParams;
  const paramApi = new URLSearchParams(params);
  return Object.fromEntries(paramApi);
}
```

Would this make sense to do to keep the API the same then?

Actually, we don't need the second line because the first line already returns a URLSearchParams object. Converting both foo=b%20r and foo=b+r to an object correctly converts them to {foo: 'b r'}.
However, there is still some issue with space encoding going the other way. See the comment below on convertObjectToParams.
I'd like to keep the symmetry with this method and its inverse below. I say that because there might be instances when you don't have the full URL available. Plus, it's really easy to get the string of params through this native method:

```ts
const paramString = new URL(fullUrl).search;
```

```ts
  return query;
}

function stringify(data: { [name: string]: string | undefined }): string {
```
Maybe re-implement this as well:

```ts
function convertObjectToParams(obj) {
  const paramApi = new URLSearchParams(obj);
  return paramApi.toString();
}
```

Couple things: we need to filter out undefineds from the object. Also, the API seems to encode spaces as `+` instead of `%20`. Will that be an issue?
https://en.wikipedia.org/wiki/Percent-encoding#The_application/x-www-form-urlencoded_type
I'm not sure if this will be a problem as most servers will likely be able to decode either form. I'd say we can try this method, and if we come across an issue, we can address it then.
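A quick sketch of the encoding asymmetry being discussed here:

```typescript
// URLSearchParams serializes with application/x-www-form-urlencoded rules,
// so spaces become '+'; encodeURIComponent percent-encodes them instead.
console.log(new URLSearchParams({ foo: 'b r' }).toString()); // foo=b+r
console.log(encodeURIComponent('b r'));                      // b%20r

// On the decoding side, both forms come back as the same value:
console.log(new URLSearchParams('foo=b+r').get('foo'));   // b r
console.log(new URLSearchParams('foo=b%20r').get('foo')); // b r
```

Since decoding accepts both forms, a server using a standard form/query parser should handle either encoding.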
```ts
  }
}

function resolve(baseUrl: string, path: string): string {
```
Like the baseUrl function above, I think this can be replaced with the following:

```ts
const fullUrl = URL.parse(path, base);
```

```ts
 **/

/** @hidden */
function getRealmUrlPath(realmPath?: string): string {
```
I feel like we can move this into the URL utilities file, since it relates directly to manipulating a URL value.
```ts
const names = (realmPath || '')
  .split('/')
  .map((x) => x.trim())
  .filter((x) => x !== '');

// Ensure 'root' is the first realm
if (names[0] !== 'root') {
  names.unshift('root');
}

// Concatenate into a URL path
const urlPath = names.map((x) => `realms/${x}`).join('/');
```
Let's re-evaluate this logic and see if we have more concise native methods we could use instead.
Not sure what else we can use here except for one big reduce. Although I find the way it's written now easier to read.
Oh, I wasn't commenting on just line 21, but lines 10-21. I didn't have time to research better ways of handling all of the logic here, but I was curious if there were more URL-centric methods, like the other functions I commented on, that would reduce the complexity here.

I was also referring to moving lines 10-21 into a reduce function. I couldn't find any native URL-splitting methods that would be useful here. There is a URL.pathname property, but we would still have to split that, clean it up, and stitch the realm url back together. Additionally, to construct the url we would need a base URL, which is not currently provided here.
Ah, okay, sounds good then. Let's leave it as it is. Thanks for looking into it :)
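For reference, the logic under discussion assembled into one runnable function, matching the quoted diff and kept in its readable split/map/filter form:

```typescript
// Builds a realm URL path, always rooted at 'root', e.g.
// 'alpha' -> 'realms/root/realms/alpha'.
function getRealmUrlPath(realmPath?: string): string {
  const names = (realmPath || '')
    .split('/')
    .map((x) => x.trim())
    .filter((x) => x !== '');

  // Ensure 'root' is the first realm
  if (names[0] !== 'root') {
    names.unshift('root');
  }

  // Concatenate into a URL path
  return names.map((x) => `realms/${x}`).join('/');
}

console.log(getRealmUrlPath());            // realms/root
console.log(getRealmUrlPath('root/beta')); // realms/root/realms/beta
```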
If we decide to completely remove some of these utilities, like baseUrl, stringify and such, because native methods exist, we could remove a lot of these tests as they become unnecessary.
I've reduced these functions to use native methods and tests are still passing (although I did find a bug in one of the tests for resolve). I'll conclude that we can remove these utilities and their tests.
```ts
function getBaseUrl(url: URL): string {
  return new URL(url).origin;
}

function resolve(baseUrl: string, path: string): string {
  // If the base url is invalid or missing then return an empty string
  return URL.parse(path, baseUrl)?.toString() ?? '';
}

// aka parseQuery
function convertParamsToObject(fullUrl: string): Record<string, string> {
  const paramApi = new URL(fullUrl).searchParams;
  return Object.fromEntries(paramApi.entries());
}

// aka stringify
function convertObjectToParams(obj: Record<string, string | undefined>): string {
  // Filter out undefined and empty strings
  const filteredObj: Record<string, string> = Object.fromEntries(
    Object.entries(obj).filter(([, value]) => !!value?.trim()) as [string, string][],
  );
  const paramApi = new URLSearchParams(filteredObj);
  return paramApi.toString();
}
```
```ts
 * Helper class for generating verifier, challenge and state strings used for
 * Proof Key for Code Exchange (PKCE).
 */
abstract class PKCE {
```
I think this whole utility could be reduced to this:

```ts
function createRandomString(num = 32) {
  const random = crypto.getRandomValues(new Uint8Array(num));
  return btoa(random.join('')).replace(/[^a-zA-Z0-9]+/, '');
}

function createState() {
  return createRandomString(16);
}

function createVerifier() {
  return createRandomString(32);
}

async function createChallenge(verifier) {
  const uint8Array = new TextEncoder().encode(verifier);
  const arrayBuffer = await crypto.subtle.digest('SHA-256', uint8Array);
  const charCodeArray = Array.from(new Uint8Array(arrayBuffer));
  return btoa(String.fromCharCode(...charCodeArray))
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=/g, '');
}
```

I say this because I always struggle to understand how all the original PKCE methods work holistically. Everything is abstracted "vertically". In other words, a function that calls a function that calls another function that calls yet another function ... you get lost trying to trace everything.
I also don't really know why all these functions are broken out as many are not actually used outside of this very class. So, the abstractions don't really do much other than create a cognitive overload on the reader.
If we collapse most of these methods into a single function and flatten out the flow, we get a much more readable sequence of statements. I did all I could to find more modern methods for everything this does, but the above is about as succinct as I can accomplish.
I do think this is easier to read overall; however, I don't like the use of btoa for a couple of reasons. First, it is deprecated in Node. Although this PKCE util will most likely be used on the frontend, where btoa still exists on the window object, it may be nice to extend compatibility to the backend as well. In Node.js you can use Buffer.toString('base64').

> This function is only provided for compatibility with legacy web platform APIs and should never be used in new code, because they use strings to represent binary data and predate the introduction of typed arrays in JavaScript. For code running using Node.js APIs, converting between base64-encoded strings and binary data should be performed using Buffer.from(str, 'base64') and buf.toString('base64').

This also makes it more clear that we are base 64 encoding the random string.
Also, I don't think the random string generator as written produces the expected result. If you seed the generator with a 32-octet sequence, you should receive a random string that is 43 characters long. However, the current function returns a string that is 103 characters long, so something may be off with the btoa encoding. If we use the more modern Buffer.toString() instead, we do receive an output that is 43 characters long.
I also propose we improve the random number generator by defining the set of characters that the string can have. This adds some allowed non-alphanumeric characters to the code verifier as defined in the PKCE RFC. This may mean that the createState function will then have to disallow the use of these non-alphanumeric characters as well, depending on its requirements. So perhaps createRandomString can take a second input allowedChars.
```ts
function createRandomString(length) {
  const chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~";
  let result = "";
  const randomArray = new Uint8Array(length);
  crypto.getRandomValues(randomArray);
  randomArray.forEach((number) => {
    result += chars[number % chars.length];
  });
  return result;
}
```
https://sentry.io/answers/generate-random-string-characters-in-javascript/
```ts
export function createRandomStringAJ(num = 32): string {
  const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~';
  let result = '';
  const randomArray = new Uint8Array(num);
  crypto.getRandomValues(randomArray);
  randomArray.forEach((number) => {
    result += chars[number % chars.length];
  });
  return Buffer.from(result)
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}
```
@ancheetah where is window.atob deprecated?
https://developer.mozilla.org/en-US/docs/Web/API/Window/btoa#browser_compatibility
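One way to sidestep the btoa-vs-Buffer question entirely is a tiny base64url encoder over raw bytes, which behaves identically in browsers and Node. This is just a sketch under that assumption, not code from the PR:

```typescript
// Minimal base64url (no '=' padding, as PKCE expects) over a byte array;
// uses neither btoa nor Buffer, so it runs anywhere.
const B64URL = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_';

function base64UrlEncode(bytes: Uint8Array): string {
  let out = '';
  for (let i = 0; i < bytes.length; i += 3) {
    const a = bytes[i];
    const b = i + 1 < bytes.length ? bytes[i + 1] : undefined;
    const c = i + 2 < bytes.length ? bytes[i + 2] : undefined;
    out += B64URL[a >> 2];
    out += B64URL[((a & 3) << 4) | ((b ?? 0) >> 4)];
    if (b !== undefined) out += B64URL[((b & 15) << 2) | ((c ?? 0) >> 6)];
    if (c !== undefined) out += B64URL[c & 63];
  }
  return out;
}

// A 32-byte SHA-256 digest encodes to the 43-character challenge that
// RFC 7636 expects:
console.log(base64UrlEncode(new Uint8Array(32)).length); // 43
```

The challenge step would then feed the `Uint8Array` from `crypto.subtle.digest` straight into this encoder, with no intermediate string representation of binary data.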
```ts
  redirectUri: string;
  responseType: string;
  scope: string;

function generateAndStoreAuthUrlValues(options: GenerateAndStoreAuthUrlValues) {
```

This can't be a utility, as that would violate "utilities are side-effect free, stateless functions". The fact that this interacts with sessionStorage means it would have to "live" within the Effects package.
So, this file probably shouldn't be in the Utilities package, but really in the Effects package. So, we can either delay the migration of this from the DaVinci Client into here, or go ahead and create the Effects package to start adding these types of "effect-ful" functions there.
```ts
};

it('correctly determines base URL', () => {
  const tests = [
```
@ryanbas21 @cerebrl any reason why there are duplicate tests here for each url?
I'm honestly not sure. These tests have been around for 6 years. But, if we toss this utility function it's testing with a more native method, then we can just delete a lot of these tests.
Oh, after looking at this one more time, I think I see what you're talking about here. If you're referring to the fact that some items in this array are duplicates, like ['http://domain.com', 'http://domain.com'] and/or ['https://domain.com', 'https://domain.com'], if you look closely, the difference is the scheme, http versus https.
Are you sure? Because lines 23 and 24 look exactly the same to me... Regardless, we can remove this test and opt for native methods.
I think it's the trailing slash removal he's testing? IDK, now I'm starting to think I'm missing something ... lol
```ts
    sessions: `json/${realmUrlPath}/sessions/`,
  };
  if (customPaths && customPaths[endpoint]) {
    // TypeScript is not correctly reading the condition above
```

I think this comment is no longer accurate, and we can remove the ts-ignore.
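A sketch of the narrowing that usually makes a ts-ignore here unnecessary; the `CustomPathConfig` shape and the `authorize` default path are assumptions for illustration:

```typescript
interface CustomPathConfig {
  authorize?: string;
  sessions?: string;
}

function getEndpointPath(
  endpoint: keyof CustomPathConfig,
  realmUrlPath: string,
  customPaths?: CustomPathConfig,
): string {
  const defaultPaths: Record<keyof CustomPathConfig, string> = {
    authorize: `oauth2/${realmUrlPath}/authorize`, // assumed default
    sessions: `json/${realmUrlPath}/sessions/`,
  };
  // Reading the lookup into a const narrows string | undefined cleanly,
  // so no ts-ignore is needed on the return.
  const custom = customPaths?.[endpoint];
  return custom ?? defaultPaths[endpoint];
}
```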
Added commits for the following: 653b311 to 0765858
#246 covers this PR now.
JIRA Ticket
https://pingidentity.atlassian.net/browse/SDKS-3794?atlOrigin=eyJpIjoiMjc4NWZhMGI3Y2U3NGZiM2IwZjlhYjFiYWFmYmJmOGYiLCJwIjoiaiJ9
Description
Adds the sdk-utilities package
Changeset included