
documentsReadOnly implemented #10659

Open
ikusakov2 wants to merge 13 commits into dotansimha:master from ikusakov2:feature/documentsReadOnly

Conversation

@ikusakov2
Contributor

Description

documentsReadOnly implemented, for safe handling of fragment includes between packages in the monorepo

Related: #10658 (https://github.com/dotansimha/graphql-code-generator/issues/10658)

Type of change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • Unit testing

@changeset-bot

changeset-bot bot commented Mar 31, 2026

🦋 Changeset detected

Latest commit: efabaa5

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 6 packages:

Name                                         Type
@graphql-codegen/gql-tag-operations          Minor
@graphql-codegen/visitor-plugin-common       Minor
@graphql-codegen/typescript-operations       Minor
@graphql-codegen/plugin-helpers              Minor
@graphql-codegen/cli                         Minor
@graphql-codegen/client-preset               Minor


@eddeee888 eddeee888 force-pushed the feature/documentsReadOnly branch from c88a311 to 3005df7 Compare April 3, 2026 09:14
throw error;
const readOnlyHash = JSON.stringify(readOnlyPointerMap);
const outputDocumentsReadOnly = await cache(
'documents',
Collaborator


A readonly document is also just a document for hashing purposes.
This ensures there is only one copy of each document in the final merged array, because we use the hash to detect file uniqueness.
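The hash-based deduplication described above can be sketched as follows. This is a minimal illustration, not the actual Codegen code; `DocumentLike` and `dedupeByHash` are hypothetical names standing in for `Types.DocumentFile` and the merging logic.

```typescript
// Hypothetical stand-in for Types.DocumentFile; only the hash matters
// for uniqueness, so standard and read-only documents dedupe together.
interface DocumentLike {
  hash: string;
  type: 'standard' | 'read-only';
  location: string;
}

function dedupeByHash(documents: DocumentLike[]): DocumentLike[] {
  const seen = new Set<string>();
  const result: DocumentLike[] = [];
  for (const doc of documents) {
    if (seen.has(doc.hash)) continue; // same content already included
    seen.add(doc.hash);
    result.push(doc);
  }
  return result;
}
```

Because the first occurrence wins, a standard document listed before its read-only duplicate is the one that survives the merge.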


async loadDocuments(pointer: Types.OperationDocument[]): Promise<Types.DocumentFile[]> {
async loadDocuments(
pointer: UnnormalizedTypeDefPointer,
Collaborator


The only caller of this function passes in UnnormalizedTypeDefPointer and nothing else, so we may as well narrow the type to make the code easier to work with.

Comment on lines -505 to -515
function hashDocument(doc: Types.DocumentFile) {
if (doc.rawSDL) {
return hashContent(doc.rawSDL);
}
async function addHashToDocumentFiles(
documentFilesPromise: Promise<Source[]>,
type: 'standard' | 'read-only',
): Promise<Types.DocumentFile[]> {
function hashDocument(doc: Source) {
if (doc.rawSDL) {
return hashContent(doc.rawSDL);
}

if (doc.document) {
return hashContent(print(doc.document));
}
if (doc.document) {
return hashContent(print(doc.document));
}

return null;
}
Collaborator


hashDocument is only used by one function, so I moved it inline to avoid jumping around.

documentPointers: UnnormalizedTypeDefPointer | UnnormalizedTypeDefPointer[],
config: Types.Config,
): Promise<Types.DocumentFile[]> {
): Promise<Source[]> {
Collaborator


When this function finishes, we'd only get the base Source. Types.DocumentFile is what we get after processing the Source (adding hash and type).

);

return newDocuments.map((document, index) => ({
...documents[index],
Collaborator

@eddeee888 eddeee888 Apr 3, 2026


Context for future me:

Previously we threw away the hash when optimising the doc.
When we introduce type (read-only or standard), we need it for further processing, so we need to keep it like this.

Maybe we should not add this custom Codegen metadata directly to the node, since the optimizeDocuments implementation may remove it in the future.

Maybe we can add a special key/value to reduce this risk e.g. __meta: { hash: 'abc', types: 'standard' } 🤔
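The grouped-metadata idea could look something like the sketch below. This is purely illustrative; `__meta`, `DocumentMeta`, and `attachMeta` are hypothetical names, not anything committed in this PR.

```typescript
// Hypothetical shape that groups Codegen-specific metadata under a
// single key instead of spreading it directly onto the document node,
// so an optimizer that strips unknown fields only needs to preserve
// one well-known property.
interface DocumentMeta {
  hash: string;
  type: 'standard' | 'read-only';
}

interface DocumentWithMeta {
  rawSDL?: string;
  __meta: DocumentMeta;
}

function attachMeta(doc: { rawSDL?: string }, meta: DocumentMeta): DocumentWithMeta {
  return { ...doc, __meta: meta };
}
```

A single reserved key is also easier to document as a contract with third-party document optimizers than several loose top-level fields.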

Comment on lines +37 to +38
hash: string;
type: 'standard' | 'read-only';
Collaborator

@eddeee888 eddeee888 Apr 3, 2026


Source is the default from parsing the docs.
DocumentFile is the "processed" source, with Codegen metadata.

We may want to group them to avoid edge cases mentioned in: https://github.com/dotansimha/graphql-code-generator/pull/10659/changes#r3032421307

- Flow documentsReadOnly to typescript-operations
- Set up documents-readonly dev-test
- Update dev-tests output
- Merge standard and read-only documents
- Fix Source vs Types.DocumentFile usage for type correctness
@eddeee888 eddeee888 force-pushed the feature/documentsReadOnly branch from 8f4ba67 to df25ba7 Compare April 3, 2026 10:55
Collaborator


Documents now have a type of 'standard' | 'read-only'.
If we don't supply it, the plugin will generate nothing.

This utility helps by creating Types.DocumentFile with hash and type hardcoded. If we need to make the params partial with overrides in the future, we could.

@eddeee888 eddeee888 force-pushed the feature/documentsReadOnly branch from 631abbd to 40f5ca8 Compare April 4, 2026 02:59
@eddeee888 eddeee888 force-pushed the feature/documentsReadOnly branch from 40f5ca8 to 819cf71 Compare April 4, 2026 02:59
Comment on lines +324 to +336
const populateDocumentPointerMap = (
  allDocumentsDenormalizedPointers: Types.OperationDocument[],
): UnnormalizedTypeDefPointer => {
  const pointer: UnnormalizedTypeDefPointer = {};
  for (const denormalizedPtr of allDocumentsDenormalizedPointers) {
    if (typeof denormalizedPtr === 'string') {
      pointer[denormalizedPtr] = {};
    } else if (typeof denormalizedPtr === 'object') {
      Object.assign(pointer, denormalizedPtr);
    }
  }
  return pointer;
};
Collaborator


Inlining populateDocumentPointerMap makes it a bit easier to read as we don't jump around the file. We could take this pure function and put it at the bottom of the file if needed.

export interface DocumentFile extends Source {
hash?: string;
hash: string;
type: 'standard' | 'read-only';
Contributor Author


Two non-optional fields are introduced here. Any existing custom plugin or preset that constructs a DocumentFile manually will now hit a TypeScript compile error. This is a breaking change, not the minor bump declared in the changeset.

Collaborator


Any existing custom plugin or preset that constructs a DocumentFile manually will now have a TypeScript compile error.

That's a good point. I can make this optional and treat a missing type as standard.
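The backwards-compatible variant being discussed could be sketched like this. The names `DocumentFileCompat` and `resolveType` are illustrative, not the actual PR code.

```typescript
// Hypothetical compatibility shape: the new fields stay optional so
// existing plugins that construct DocumentFile objects manually keep
// compiling, and a missing `type` is interpreted as 'standard'.
interface DocumentFileCompat {
  hash?: string;
  type?: 'standard' | 'read-only';
}

function resolveType(doc: DocumentFileCompat): 'standard' | 'read-only' {
  return doc.type ?? 'standard';
}
```

Downstream code then branches on `resolveType(doc)` instead of reading `doc.type` directly, so old and new document shapes behave identically.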

processedFile[file.hash] = true;

return prev;
}, []);
Contributor Author


Looks like the reduce accumulator prev is never used — outputDocuments.push(file) writes to an external array. The return value of the entire reduce is discarded.
It works, by chance, but this is confusing. Maybe a plain for...of loop would be cleaner?

Additionally, if a document has hash: null (which hashDocument() can return when rawSDL and document are both absent), processedFile[null] will be "null" — meaning all null-hash documents get deduped down to 1, which may be wrong.
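One way to avoid the null-hash collision described above is to dedupe only documents that actually have a hash. This is a hedged sketch under assumed names (`HashedDoc`, `dedupeKeepingNullHashes`), not the code in the PR.

```typescript
interface HashedDoc {
  hash: string | null;
  location?: string;
}

// Only dedupe documents with a computed hash; documents whose hash
// could not be computed are passed through untouched, instead of all
// colliding on the stringified "null" key of a plain object map.
function dedupeKeepingNullHashes(docs: HashedDoc[]): HashedDoc[] {
  const seen = new Set<string>();
  const out: HashedDoc[] = [];
  for (const doc of docs) {
    if (doc.hash !== null) {
      if (seen.has(doc.hash)) continue;
      seen.add(doc.hash);
    }
    out.push(doc);
  }
  return out;
}
```

Using a `Set<string>` rather than a `Record` also sidesteps the implicit key stringification that caused the `"null"` collision in the first place.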

Collaborator


Looks like the reduce accumulator prev is never used — outputDocuments.push(file) writes to an external array. The return value of the entire reduce is discarded.
It works, by chance, but this is confusing. Maybe a plain for...of loop would be cleaner?

That's true! I think I was trying to reduce the result to outputDocuments, but it's accessible inside the loop anyways. I've made it for...of in 7ec7e06

Additionally, if a document has hash: null (which hashDocument() can return when rawSDL and document are both absent), processedFile[null] will be "null" — meaning all null-hash documents get deduped down to 1, which may be wrong.

Huh, that's true, the typecheck in this repo is not strict enough. Something I'll need to configure after the major release.

I wonder when null would happen though 🤔 When Codegen is parsing an empty file?

'@graphql-codegen/client-preset': minor
---

Add support for `documentsReadOnly`
Contributor Author


It's the last chance to consider other names, such as externalDocuments or referenceDocuments.
documentsReadOnly has its clear benefits as well, so I don't mind keeping it.

Collaborator


The pro of documentsReadOnly is that it looks related to documents.
The con of documentsReadOnly is that it doesn't sound like English.

externalDocuments sounds better. I'm happy to go with that if you are 🙂

Contributor Author

@ikusakov2 ikusakov2 Apr 8, 2026


Sorry for bringing this one up!
It's really hard to decide!

  • documentsReadOnly has clear benefits. It is self-descriptive and appears near documents in code completion, which is quite valuable as well. But it is not proper English.

  • externalDocuments: The feature is about documents that are loaded for reference but don't produce output. ReadOnly describes how the current output treats them, not a property of the documents themselves, which makes the documentsReadOnly name slightly confusing. externalDocuments is a proper name in terms of English and semantics, but it is not self-explanatory.

I guess a user of this new feature would have to go to the documentation in any case (either slightly confusing name or not informative enough name), so a better sounding name has a slight benefit, imho.

documentPointers: UnnormalizedTypeDefPointer | UnnormalizedTypeDefPointer[],
config: Types.Config,
): Promise<Types.DocumentFile[]> {
): Promise<Source[]> {
Contributor Author


The 2 new fields were added to Types.DocumentFile, not to Source, which makes it harder to see where the contract is established.

Collaborator


Sorry I don't fully understand this statement. However I've traced the CLI code to use the right type:

  • before adding metadata (hash, type), the documents are Source
  • after adding metadata the documents are DocumentFile

I feel this may not answer your concern. Could you please help me understand this a bit more?

Contributor Author


I was a bit worried that Types.DocumentFile[] has 2 new properties (hash and type) while Source doesn't have them, which might mislead a user.
But it is correct from a polymorphism perspective, so we can leave it as is.

Comment on lines +401 to +413
const processedFile: Record<string, true> = {};
const mergedDocuments = [
  ...outputDocumentsStandard,
  ...outputDocumentsReadOnly,
];
for (const file of mergedDocuments) {
  if (processedFile[file.hash]) {
    continue;
  }
  outputDocuments.push(file);
  processedFile[file.hash] = true;
}
Collaborator

@eddeee888 eddeee888 Apr 8, 2026


I've considered forwarding documentsReadOnly as-is but hit a few annoying cases:

  • documents are processed a lot after the initial loading stage here. If we kept documentsReadOnly separate, we'd need to run the same functions over them
  • Plugins use positional params, which means documentsReadOnly would need to be in the last position to avoid a breaking change. This works, but feels disconnected

For these reasons, I feel it's easier to merge them as early as possible and run the same processing over both types; plugins like typescript-operations can then handle the filtering without having to access another param
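With the merged-array approach described above, a plugin's filtering step could look roughly like this. The names `TaggedDocument` and `documentsToGenerate` are illustrative; the actual typescript-operations filtering may differ.

```typescript
interface TaggedDocument {
  hash: string;
  type: 'standard' | 'read-only';
}

// Read-only documents are loaded so fragment includes can be resolved
// across packages, but they should not produce generated output
// themselves, so a plugin filters them out before emitting types.
function documentsToGenerate(documents: TaggedDocument[]): TaggedDocument[] {
  return documents.filter(doc => doc.type !== 'read-only');
}
```

Because the filter reads a field on each document rather than a separate parameter, the plugin signature stays unchanged and the read-only semantics travel with the documents through every processing stage.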
