[RFC]: Extend stdlib's doctesting approach to C examples #189

@Om-A-osc

Description

Full name

Om Anand

University status

Yes

University name

Indian Institute of Technology, Bhilai

University program

Bachelor of Technology in Computer Science and Engineering

Expected graduation

2027

Short biography

I am Om Anand, a pre-final year Computer Science undergraduate at the Indian Institute of Technology (IIT) Bhilai, with a strong interest in systems programming, open-source development, and building efficient, scalable software.

In terms of practical experience, I have worked as a Flutter Intern at Techshots, where I developed and deployed a cross-platform mobile application with 10k+ downloads. My work involved implementing CI/CD pipelines, integrating push notifications, and ensuring production-level reliability through testing and monitoring. I have worked on several system-oriented and machine learning projects, including a customer churn prediction pipeline using Artificial Neural Networks. Currently, I am working on Accelerating Kernel K-means on GPUs through Sparse Linear Algebra as part of a course on parallelization of programs. This project involves CUDA and C programming and leveraging NVIDIA GPU architectures to optimize clustering algorithms, giving me hands-on experience with parallel computing and performance optimization.

I am proficient in C, C++, Python, and JavaScript/TypeScript, and have experience working with frameworks and tools such as Node.js, React, TensorFlow, Docker, and Git. Beyond technical work, I am also actively involved in the open-source community and serve as a mentor in OpenLake, the open-source contribution club at IIT Bhilai, where I help others get started with contributing to real-world projects. My strong foundation in programming, combined with my experience in building efficient systems and contributing to open source, aligns well with the goals of the stdlib project. I am highly motivated to contribute meaningfully and make a lasting impact through GSoC.

Timezone

Asia/Kolkata (UTC+05:30)

Contact details

email: omanand8132@gmail.com, email: oma@iitbhilai.ac.in, github: https://github.com/Om-A-osc

Platform

Linux

Editor

I use both VS Code and Vim depending on the task. For larger projects, I prefer VS Code because of its extensive ecosystem of extensions, such as Markdown preview, port forwarding, and Docker integration, which significantly improve productivity. For quick edits in the terminal, I use Vim, as it lets me make small changes without leaving the command line.

Programming experience

I love building projects, and most of my work has been in JavaScript, TypeScript, and Node.js. I have also worked with C#, Dart, C, and C++, exploring a range of domains from backend systems to real-time applications and game development.

Real-Time Communication & WebRTC

AnonMate Backend - https://github.com/Om-A-osc/AnonMate-Backend
A Node.js powered backend service for AnonMate that handles WebRTC signaling, including RTC offer/answer negotiation and ICE candidate management, enabling seamless peer-to-peer connections.

AnonMate - https://github.com/Om-A-osc/AnonMate
A real-time browser-based video chatting platform leveraging modern web technologies. Built with React + TypeScript, Socket.IO for real-time messaging, and WebRTC for peer-to-peer video/audio streaming. Features include user matching, message synchronization, video/audio controls (mute, camera toggle, call termination), and responsive UI design.

Browser Extension Development

Fockus - https://github.com/Om-A-osc/FocKus
A Chrome productivity extension that walks the DOM tree to intelligently highlight, hide, and manage webpage elements. Enhances focus by filtering out distracting content and helping users concentrate on what matters.

Game Development

DragonWarrior - https://github.com/Om-A-osc/DragonWarrior
A game development project built with Visual Studio + C# / Unity.
Browser build ready to play - https://omanand857.itch.io/dragonwarrior

Automated Social Media Bot

An AI powered bot for managing Twitter pages. This application helps users automate and optimize their Twitter presence with real-time analytics and management tools.

JavaScript experience

I have been working with JavaScript for nearly three years, starting from my first year of university when I began learning web development. Since then, most of my projects have been built using JavaScript and TypeScript, and over time I’ve developed a strong preference for the language due to its flexibility and wide ecosystem.
My favorite feature of JavaScript is its asynchronous programming model, especially with async/await, which makes it much easier to handle complex workflows like API calls, real-time systems, and background tasks in a clean and readable way.
My least favorite aspect of JavaScript is its sometimes confusing and inconsistent behavior, particularly around type coercion and equality (== vs ===).
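For instance, loose equality triggers implicit coercion in ways that are easy to misread (a small illustrative snippet, not tied to any project above):

```javascript
// Loose equality (==) applies implicit type coercion before comparing,
// while strict equality (===) compares type and value with no coercion.
console.log( 0 == '' );            // => true  ('' is coerced to 0)
console.log( 0 === '' );           // => false (different types)
console.log( null == undefined );  // => true  (special-cased by ==)
console.log( null === undefined ); // => false
console.log( [] == false );        // => true  ([] -> '' -> 0, false -> 0)
```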

Node.js experience

I have built all my backends using Node.js, and all of the above-mentioned projects include Node.js-based backend systems. While contributing to stdlib, I also gained experience with N-API, learning how Node.js can interface with C code to create native addons (.node files), which strengthened my understanding of Node.js at a deeper level.

C/Fortran experience

C was the first programming language I learned, as it was part of my school curriculum. At university, I am currently taking a course on parallelization of programs, which involves regular coding in C and CUDA. I have also completed an Operating Systems course in which I implemented system calls in the MIT xv6 operating system using C. In addition, I have solved over 700 problems on LeetCode, using a mix of C and primarily C++, which has strengthened my interest in low-level and native languages.

You can find my coding profile here: https://leetcode.com/u/snail_code_857/

While I haven’t worked with Fortran yet, I am a quick learner and confident in picking up new languages as needed.

Interest in stdlib

I first learned about stdlib during an open-source event at my college where Gunj Joshi spoke about contributing to impactful open-source projects. While discussing different projects, he mentioned stdlib and its vision of bringing numerical and scientific computing to the web. Since I have always enjoyed mathematics and problem solving, the idea of contributing to a project focused on mathematical and statistical computing immediately caught my interest.

Modern web browsers have become incredibly powerful execution environments. Today they are capable of running complex applications, performing heavy computations, and supporting advanced developer tooling. In this context, building a comprehensive numerical and scientific computation library in JavaScript feels like a very natural and forward-looking idea. stdlib aims to make high-quality mathematical and statistical tools easily accessible in both the browser and Node.js ecosystem, which is something I find very exciting.

Another reason I was drawn to stdlib is the quality and engineering discipline behind the project. The emphasis on correctness, testing, documentation, and modular design makes it a very interesting codebase to study and contribute to. I also found the community to be extremely welcoming. Through discussions on Zulip and GitHub, I was able to learn a lot about the development workflow and the design philosophy behind the project.

More broadly, I strongly believe in the power of open source and the impact it can have when people collaborate to build tools that benefit everyone. Contributing to stdlib allows me to combine my interest in mathematics, software engineering, and open-source collaboration, which is why it is a project I am genuinely excited to work on through Google Summer of Code.

Version control

Yes

Contributions to stdlib

So far, I have submitted over 110 pull requests, including:

  • ~80 pull requests merged
  • ~30 pull requests currently under review

Examples of Merged Contributions

Examples of open PRs

Examples of Merged Contributions Correcting C Code Blocks

Participation in Community Discussions

In addition to submitting code contributions, I actively participate in technical discussions on the stdlib Zulip channels.

Some examples include:

stdlib showcase

An interactive web-based tool for visualizing and exploring probability distributions with real-time calculations. Powered by stdlib and built with React, this visualizer makes understanding probabilistic distributions intuitive and accessible.
source code - https://github.com/Om-A-osc/probability-visualizer
website - https://om-a-osc.github.io/probability-visualizer/

Goals

Title

Extend stdlib's doctesting approach to C examples

Abstract

The stdlib project currently provides a robust doctesting architecture for JavaScript examples. JavaScript documentation and source examples are automatically validated through multiple tools, including ESLint doctest rules, JSDoc example validation, and Markdown execution plugins. These tools ensure that examples embedded in source files and README documentation remain executable and synchronized with the actual implementation.

However, C examples within stdlib packages have historically not been part of the same enforced validation pipeline. While many packages contain C implementations and documentation snippets, the examples in C source files and README documentation are not consistently executed as part of contributor or CI workflows. As a result, numeric expectations in // returns annotations and documentation snippets can drift over time, particularly for floating-point heavy APIs.

To address this gap, this project proposes extending stdlib’s doctesting architecture so that C examples receive the same level of automated validation as JavaScript examples.

Proposed Work

1. Source-level C doctest execution

  • Parse Doxygen-style /** ... */ comment blocks using regular expressions to extract @example sections and associated // returns annotations via extractExamples().
  • Generate test programs using an append-to-source approach: the doctest main() function would be appended to a temporary copy of the original source file, ensuring static functions remain accessible and avoiding multiple-definition linker errors.
  • Inject #line directives so that GCC reports compilation errors relative to the original source file lines.
  • Use interleaved printf generation so that each printf is emitted inline immediately after the corresponding // returns annotation, ensuring multi-annotation examples capture intermediate variable values correctly.
  • Auto-discover package header files via findPackageHeaders() so the generated doctest code has proper function declarations.
  • Hoist inline function definitions above main() and rename them with a __doctest_ prefix when they conflict with functions in the original source.
  • Compile and execute the generated programs using resolved dependency information and @stdlib/utils/library-manifest.
  • Parse tagged runtime outputs and validate them using existing shared doctest utilities such as @stdlib/_tools/doctest/compare-values and @stdlib/_tools/doctest/create-annotation-value.
  • Provide this functionality through a CLI (bin/cli) and corresponding Makefile targets (make doctest-c, make doctest-c-files).

2. Markdown README C example validation

  • Implement a Markdown execution pipeline using the @stdlib/_tools/remark/plugins/remark-run-c-examples plugin (proposed).
  • Traverse the Markdown AST to locate C code blocks within usage and example sections.
  • Convert usage snippets containing // returns annotations into temporary doctest programs for compilation and validation.
  • Automatically infer required stdlib dependencies by scanning #include <stdlib/...> headers and resolving them via package manifests.

3. Safe dependency resolution and temporary execution environment

  • Resolve include paths, libraries, and source dependencies through stdlib manifest metadata.
  • Create temporary augmented manifests when required dependencies are missing, ensuring real manifest.json files are never modified.
  • Execute generated programs in isolated temporary directories and clean up all temporary artifacts after validation.

4. Unified doctest validation

  • Parse runtime outputs tagged with doctest markers.
  • Reuse stdlib’s existing comparison utilities to maintain consistent numeric validation semantics across JavaScript and C examples.

Expected Outcome

By integrating these pipelines into stdlib’s existing Make and CI workflows, C documentation examples and source-level doctests will become automatically executable and continuously validated.

By the end of the GSoC period, the project will deliver:

  • A fully functional doctest/c/ CLI integrated into stdlib
  • A remark-run-c-examples plugin supporting all C README usage and examples blocks
  • Makefile targets:
    • make doctest-c
    • make markdown-examples-c
  • CI integration for changed-file doctesting
  • Support for all annotation types currently present in the stdlib C codebase
  • Comprehensive test suites for both tools
  • Documentation for contributors

Architecture Overview

The proposed C doctesting framework will consist of two independent tools addressing different file types:

graph TB

    subgraph "Input Files"
        SRC["C Source Files<br/>(main.c)"]
        README["Markdown Files<br/>(README.md)"]
    end

    subgraph "C Source Doctesting"
        CDT["doctest/c/ tool"]
        EE["extract_examples.js<br/>Parse @example from Doxygen"]
        GTP["generate_test_program.js<br/>Wrap in main(), add printf"]
        RI["resolve_includes.js<br/>Resolve deps via manifest.json"]

        CDT --> EE --> GTP
        CDT --> RI
    end

    subgraph "Markdown C Doctesting"
        RC["remark-run-c-examples plugin"]
        CC["collectCCodeBlocks()<br/>Walk AST, classify blocks"]
        GU["generateUsageTestProgram()<br/>Generate test program for usage block with/without returns, example block"]
        RCD["resolveCodeDependencies()<br/>Enhanced dep resolution"]

        RC --> CC --> GU
        RC --> RCD
    end

    subgraph "Shared Utilities (from JS doctesting)"
        CV["compare-values"]
        CAV["create-annotation-value"]
    end

    subgraph "Compilation & Execution"
        GCC["gcc compile"]
        RUN["Execute binary"]
        PARSE["Parse __DOCTEST_RESULT__ stdout"]

        GCC --> RUN --> PARSE --> CV
    end

    subgraph "Dependency Resolution Core"
        LM["library-manifest"]
    end

    subgraph "Feedback"
        SUG["Suggest Correct Annotation"]
        PASS["Doctest Passed"]
        FAIL["Codeblock Run Failed"]
    end

    subgraph "Make Targets"
        MDC["make doctest-c"]
        MC["make markdown-examples-c"]
    end

    SRC --> CDT
    README --> RC

    GTP --> GCC
    RI --> GCC

    GU --> GCC
    RCD --> GCC

    RI --> LM
    RCD --> LM

    CDT -.-> MDC
    RC -.-> MC

    CV -->|Pass| PASS
    CV -->|Fail| CAV --> SUG

    GCC -->|Compile Error| FAIL
    RUN -->|Runtime Error / Timeout| FAIL

    style SRC fill:#659ad2,color:#000
    style README fill:#8bc34a,color:#000
    style CV fill:#ff9800,color:#000
    style CAV fill:#ff9800,color:#000
    style LM fill:#ff9800,color:#000
    style FAIL fill:#f44336,color:#ffffff
    style PASS fill:#4caf50,color:#ffffff

Markdown C Doctesting

Overview

Markdown files (README.md) in stdlib contain C API documentation within <section class="c"> blocks. These blocks have two subsections with distinct purposes:

  • <section class="usage">: code snippets demonstrating function signatures and individual API calls, often containing // returns annotations
  • <section class="examples">: complete, self-contained C programs (with int main()) showcasing realistic usage

The proposed remark-run-c-examples plugin will process both subsections using different strategies.

Example: C Section in README.md

A typical C section in a README looks like:

<section class="c">

<section class="usage">

### Usage

```c
#include "stdlib/math/base/special/abs.h"
```

#### stdlib_base_abs( x )

```c
double y = stdlib_base_abs( -1.0 );
// returns 1.0
```

</section>

<!-- /.usage -->

<section class="examples">

### Examples

```c
#include "stdlib/math/base/special/abs.h"
#include <stdio.h>

int main( void ) {
    const double x[] = { -5.0, -3.14, 0.0, 3.14, 5.0 };
    int i;
    for ( i = 0; i < 5; i++ ) {
        printf( "abs(%lf) = %lf\n", x[i], stdlib_base_abs( x[i] ) );
    }
}
```

</section>

<!-- /.examples -->

</section>

<!-- /.c -->

Block Collection and Classification

The plugin will walk the Markdown AST and classify C code blocks as follows:

graph LR
    A["Walk AST"] --> B{"Inside<br/>section.c?"}
    B -->|No| SKIP["Skip"]
    B -->|Yes| C{"Which subsection?"}
    C -->|"section.usage"| D{"First block?"}
    D -->|Yes| E["Cache as preamble<br/>(usagePrelude)"]
    D -->|No| F{"Has // returns?"}
    F -->|Yes| G["type: usage<br/>assertions: true<br/>Prepend preamble"]
    F -->|No| H["type: usage<br/>assertions: false<br/>Prepend preamble"]
    C -->|"section.examples"| I["type: example<br/>(full program)"]

Preamble caching: The first usage code block (which typically contains only #include directives) will be cached and prepended to all subsequent usage blocks. This would mirror the JavaScript remark plugin's behavior, where the first usage block (containing require() imports) is cached similarly.
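A minimal sketch of this collection logic, assuming mdast-style code nodes with a `value` property (the function name and object shapes here are illustrative, not an existing stdlib API):

```javascript
// Sketch: classify C code blocks within a `section.usage` subsection,
// caching the first block (the `#include` preamble) and prepending it
// to every subsequent block before test program generation.
function classifyUsageBlocks( blocks ) {
    var preamble = '';
    var out = [];
    var i;
    for ( i = 0; i < blocks.length; i++ ) {
        if ( i === 0 ) {
            // First usage block: cache as preamble; no test is generated...
            preamble = blocks[ i ].value;
            continue;
        }
        out.push({
            'type': 'usage',
            'assertions': /\/\/ returns/.test( blocks[ i ].value ),
            'code': preamble + '\n' + blocks[ i ].value
        });
    }
    return out;
}
```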

Processing Steps by Block Type

Usage blocks with // returns annotations:

  1. Preamble + code are combined into a single source snippet
  2. generateUsageTestProgram() transforms the snippet:
    • Extracts #include directives → places them at the top
    • Identifies // returns <value> annotations
    • For each annotation, finds the preceding variable assignment (e.g., double y = func(x);)
    • Replaces the // returns line with printf("__DOCTEST_RESULT__:%.17g\n", (double)varName);
    • Wraps remaining code in int main( void ) { ... return EXIT_SUCCESS; }
  3. Dependency resolution via resolveCodeDependencies() (detailed in the Dependency Resolution section below)
  4. Compilation with gcc using resolved include dirs, source files, and libraries
  5. Execution of the compiled binary
  6. Validation: stdout would be parsed for __DOCTEST_RESULT__: prefixed lines. Each result would be compared against its corresponding annotation using the shared compareValues() utility

Usage blocks without // returns:

  1. Same preamble prepending and test program generation (wrapping in main())
  2. The generated program would be compiled and executed, checking only that compilation succeeds and the program runs without errors
  3. No annotation validation

Example blocks (full programs):

  1. These blocks would be compiled and executed directly as-is (they already contain int main())
  2. The proposed validation would check only for compilation/runtime errors, with no // returns validation

Test Program Generation Example

Given this usage code block:

#include "stdlib/math/base/special/abs.h"

double y = stdlib_base_abs( -1.0 );
// returns 1.0

y = stdlib_base_abs( 0.0 );
// returns 0.0

The generated test program would look like:

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "stdlib/math/base/special/abs.h"

int main( void ) {
    double y = stdlib_base_abs( -1.0 );
    printf( "__DOCTEST_RESULT__:%.17g\n", (double)y );
    y = stdlib_base_abs( 0.0 );
    printf( "__DOCTEST_RESULT__:%.17g\n", (double)y );
    return EXIT_SUCCESS;
}

The binary would output:

__DOCTEST_RESULT__:1
__DOCTEST_RESULT__:0

These values would then be parsed and compared against the annotations 1.0 and 0.0, respectively.
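The parse-and-compare step could be sketched as follows; `compare` here is a deliberately simplified stand-in for stdlib's `@stdlib/_tools/doctest/compare-values` (exact numeric equality only, no `~` handling):

```javascript
// Sketch: extract tagged doctest results from captured stdout and pair
// each result with its expected annotation value.
var TAG = '__DOCTEST_RESULT__:';

function parseResults( stdout ) {
    return stdout
        .split( '\n' )
        .filter( function ( line ) { return line.indexOf( TAG ) === 0; } )
        .map( function ( line ) { return line.slice( TAG.length ); } );
}

// Simplified stand-in for the shared `compare-values` utility:
function compare( actualStr, expectedStr ) {
    return parseFloat( actualStr ) === parseFloat( expectedStr );
}

var stdout = '__DOCTEST_RESULT__:1\n__DOCTEST_RESULT__:0\n';
var expected = [ '1.0', '0.0' ];
var results = parseResults( stdout );
var pass = results.every( function ( v, i ) { return compare( v, expected[ i ] ); } );
console.log( pass ); // => true
```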


Source File C Doctesting

Overview

C source files (e.g., src/main.c) often contain Doxygen-style /** ... */ comments with @example tags. The proposed @stdlib/_tools/doctest/c/ Node.js tool would extract @example blocks, generate test programs, compile and run them, and validate // returns annotations. The tool would use an append-to-source compilation strategy, and it would expose a CLI, Makefile targets, and enhanced dependency resolution.

Example: @example in a C Source File

/**
* Computes the absolute value of a double-precision floating-point number.
*
* @param x    input value
* @returns    absolute value
*
* @example
* double y = stdlib_base_abs( -1.0 );
* // returns 1.0
*
* @example
* double y = stdlib_base_abs( 0.0 );
* // returns 0.0
*/
double stdlib_base_abs( const double x ) {
    if ( x < 0.0 ) {
        return -x;
    }
    return x;
}

Extracted Data Structures

extractExamples() will parse the Doxygen comment and produce a list of example objects. For the source file above, the output will be:

[
  {
    "line": 7,
    "code": "double y = stdlib_base_abs( -1.0 );\n// returns 1.0",
    "assertions": [
      { "expected": "1.0", "approximate": false, "lineOffset": 1 }
    ]
  },
  {
    "line": 11,
    "code": "double y = stdlib_base_abs( 0.0 );\n// returns 0.0",
    "assertions": [
      { "expected": "0.0", "approximate": false, "lineOffset": 1 }
    ]
  }
]

Under the proposed design, each example would map to one @example block, and each assertion would map to one // returns line within that block:

| Field | Source |
| --- | --- |
| `line` | Line number of the `@example` tag in the original source file |
| `code` | All code lines between `@example` and the next tag (with the `*` prefix stripped) |
| `assertions[].expected` | The value string after `// returns` (e.g., `"1.0"`, `"~0.11"`, `"NaN"`) |
| `assertions[].approximate` | `true` if the annotation starts with `~` |
| `assertions[].lineOffset` | Position of `// returns` relative to the `@example` tag |
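A simplified sketch of how `extractExamples()` could parse a Doxygen comment body (the real tool is proposed, not yet implemented; this version handles only the basic cases above):

```javascript
// Sketch: split a Doxygen comment body on `@example` tags, strip the
// leading `* ` from comment lines, and record `// returns` annotations.
function extractExamples( comment ) {
    var lines = comment.split( '\n' );
    var examples = [];
    var current = null;
    var i;
    for ( i = 0; i < lines.length; i++ ) {
        var line = lines[ i ].replace( /^\s*\*\s?/, '' );
        if ( /^@example\b/.test( line ) ) {
            // `line` here is the 1-based position within the comment block:
            current = { 'line': i + 1, 'code': [], 'assertions': [] };
            examples.push( current );
            continue;
        }
        if ( current === null || /^@\w+/.test( line ) ) {
            current = null; // any other tag ends the current example
            continue;
        }
        var m = /^\/\/ returns (~?)(.+)$/.exec( line );
        if ( m ) {
            current.assertions.push({
                'expected': m[ 2 ],
                'approximate': m[ 1 ] === '~',
                'lineOffset': current.code.length // offset within the example code
            });
        }
        current.code.push( line );
    }
    return examples;
}
```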

Test Program Generation and Append-to-Source Mode

The proposed tool would use an append-to-source compilation strategy. Instead of generating a standalone .c file, generateTestProgram() would produce a suffix that would be appended to a temporary copy of the original source file. This approach would ensure:

  1. static functions in the source would remain accessible to the doctest main().
  2. Header-defined symbols would be compiled in a single translation unit, avoiding multiple definition linker errors.
  3. #line directives would map GCC errors back to the original source file.

For the first @example block above, the temporary composite file that GCC would compile would look like:

/*  Original source file (prepended with #line directive) */

#line 1 "/path/to/src/main.c"
/**
* Computes the absolute value of a double-precision floating-point number.
*
* @param x    input value
* @returns    absolute value
*
* @example
* double y = stdlib_base_abs( -1.0 );
* // returns 1.0
*
* @example
* double y = stdlib_base_abs( 0.0 );
* // returns 0.0
*/
double stdlib_base_abs( const double x ) {
    if ( x < 0.0 ) {
        return -x;
    }
    return x;
}

/* Generated doctest suffix (appended)  */

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include "stdlib/math/base/special/abs.h"

int main( void ) {
    #line 8 "/path/to/src/main.c"
    double y = stdlib_base_abs( -1.0 );
    printf( "__DOCTEST_RESULT__:%.17g\n", (double)y );
    return EXIT_SUCCESS;
}

In buildCompileCommand(), the original source file path in the gcc command would be replaced with the temporary composite file so that GCC would compile the augmented version. The #line directives would ensure that any compilation errors are reported against the original source file lines.

Binary output:

__DOCTEST_RESULT__:1

Validation: parseOutputValue("1") would yield 1.0, and compareValues(1.0, "1.0") would pass.
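A hedged sketch of the `parseOutputValue()` behavior described here, covering the non-finite spellings that C's `printf` may emit (the actual implementation may differ):

```javascript
// Sketch: convert a `%.17g`-formatted C output string into a JS number,
// mapping printf's textual spellings of non-finite values.
function parseOutputValue( str ) {
    var s = str.trim().toLowerCase();
    if ( s === 'nan' || s === '-nan' ) {
        return NaN;
    }
    if ( s === 'inf' || s === 'infinity' ) {
        return Infinity;
    }
    if ( s === '-inf' || s === '-infinity' ) {
        return -Infinity;
    }
    return parseFloat( str );
}

console.log( parseOutputValue( '1' ) );    // => 1
console.log( parseOutputValue( '-inf' ) ); // => -Infinity
```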

Pipeline

graph TB
    A["C source file"] -->|"extractExamples()"| B["List of example objects<br/>(code, assertions, line)"]
    B -->|"For each example"| C["generateTestProgram()<br/>+ source file #includes"]
    A -->|"resolveIncludes()"| D["manifest.json → includes,<br/>sources, libraries, libpath"]
    
    C --> E["Write .c → Compile (gcc) → Run binary"]
    D --> E
    
    E --> H["Parse __DOCTEST_RESULT__"]
    H --> I["compareValues()"]
    I -->|Match| J[" DOCTEST PASSED"]
    I -->|Mismatch| K["DOCTEST FAILED<br/>+ createAnnotationValue()"]

Processing Steps

  1. Read the source file and resolve its package root (directory containing package.json)
  2. extractExamples() — Parses the file for /** ... */ comment blocks:
    • Tracks entry into @example tags
    • Collects example code lines (stripping the leading * from comment lines)
    • Identifies // returns annotations and whether they're approximate (~ prefix)
    • Each example produces an object: { code, assertions: [{ expected, approximate, lineOffset }], line }
  3. resolveIncludes() — Uses the package's manifest.json with @stdlib/utils/library-manifest to resolve:
    • Include directories (-I flags)
    • Source files to compile alongside (transitive deps)
    • Libraries to link (-l flags)
    • Library paths (-L flags)
    • Uses task: 'examples' from manifest.json
    • Filters out N-API source files from transitive dependencies (they require Node.js headers not available during standalone gcc compilation)
    • Additionally, packages that are themselves N-API utilities (e.g., assert/napi/equal-typedarray-types, math/base/napi/unary) are excluded at the Makefile level via path-based filtering (-not -path '*/napi/*'), since their own headers #include <node_api.h> and cannot be compiled without Node.js build infrastructure
  4. generateTestProgram() — For each @example block:
    • Always includes <stdio.h>, <math.h>, <stdlib.h>
    • Extracts #include directives from the original source file
    • Strips // returns lines from the example code
    • For each // returns, finds the preceding variable assignment using regex:
      /^\s*(double|float|int|int8_t|...|bool|size_t|...)\s+(\w+)\s*=\s*(.+?)\s*;\s*$/
      
    • Adds printf("__DOCTEST_RESULT__:%.17g\n", (double)varName); for each matched variable
    • Wraps everything in int main( void ) { ... return EXIT_SUCCESS; }
  5. Compile with gcc (flags: -std=c99 -O3 -Wall by default)
  6. Run the binary, capture stdout
  7. Parse stdout for __DOCTEST_RESULT__: lines → parseOutputValue() converts strings to JS numbers (handles nan, inf, -inf)
  8. Compare each result against its annotation using compareValues()
  9. Report aggregated results: total, pass, fail, with detailed error messages

Shared Utilities: Reusing JS Infrastructure

A key design principle of the proposed C doctesting framework is maximizing reuse of the existing JavaScript doctesting infrastructure. The following utilities would be shared:

@stdlib/_tools/doctest/compare-values

The core comparison engine already handles the annotation patterns needed for this proposal:

| Annotation | Handling |
| --- | --- |
| `5.0` | Exact numeric comparison via `epsdiff()` |
| `~6.28` | Approximate equality using `roundn()` at matching decimal precision |
| `NaN` | `isnan( actual )` check |
| `Infinity`, `-Infinity` | Direct equality against `PINF`/`NINF` |
| `1.0e-9` | Scientific notation via `parseFloat()` + `toPrecision()` |

For C doctesting, the subset of comparison logic used is primarily numeric (since C printf outputs numeric strings). The annotation value string from the C code (e.g., "~0.11") is passed directly to compareValues() along with the JS number parsed from stdout.
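The approximate (`~`) comparison described above can be sketched as follows; `roundn` here is a minimal stand-in for `@stdlib/math/base/special/roundn`:

```javascript
// Sketch: round to `-n` decimal places (stand-in for stdlib's roundn).
function roundn( x, n ) {
    var s = Math.pow( 10, -n );
    return Math.round( x * s ) / s;
}

// Compare an actual value against a `~`-prefixed annotation, matching
// at the number of decimal places displayed in the annotation.
function compareApprox( actual, annotation ) {
    var expectedStr = annotation.slice( 1 ); // strip the leading '~'
    var decimals = ( expectedStr.split( '.' )[ 1 ] || '' ).length;
    return roundn( actual, -decimals ) === roundn( parseFloat( expectedStr ), -decimals );
}

console.log( compareApprox( 0.109847, '~0.11' ) ); // => true
console.log( compareApprox( 0.120001, '~0.11' ) ); // => false
```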

@stdlib/_tools/doctest/create-annotation-value

This utility serializes a JavaScript value back into annotation format. It would be used in error messages to show the actual value when a doctest fails. For example, if the expected annotation is ~0.11 but the actual value is 0.109847822..., the error message would show actual: ~0.11 (rounded appropriately).

@stdlib/utils/library-manifest

This utility recursively resolves C build dependencies from manifest.json files. Each stdlib C package declares its dependencies in a manifest.json:

{
    "confs": [{
        "task": "examples",
        "src": ["./src/main.c"],
        "include": ["./include"],
        "dependencies": [
            "@stdlib/math/base/special/floor"
        ]
    }]
}

The library-manifest utility can recursively walk all dependencies, collecting include directories, source files, libraries, and library paths needed for compilation.
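Conceptually, the recursive walk resembles the following sketch, where `loadManifest` is a hypothetical loader mapping a package name to its parsed manifest config (the real `library-manifest` utility additionally handles tasks, options, and filesystem path resolution):

```javascript
// Sketch: recursively collect include directories and source files
// from a dependency graph of manifest configs, guarding against cycles.
function resolveDeps( pkg, loadManifest, seen ) {
    seen = seen || {};
    var out = { 'include': [], 'src': [] };
    if ( seen[ pkg ] ) {
        return out; // already visited: avoid cycles and duplicates
    }
    seen[ pkg ] = true;
    var conf = loadManifest( pkg );
    out.include = out.include.concat( conf.include || [] );
    out.src = out.src.concat( conf.src || [] );
    ( conf.dependencies || [] ).forEach( function ( dep ) {
        var sub = resolveDeps( dep, loadManifest, seen );
        out.include = out.include.concat( sub.include );
        out.src = out.src.concat( sub.src );
    });
    return out;
}
```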


Dependency Resolution

The Problem

C code blocks in Markdown and @example blocks may reference headers from packages that are not listed as dependencies in the package's manifest.json. Consider @stdlib/math/base/special/cround: its usage block in README.md includes:

#include "stdlib/complex/float64/ctor.h"
#include "stdlib/complex/float64/real.h"
#include "stdlib/complex/float64/imag.h"

stdlib_complex128_t z = stdlib_complex128( -4.2, 5.5 );

stdlib_complex128_t out = stdlib_base_cround( z );

double re = stdlib_complex128_real( out );
// returns -4.0

double im = stdlib_complex128_imag( out );
// returns 6.0

But the examples task in cround/manifest.json only declares:

{
    "task": "examples",
    "dependencies": [
        "@stdlib/complex/float64/ctor",
        "@stdlib/complex/float64/reim",
        "@stdlib/math/base/special/round"
    ]
}

Missing from the manifest are @stdlib/complex/float64/real and @stdlib/complex/float64/imag; they are used in the usage block for demonstration but are not build dependencies. Without their include directories, gcc will fail with fatal error: stdlib/complex/float64/real.h: No such file or directory.

Proposed Solution: Enhanced Dependency Resolution

Both tools will implement an enhanced resolution pipeline that scans #include directives to discover missing dependencies:

graph TD
    A["Code block"] -->|"extractHeaderPackages()"| B["Inferred deps from #include<br/>e.g., @stdlib/complex/float64/real"]
    C["manifest.json"] -->|"collectManifestDependencies()"| D["Declared deps"]
    B --> E["missingDependencies()<br/>= inferred - declared"]
    D --> E
    E --> F["filterDependencies()<br/>Only keep resolvable stdlib pkgs"]
    F --> G["createTempManifest()<br/>Inject missing deps into temp copy"]
    G --> H["resolveManifest()<br/>Run library-manifest on augmented manifest"]
    H --> I["Full includes, sources, libs"]

Step-by-Step for cround

1. extractHeaderPackages(code) scans the usage block and infers:

["@stdlib/math/base/special/cround", "@stdlib/complex/float64/ctor",
 "@stdlib/complex/float64/real", "@stdlib/complex/float64/imag"]

2. collectManifestDependencies() reads the examples task from manifest.json:

["@stdlib/complex/float64/ctor", "@stdlib/complex/float64/reim",
 "@stdlib/math/base/special/round"]

3. missingDependencies() computes inferred - declared:

["@stdlib/complex/float64/real", "@stdlib/complex/float64/imag"]

4. createTempManifest() creates a temporary copy of manifest.json with the missing deps injected into the examples configuration:

{
    "task": "examples",
    "dependencies": [
        "@stdlib/complex/float64/ctor",
        "@stdlib/complex/float64/reim",
        "@stdlib/math/base/special/round",
        "@stdlib/complex/float64/real",
        "@stdlib/complex/float64/imag"
    ]
}

5. resolveManifest() would run library-manifest on the augmented manifest so that gcc receives -I flags for all five packages and compilation can succeed. The temporary manifest would then be deleted after resolution.
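Steps 1-3 could be sketched as below; the header-path-to-package mapping is a simplifying assumption (it treats every `stdlib/<path>.h` include as package `@stdlib/<path>`, ignoring nested header layouts):

```javascript
// Sketch: infer stdlib package names from `#include "stdlib/..."` lines
// and diff them against the manifest's declared dependencies.
function extractHeaderPackages( code ) {
    var re = /#include\s+"stdlib\/(.+)\.h"/g;
    var out = [];
    var m;
    while ( ( m = re.exec( code ) ) ) {
        out.push( '@stdlib/' + m[ 1 ] );
    }
    return out;
}

function missingDependencies( inferred, declared ) {
    return inferred.filter( function ( pkg ) {
        return declared.indexOf( pkg ) === -1;
    });
}
```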

Supported Annotation Types and Value Categories

The C doctesting framework will support validation of all // returns annotation patterns found across stdlib's C codebase. The following table shows every supported value category, its annotation syntax, how it is captured via printf, and how the comparison works.

Numeric Scalar Values

These are the most common annotations in C doctests. They cover all C numeric types.

| C Type | Example Annotation | Printf Format | Comparison |
| --- | --- | --- | --- |
| `double` | `// returns 1.0` | `printf( "%.17g", (double)y )` | Exact via `epsdiff()` |
| `double` (negative) | `// returns -1.0` | `printf( "%.17g", (double)y )` | Exact via `epsdiff()` |
| `float` | `// returns 1.0f` | `printf( "%.17g", (double)y )` | Strip `f` suffix, then exact compare |
| `int` / `int64_t` / etc. | `// returns 5` | `printf( "%.17g", (double)y )` | Cast to double, exact compare |
| `uint8_t` / `uint32_t` / etc. | `// returns 0` | `printf( "%.17g", (double)y )` | Cast to double, exact compare |
| `size_t` | `// returns 10` | `printf( "%.17g", (double)y )` | Cast to double, exact compare |

Recognized assignment types in regex: double, float, int, int8_t, int16_t, int32_t, int64_t, uint8_t, uint16_t, uint32_t, uint64_t, bool, size_t, unsigned, signed, long, short, char

// Example: integer annotation
int64_t N = stdlib_base_gcd( 48, 18 );
// returns 6

// Example: double annotation
double y = stdlib_base_abs( -3.14 );
// returns 3.14

// Example: float annotation
float y = stdlib_base_absf( -3.14f );
// returns 3.14f
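The assignment-detection step can be sketched as a single regular expression over the recognized C types listed above. The regex shape and name below are illustrative, not the final implementation:

```javascript
// Hypothetical sketch: detect a typed assignment whose result should be
// captured and compared against the following // returns annotation.
var RE_ASSIGNMENT = /^\s*(?:double|float|u?int(?:8|16|32|64)?_t|int|bool|size_t|unsigned|signed|long|short|char)\s+(\w+)\s*=/;

console.log( RE_ASSIGNMENT.test( 'double y = stdlib_base_abs( -3.14 );' ) ); // true
console.log( RE_ASSIGNMENT.test( 'int64_t N = stdlib_base_gcd( 48, 18 );' ) ); // true
console.log( RE_ASSIGNMENT.test( 'y = 3.0;' ) ); // false (no type keyword)
```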

Approximate Values

For functions where exact output depends on floating-point precision, approximate annotations match to the number of displayed decimal places.

| Annotation | Precision | Comparison |
| --- | --- | --- |
| `// returns ~0.11` | 2 decimal places | `roundn(actual, -2)` vs `roundn(0.11, -2)` |
| `// returns ~4.3333` | 4 decimal places | `roundn(actual, -4)` vs `roundn(4.3333, -4)` |
| `// returns ~2.0817f` | 4 decimal places | Strip `f`, then `roundn()` at matching precision |

// Example: approximate double
double y = stdlib_base_dists_levy_pdf( 2.0, 0.0, 1.0 );
// returns ~0.11

// Example: approximate float
float y = stdlib_base_dists_levy_pdff( 2.0f, 0.0f, 1.0f );
// returns ~0.11f
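The approximate comparison above can be sketched as follows. The `roundn` helper mirrors the semantics of `@stdlib/math/base/special/roundn` but is re-implemented here for self-containment; the parsing of the annotation string is an assumption:

```javascript
// Round `x` to the `-n`-th decimal place (mirrors roundn semantics).
function roundn( x, n ) {
    var s = Math.pow( 10, -n );
    return Math.round( x * s ) / s;
}

var annotation = '~0.11';
var expected = parseFloat( annotation.slice( 1 ) );   // 0.11
var places = annotation.split( '.' )[ 1 ].length;     // 2 displayed decimals
var actual = 0.11031885216933618;                     // e.g., printed by the compiled example

console.log( roundn( actual, -places ) === roundn( expected, -places ) );
// true
```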

Special Floating-Point Values

| Annotation | C Output | `parseOutputValue()` | `compareValues()` |
| --- | --- | --- | --- |
| `// returns NaN` | `nan` or `NaN` | `NaN` | `isnan(actual)` |
| `// returns Infinity` | `inf` or `INF` | `Infinity` | `actual === PINF` |
| `// returns -Infinity` | `-inf` or `-INF` | `-Infinity` | `actual === NINF` |

// Example: NaN
double y = stdlib_base_dists_levy_pdf( -1.0, 0.0, 1.0 );
// returns NaN

// Example: Infinity
double y = stdlib_base_dists_levy_pdf( 0.0, 0.0, 1.0 );
// returns Infinity
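The special-value parsing can be sketched as a small normalization step. The helper name follows the proposal, but this body is an illustrative assumption:

```javascript
// Sketch of parseOutputValue(): map C's textual special values to
// JavaScript numbers before comparison.
function parseOutputValue( str ) {
    var s = str.trim().toLowerCase();
    if ( s === 'nan' ) {
        return NaN;
    }
    if ( s === 'inf' || s === 'infinity' ) {
        return Infinity;
    }
    if ( s === '-inf' || s === '-infinity' ) {
        return -Infinity;
    }
    return parseFloat( str );
}

console.log( isNaN( parseOutputValue( 'nan' ) ) );          // true
console.log( parseOutputValue( 'inf' ) === Infinity );      // true
console.log( parseOutputValue( '-INF' ) === -Infinity );    // true
```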

Signed Zero

| Annotation | C Output | Handling |
| --- | --- | --- |
| `// returns 0.0` | `0` | `parseFloat("0")` → `0` |
| `// returns +0.0` | `0` | `parseFloat("+0.0")` → `0` |
| `// returns -0.0` | `-0` | `parseFloat("-0")` → `-0` |
| `// returns +0.0f` | `0` | Strip `f`, then compare |
| `// returns -0.0f` | `-0` | Strip `f`, then compare |

// Example: signed zero
double y = stdlib_base_copysign( 0.0, -1.0 );
// returns -0.0
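One detail worth noting: `parseFloat` preserves the sign of zero, but `===` does not distinguish `-0` from `+0`, so the signed-zero comparison needs `Object.is` (or an equivalent `1/x` sign check). A minimal sketch:

```javascript
// parseFloat preserves signed zero; Object.is distinguishes -0 from +0,
// while strict equality (===) would treat them as equal.
var expected = parseFloat( '-0.0' ); // -0
var actual = parseFloat( '-0' );     // as printed by printf( "%.17g", y )

console.log( Object.is( actual, expected ) );       // true
console.log( Object.is( parseFloat( '0' ), -0 ) );  // false
```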

Scientific Notation

| Annotation | Comparison |
| --- | --- |
| `// returns 1.0e-9` | `parseFloat("1.0e-9")` → exact compare |
| `// returns ~1.0e-9` | `toPrecision()` at matching significant digits |
| `// returns 6.123233995736766e-17` | Full-precision comparison |

// Example: scientific notation
double y = stdlib_base_sincos( 0.0 );
// returns 6.123233995736766e-17
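The approximate scientific-notation case can be sketched with `toPrecision()`; the significant-digit count here is assumed to be derived from the annotation's mantissa (e.g., `1.0` → 2 digits):

```javascript
// Compare at the number of significant digits displayed in the annotation.
var annotation = '~1.0e-9';
var expected = parseFloat( annotation.slice( 1 ) ); // 1e-9
var digits = 2;                                     // significant digits in '1.0'
var actual = 1.0000000001e-9;                       // e.g., printed by the compiled example

console.log( actual.toPrecision( digits ) === expected.toPrecision( digits ) );
// true
```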

Boolean Values

Boolean annotations map C's true/false (which are 1/0 as integers) to their annotation strings.

| Annotation | C Output (via printf) | Handling |
| --- | --- | --- |
| `// returns true` | `1` | If annotation is `"true"` and actual is `1` → pass |
| `// returns false` | `0` | If annotation is `"false"` and actual is `0` → pass |

// Example: boolean
bool b = stdlib_base_is_nan( 0.0/0.0 );
// returns true

bool b = stdlib_base_is_nan( 3.14 );
// returns false
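The boolean mapping above amounts to a two-branch check; the helper name below is illustrative:

```javascript
// Map boolean annotation strings onto C's 1/0 integer output.
function compareBoolean( annotation, actual ) {
    if ( annotation === 'true' ) {
        return actual === 1;
    }
    if ( annotation === 'false' ) {
        return actual === 0;
    }
    return false;
}

console.log( compareBoolean( 'true', 1 ) );  // true
console.log( compareBoolean( 'false', 1 ) ); // false
```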

Variable Mutation via Pointers (// varName =>)

Many C functions return values through output pointers rather than return statements. The framework will support a mutation annotation syntax, mirroring JS doctesting's // x => [2, 1] pattern:

// Example: pointer-based output
stdlib_base_cpolarf( z, &cabsf, &cphasef );
// cabsf => ~5.83
// cphasef => ~0.54

This would generate printf("__DOCTEST_RESULT__:%.17g\n", (double)cabsf) and printf("__DOCTEST_RESULT__:%.17g\n", (double)cphasef) after the function call, allowing each output variable to be validated independently.
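The instrumentation step for mutation annotations can be sketched as follows; the regex shape and function name are illustrative assumptions:

```javascript
// For each `// name => value` annotation, emit a printf that captures the
// named output variable for later comparison.
var RE_MUTATION = /^\s*\/\/\s*(\w+)\s*=>\s*(.+)$/;

function instrument( line ) {
    var m = RE_MUTATION.exec( line );
    if ( m === null ) {
        return null;
    }
    // m[ 1 ] is the variable name; m[ 2 ] is the expected value string.
    return 'printf( "__DOCTEST_RESULT__:%.17g\\n", (double)' + m[ 1 ] + ' );';
}

console.log( instrument( '// cabsf => ~5.83' ) );
// => printf( "__DOCTEST_RESULT__:%.17g\n", (double)cabsf );
```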

e.g., Prefix (Skip Annotation)

Annotations prefixed with e.g., indicate illustrative (not exact) values. The JS doctest system skips these, and the C framework will do the same:

double y = stdlib_base_random();
// e.g., returns ~0.7363

These annotations would be detected and excluded from validation; they would serve only as documentation hints.
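Detecting the e.g., prefix reduces to a small pattern check; the regex below is an illustrative sketch:

```javascript
// Annotations prefixed with `e.g.,` are illustrative and must be skipped.
var RE_SKIP = /^\s*\/\/\s*e\.g\.,\s*returns/;

console.log( RE_SKIP.test( '// e.g., returns ~0.7363' ) ); // true
console.log( RE_SKIP.test( '// returns 3.14' ) );          // false
```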

Summary of Supported Annotation Patterns

The proposed doctest system will support a variety of annotation patterns commonly used in C examples to describe expected outputs. These annotations will be parsed and validated during doctest execution.

| Pattern | Example |
| --- | --- |
| Exact double | `// returns 1.0` |
| Exact float | `// returns 1.0f` |
| Integer | `// returns 5` |
| Approximate double | `// returns ~0.11` |
| Approximate float | `// returns ~0.11f` |
| NaN | `// returns NaN` |
| Infinity / -Infinity | `// returns Infinity` |
| Signed zero | `// returns -0.0` |
| Scientific notation | `// returns 1.0e-9` |
| Boolean | `// returns true` |
| Variable mutation | `// cabsf => ~5.83` |
| Illustrative (skipped during validation) | `// e.g., returns ~0.73` |

The implementation will initially support the annotation patterns listed above. Based on further discussions with maintainers and evolving project needs, support for additional annotation formats or validation strategies can be incorporated into the doctest framework.


Doctest Skipping Mechanisms

In practice, not all C code examples are suitable for automated doctesting. Some examples may:

  • Contain placeholder code (e.g., ...)
  • Depend on external environments (e.g., N-API or platform-specific code)
  • Be intended for illustration rather than execution

To handle such cases gracefully, the proposed system introduces explicit skip mechanisms for both C source files and Markdown documentation. These mechanisms are designed to be consistent with existing JavaScript doctesting behavior in stdlib.

1. Source File: Block-Level Skipping

Within Doxygen @example blocks, a comment-based marker will allow skipping individual examples:

/**
* @example
* // doctest-disable
* double y = foo( 1.0 );
* // returns 2.0
*/

Behavior:

  • The doctest runner detects // doctest-disable
  • The corresponding example block is skipped during execution

This enables fine-grained control, allowing developers to exclude specific examples without affecting others in the same file.
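Block-level skip detection amounts to a single multiline pattern check over the extracted example body; the regex shape is an assumption:

```javascript
// Skip an extracted @example body if it contains a `// doctest-disable`
// marker on its own line.
var RE_DOCTEST_DISABLE = /^\s*\/\/\s*doctest-disable\s*$/m;

var example = '// doctest-disable\ndouble y = foo( 1.0 );\n// returns 2.0\n';
console.log( RE_DOCTEST_DISABLE.test( example ) ); // true

var other = 'double y = foo( 1.0 );\n// returns 2.0\n';
console.log( RE_DOCTEST_DISABLE.test( other ) );   // false
```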

2. Source File: File-Level Skipping

A file-level pragma will allow disabling doctesting for all examples in a source file:

/**
* @doctest-disable-file
*/

Behavior:

  • All @example blocks in the file are skipped
  • Useful for files that:
    • Contain unsupported constructs
    • Depend on external build systems (e.g., N-API)
    • Are not intended for standalone execution

3. Markdown: Block-Level Skipping

Markdown C doctesting will support block-level skipping using:

<!-- run-disable -->
CODEBLOCK

Some edge cases are anticipated, and I intend to coordinate with mentors during the community bonding period to determine appropriate handling strategies, adapting the proposed architecture as needed.


Make Integration

Existing Targets

The proposed C doctesting framework would integrate with stdlib's Makefile build system through two files:

tools/make/lib/examples/doctest_c.mk:

  • make doctest-c: finds all C source files containing @example and runs the doctest/c/ CLI on each
  • make doctest-c SOURCES_FILTER=".*/math/base/special/abs/.*": filters by path pattern
  • make doctest-c-files FILES='/foo/src/main.c /bar/src/main.c': runs on specific files

tools/make/lib/examples/markdown_c.mk:

  • make markdown-examples-c: finds all README.md files and runs remark with the remark-run-c-examples plugin
  • make markdown-examples-c MARKDOWN_FILTER=".*/math/base/special/abs/.*": filters by path pattern
  • make markdown-examples-c-files FILES='/foo/README.md': runs on specific files

How Make Targets Work

graph LR
    subgraph "make doctest-c"
        DC1["find C source files"] --> DC2["grep for @example"]
        DC2 --> DC3["node doctest/c/bin/cli --compiler gcc"]
    end

    subgraph "make markdown-examples-c"
        MC1["find README.md files"] --> MC2["remark with remark-run-c-examples plugin"]
    end

Both targets would support:

  • SOURCES_FILTER / MARKDOWN_FILTER — regex path filtering
  • FILES — explicit file list mode (useful for CI on changed files only)
  • Sequential file processing with exit-on-first-failure (|| exit 1)

CI Integration

Proposed CI Workflow

The C doctesting should integrate with stdlib's existing CI pipeline to catch documentation regressions on every pull request.

  1. Changed-file detection: Use git diff --name-only to identify modified C source files and README.md files, then pass them to the -files Make targets
  2. Parallel execution: Source file doctests and Markdown doctests can run in parallel CI jobs
  3. Error reporting: Both tools produce detailed error messages including:
    • File path and line number
    • Expected vs actual values
    • The full annotation string
    • Compilation errors (if any)
  4. Exit codes: Both Make targets use || exit 1 to fail fast on first error, ensuring CI catches the failure

Pre-Commit Hook Integration

stdlib's Git pre-commit hook (tools/git/hooks/pre-commit) runs a suite of static analysis checks on every commit and generates a pre_commit_static_analysis_report that is appended to pull requests. Currently, JavaScript doctesting is enforced at commit time through ESLint: the stdlib/doctest rule is enabled in .eslintrc.examples.js, and the pre-commit hook runs ESLint on all changed examples/*.js files via the lint_javascript_examples task. This ensures that JS // returns annotations are validated before code is committed.

To maintain consistency between JavaScript and C doctesting, the C doctest tools will be integrated into the same pre-commit pipeline. Two new tasks will be added to the hook:

task: doctest_c_src
status: passed

task: doctest_c_markdown
status: passed

The flow for each task:

  1. doctest_c_src: filters changed files for C source files (*/src/*.c), excluding N-API packages (*/napi/*). If any changed source files contain @example blocks, it runs the doctest/c/ CLI on those files and reports passed, failed, or na (if no relevant files changed).

  2. doctest_c_markdown: filters changed files for README.md files. If any changed READMEs contain <section class="c"> blocks, it runs remark with remark-run-c-examples on those files and reports passed, failed, or na.

This ensures that C documentation examples are validated with the same rigor as JavaScript examples, catching annotation drift at commit time rather than waiting for CI.

graph TD
    A["PR Submitted"] --> B["CI Pipeline"]
    B --> C["Detect Changed Files<br/>(git diff)"]
    C --> D{"Changed file types?"}
    D -->|"*.c source files"| E["make doctest-c-files<br/>FILES='changed .c files'"]
    D -->|"README.md"| F["make markdown-examples-c-files<br/>FILES='changed READMEs'"]
    E --> G["Report Results"]
    F --> G
    G -->|"All passed"| H[" CI Pass"]
    G -->|"Any failed"| I[" CI Fail<br/>with detailed error"]

Full Suite Runs

  • make doctest-c Runs all C source file doctests
  • make markdown-examples-c Runs all Markdown C examples

These full runs help catch issues that might not surface in changed-file-only runs (e.g., transitive dependency changes affecting downstream packages).

Preliminary Work and Validation

To validate the feasibility of the proposed approach, I implemented several standalone scripts that prototype key parts of the C doctesting workflow. These experiments include extracting example code, generating test programs, and compiling and executing them in an isolated environment with dependency resolution.

Through this process, I was able to verify that the core ideas behind the proposed architecture, including annotation parsing, test program generation, and output validation, work reliably in practice.

Using these prototypes, I was also able to identify real inconsistencies in the stdlib codebase, such as incorrect // returns annotations and C code blocks that failed to compile or execute correctly. I raised multiple pull requests to fix these issues, which were successfully merged (some examples):

As part of the proposed project, once the doctesting framework is integrated, we can also introduce a dedicated tracking issue to systematically identify and fix such inconsistencies across the repository.

Complete-repository C source file doctest error log:
https://drive.google.com/file/d/1wuXgV06lRakNuuBvjZuftk1MfFFLVn4R/view

Draft PR:
stdlib-js/stdlib#11106

Potential Challenges

1. Task-specific configuration gaps in manifest.json

The stdlib build system resolves C dependencies through @stdlib/utils/library-manifest, which requires an exact task match (e.g., "task": "examples") when resolving transitive dependencies. During investigation of C benchmark build failures, it was discovered that some packages only define build configurations in their manifest.json and lack examples or benchmark task entries. When the doctest resolver requests dependencies under task: "examples", these missing configurations cause resolution failures even though the same headers and source files are needed.

This issue was discussed with Athan Reines during office hours meetings and on GitHub (PR #10235). The agreed-upon solution is to add a base configuration to all manifest.json files across the repository, which would serve as a fallback when a task-specific configuration is not present. This will be addressed by raising a tracking issue, after which the necessary manifest.json updates can be made repository-wide. Once resolved, the C doctesting tools will be able to reliably resolve dependencies for all packages without task-matching failures.

2. Extending annotation support to complex data types

The current proposal focuses on scalar numeric values, booleans, special floating-point values, and pointer-based mutation, which covers all of the // returns annotations found in stdlib's C documentation today. This scope was determined after consulting with Athan during office hours meetings and reviewing existing annotation patterns across the codebase.

Extending validation to more complex data types such as arrays, ndarrays, or struct outputs would require significant changes to both the test program generation (e.g., printing array elements) and the comparison logic (e.g., element-wise validation). However, the current convention for writing C doctests in stdlib does not involve array or ndarray return annotations; these are typically demonstrated through printf loops in example blocks rather than // returns annotations. Once the basic architecture proposed here is in place, it can be extended to support additional annotation formats if the need arises, but the current system will be more than sufficient for validating all existing C README usage blocks and source file @example sections.

3. Skipping Node-API related examples

Based on the Zulip discussion, the current proposal skips @example blocks in Node-API source files (e.g., @stdlib/assert/napi/*, @stdlib/napi/*) because they depend on Node-API types and functions (such as napi_env, napi_value, and napi_status) that require Node.js headers and a runtime environment, whereas the doctesting framework compiles examples as standalone C programs using only the system compiler and stdlib headers. Support could be added in the future by detecting Node-API usage (via #include <node_api.h> or napi_-prefixed identifiers), compiling examples as Node.js native addons linked against the Node-API headers, and executing them through a small JavaScript driver that loads the generated .node module and captures results, all within the proposed modular pipeline and without major architectural changes.

Why this project?

I’m particularly excited about this project because C is one of the languages I have the most experience with, and I enjoy working with low-level systems. I also have a good understanding of stdlib’s current C build system for native code, which makes me comfortable contributing in this area.
Additionally, I have taken a Compiler Design course at university where I worked on parsing text into tokens and using context-free grammars to build a simple compiler. This experience with parsing and language tooling strongly motivates me to work on a project like C doctesting, as it closely relates to those concepts.

Compiler Design Course Transcript - https://drive.google.com/file/d/1eyRzpDpx2ldLuSI9IHzoYN3Q_3lzfZjW/view?usp=sharing

Qualifications

Familiarity with stdlib Build and Tooling Infrastructure

Through my prior contributions to the stdlib repository, I have developed familiarity with several components of the project's build system and repository tooling. Working on these issues required understanding how dependency resolution, repository linting, and CI validation operate across the multi-package stdlib codebase.

Dependency Resolution via @stdlib/utils/library-manifest

While investigating issues related to C benchmark builds, I analyzed how the stdlib build system resolves dependencies using @stdlib/utils/library-manifest. During this process, I identified a failure scenario that occurred when compiling benchmarks for a dependent package (blas/base/zgemv, PR #10237). The build failed because a dependency (complex/float64/base/assert/is_equal) did not define a configuration for the benchmark task in its manifest.json.

Since the resolver requires an exact task match when resolving dependencies, the absence of a benchmark configuration prevented the dependency’s include directories from being propagated during benchmark builds. As a result, required headers were not available to the compiler.

I discussed this issue and potential solutions with Athan Reines, and the investigation helped clarify how task-specific configurations affect dependency resolution in the stdlib build system. The discussion is available here:

The insights gained from this investigation informed the dependency-resolution strategy proposed in this project.

Repository Linting and package.json Validation

The stdlib repository enforces strict structural validation of package.json files through automated CI lint checks. While examining repository metadata, I scanned all package.json files in the codebase and compared their top-level keys with the reference specification maintained in lib/node_modules/@stdlib/_tools/package-json/standardize/lib/keys.json.

This analysis revealed that the private key was being used in several packages but was missing from the reference specification. After discussing the issue with Athan on Zulip, it was confirmed that the key is intentionally used and should be included in the reference to prevent CI lint failures.

I submitted a pull request updating the reference so that the linting system correctly recognizes the private field as a valid top-level key:

Summary

Through these contributions, I have become familiar with the stdlib package structure, the task-based dependency resolution system used by library-manifest, and the repository’s automated linting and CI validation workflows. This familiarity will help ensure that the proposed C doctesting framework integrates cleanly with stdlib’s existing tooling and build infrastructure.

Prior art

I will aim to keep the C doctesting architecture as close as possible to the existing JavaScript doctesting system by reusing components and maintaining consistency. Beyond this, there is limited prior work available, as C doctesting is quite specific to stdlib and not widely explored elsewhere.

Commitment

I do not have any major commitments during the GSoC period apart from this program. I will be on my summer break, which means I will not have any academic obligations during this time.

I can comfortably dedicate around 30–35 hours per week to the project (350 hrs) and am willing to increase this if required to meet project goals. I will be consistently available for communication via email, Zulip, and other online platforms to ensure smooth collaboration with mentors.

Schedule

Community Bonding Period

  • Discuss edge cases, design decisions, and annotation syntax with mentors
  • Iterate on the architecture in collaboration with mentors and finalize the design

Week 1–2: Compile and Run Setup

Establish the foundational infrastructure for compiling and executing C code from Node.js:

  • Implement temporary directory management: create temp dirs for compilation artifacts, write generated .c sources, clean up after execution
  • Implement buildCompileCommand() to assemble gcc commands from compiler flags, include directories, source files, libraries, and library paths
  • Implement the compile-and-execute pipeline: write source → gcc compile via child_process.exec → run binary → capture stdout/stderr
  • Implement basic manifest.json resolution using @stdlib/utils/library-manifest to resolve include dirs, transitive source files, and link libraries
  • Add N-API source file filtering (exclude sources requiring Node.js headers)
  • Test the compile+run pipeline end-to-end on simple packages like math/base/special/abs

Week 3–5: Remark Plugin (remark-run-c-examples)

With the compile+run infrastructure ready, build the Markdown doctesting plugin:

  • Implement the remark plugin skeleton: AST walker, <section class="c"> detection, block classification (usage vs example)
  • Implement preamble caching (cache first usage block, prepend to subsequent blocks)
  • Implement compileAndRun() for example blocks (full programs) compile and execute using the week 1–2 infrastructure
  • Implement generateUsageTestProgram() for usage blocks wrap snippets in main(), extract #include directives, compile and run
  • Implement // returns annotation detection with printf("__DOCTEST_RESULT__:...") instrumentation for result capture
  • Integrate compareValues() and createAnnotationValue() for result validation
  • Implement parseOutputValue() to handle NaN, Infinity, -Infinity, signed zero
  • Add boolean annotation support (true/false → 1/0 mapping)
  • Add f suffix stripping for single-precision annotations
  • Add e.g., prefix detection to skip illustrative annotations
  • Implement enhanced dependency resolution: extractHeaderPackages() to scan #include "stdlib/..." directives, detect example-only deps missing from manifest.json, augment manifest via createTempManifest(), and resolve via library-manifest
  • Write markdown_c.mk Makefile targets for make markdown-examples-c and make markdown-examples-c-files
  • Test against stats/base/dists/levy/pdf, math/base/special/abs, cround, and similar packages

Week 6–8: Source File Doctesting Tool (doctest/c/)

  • Implement extractExamples() to parse Doxygen /** ... */ comments for @example blocks
  • Implement generateTestProgram() to wrap examples in main() with printf instrumentation
  • Implement resolveIncludes() using manifest.json resolution and the enhanced #include-scanning dependency resolution developed in weeks 3–5
  • Integrate compareValues() and createAnnotationValue() for result validation
  • Implement all value category support: numeric, boolean, float suffix, special values, scientific notation
  • Implement CLI tool (bin/cli) with options for compiler, flags, files, and verbose output
  • Write doctest_c.mk Makefile targets for make doctest-c and make doctest-c-files
  • Test on math/base/special/abs, math/strided/special/dmskdeg2rad, and similar packages

Week 9: Variable Mutation Support

  • Design and implement the // varName => value annotation syntax for pointer-based outputs
  • Update generateUsageTestProgram() and generateTestProgram() to detect => annotations and emit printf for named variables
  • Test against packages like cpolarf where results go through output pointers
  • Ensure backward compatibility // returns continues to work unchanged

Week 10–12: Test Suites, CI Integration, and Documentation

  • Write comprehensive test suite for remark-run-c-examples plugin:
    • Markdown fixtures covering passing, failing, approximate values, special values, booleans, float suffix, scientific notation, preamble caching, configuration comments, mutation annotations, and e.g. prefix
    • Test compilation error handling and missing dependency scenarios
  • Write comprehensive test suite for doctest/c/ tool:
    • C source fixtures with @example blocks covering all annotation types
    • Test multiple examples per file, manifest resolution, and error handling
  • Integrate C doctesting targets into the CI pipeline with changed-file detection
  • Run make doctest-c and make markdown-examples-c across representative parts of the codebase
  • Write user-facing documentation including annotation reference and contributing guide updates

Final Week: Buffer and Final Polishing

  • Address mentor feedback, fix edge cases, and finalize documentation and overall stability.
  • Allocate buffer time to handle any unexpected issues or challenges that may arise.

Post-GSoC Plans

I plan to continue contributing to stdlib beyond the GSoC period. If the need arises, I will further extend and improve the C doctesting framework, building on the work done during the program.

Additionally, I intend to continue my contributions to the @stdlib/stats/ namespace, expanding functionality and improving existing modules.

Related issues

#96

Checklist

  • I have read and understood the Code of Conduct.
  • I have read and understood the application materials found in this repository.
  • I understand that plagiarism will not be tolerated, and I have authored this application in my own words.
  • I have read and understood the patch requirement which is necessary for my application to be considered for acceptance.
  • I have read and understood the stdlib showcase requirement which is necessary for my application to be considered for acceptance.
  • The issue name begins with [RFC]: and succinctly describes your proposal.
  • I understand that, in order to apply to be a GSoC contributor, I must submit my final application to https://summerofcode.withgoogle.com/ before the submission deadline.
