Merged
2 changes: 1 addition & 1 deletion Makefile
@@ -83,7 +83,7 @@ webassembly: $(WEBASSEMBLY_MICROPYTHON)


check_unix: $(UNIX_MICROPYTHON)
$(UNIX_MICROPYTHON) tests/test_all.py test_iir,test_fft,test_arrayutils
$(UNIX_MICROPYTHON) tests/test_all.py test_iir,test_fft,test_arrayutils,test_linreg
# TODO: enable more modules

rp2: $(PORT_DIR)
78 changes: 78 additions & 0 deletions docs/getting_started_browser.rst
@@ -0,0 +1,78 @@

.. Places parent toc into the sidebar

:parenttoc: True

.. _getting_started_browser:

===========================
Getting started for browser
===========================

.. currentmodule:: emlearn-micropython

emlearn-micropython runs on most platforms that MicroPython does.
This includes running in a web browser, using the `Webassembly port <https://github.com/micropython/micropython/tree/master/ports/webassembly>`_ of MicroPython.
The browser integration is enabled by `PyScript <https://docs.pyscript.net/>`_.

Prerequisites
===========================

A web browser and a file editor.
A CPython 3.10+ installation is also recommended, to act as a local web server.


emlearn-micropython build for browser
=====================================

We publish a pre-built MicroPython for each release.
This includes emlearn-micropython as external C modules.

Two files are needed: ``micropython.mjs`` and ``micropython.wasm``.
They can be downloaded from:

- https://raw.githubusercontent.com/emlearn/emlearn-micropython/refs/heads/gh-pages/builds/latest/ports/webassembly/micropython.mjs
- https://raw.githubusercontent.com/emlearn/emlearn-micropython/refs/heads/gh-pages/builds/latest/ports/webassembly/micropython.wasm

Replace ``latest`` with a release tag to pin a specific version (``0.11.0`` or later).
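If you prefer scripting the download, a small CPython helper can fetch both files. This helper is illustrative and not part of emlearn-micropython; the URL pattern follows the links above.

```python
import os
import urllib.request

# URL pattern for the prebuilt WebAssembly artifacts (see links above).
BASE = ("https://raw.githubusercontent.com/emlearn/emlearn-micropython/"
        "refs/heads/gh-pages/builds/{version}/ports/webassembly/{name}")
FILES = ("micropython.mjs", "micropython.wasm")

def build_urls(version="latest"):
    # "latest" tracks the newest build; pass a tag such as "0.11.0" to pin
    return [BASE.format(version=version, name=name) for name in FILES]

def download(version="latest", dest="."):
    # Fetch both build artifacts into dest (network access required)
    for url in build_urls(version):
        target = os.path.join(dest, url.rsplit("/", 1)[-1])
        urllib.request.urlretrieve(url, target)
```

Calling ``download("0.11.0")`` would place both files in the current directory, ready to serve alongside ``index.html``.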


Set up the web page
==================================

Create an ``index.html`` page with the following contents:

.. literalinclude:: helloworld_browser/index.html
   :language: html

Make sure that you have ``micropython.mjs`` and ``micropython.wasm`` in the same directory.

Try it out
========================

Start an HTTP server to serve the files:

.. code-block:: console

   python -m http.server

Open your browser at http://localhost:8000

The MicroPython code using ``emlearn_linreg`` from emlearn-micropython should automatically run when you load the page.
The web page should show output like:

``Input: [10.0, 75.0], prediction: 16.96 C``
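The on-device training that ``emlearn_linreg`` performs can be illustrated with a plain-Python gradient-descent sketch of the same toy problem. This is an illustrative re-implementation, not the actual emlearn code; the feature standardization and learning rate are assumptions, so the fitted numbers will differ slightly from the output above.

```python
# Pure-Python sketch of linear regression via batch gradient descent,
# mirroring the browser example. NOT the actual emlearn_linreg implementation.

def standardize(col):
    """Return (normalized column, mean, std)."""
    m = sum(col) / len(col)
    s = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5 or 1.0
    return [(v - m) / s for v in col], m, s

# Same toy dataset: (hour_of_day, humidity) -> temperature
rows = [(8, 80), (12, 60), (16, 55), (20, 70), (0, 85)]
y = [15.2, 18.4, 20.1, 16.8, 13.5]

hours, h_mean, h_std = standardize([r[0] for r in rows])
hums, u_mean, u_std = standardize([r[1] for r in rows])
X = list(zip(hours, hums))

w = [0.0, 0.0]
b = 0.0
lr = 0.1  # assumed learning rate; safe because the features are standardized

def mse():
    return sum((w[0]*x1 + w[1]*x2 + b - t) ** 2
               for (x1, x2), t in zip(X, y)) / len(y)

initial = mse()
for _ in range(2000):
    g_w1 = g_w2 = g_b = 0.0
    for (x1, x2), t in zip(X, y):
        err = w[0]*x1 + w[1]*x2 + b - t
        g_w1 += 2 * err * x1 / len(y)
        g_w2 += 2 * err * x2 / len(y)
        g_b += 2 * err / len(y)
    w[0] -= lr * g_w1
    w[1] -= lr * g_w2
    b -= lr * g_b

# Predict for (10am, 75% humidity), applying the same standardization
x1 = (10 - h_mean) / h_std
x2 = (75 - u_mean) / u_std
pred = w[0]*x1 + w[1]*x2 + b
print(f"MSE: {initial:.2f} -> {mse():.2f}, prediction for (10, 75): {pred:.2f} C")
```

The gradients are those of the mean squared error; standardizing the inputs keeps a single learning rate stable for both features.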


Serving from device
====================================

A MicroPython device with networking support (such as an ESP32)
can serve the browser frontend to clients.

We recommend using the excellent `MicroDot web framework <https://microdot.readthedocs.io/en/latest/>`_.

To work offline, download PyScript and the MicroPython build files to your PC, copy them to the device, and update the HTML to use the local paths. See the `PyScript offline <https://docs.pyscript.net/2026.3.1/user-guide/offline/#getting-micropython>`_ documentation for more information.
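A minimal MicroDot sketch for serving the page and its assets might look like the following. The filenames are hypothetical, and serving arbitrary paths as shown is only suitable for trusted local networks.

```python
# Runs on the MicroPython device. Assumes index.html, micropython.mjs,
# micropython.wasm and the offline PyScript files are on the device filesystem.
from microdot import Microdot, send_file

app = Microdot()

@app.route('/')
def index(request):
    return send_file('index.html')

@app.route('/<path:path>')
def assets(request, path):
    # Serve the build artifacts and PyScript assets alongside the page.
    # Note: no path sanitization here; do not expose this beyond a trusted network.
    return send_file(path)

app.run(port=80)
```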



47 changes: 47 additions & 0 deletions docs/helloworld_browser/index.html
Original file line number Diff line number Diff line change
@@ -0,0 +1,47 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>emlearn-micropython in browser with PyScript</title>
<link rel="stylesheet" href="https://pyscript.net/releases/2026.3.1/core.css">
<script type="module" src="https://pyscript.net/releases/2026.3.1/core.js"></script>
</head>
<body>
<!-- Normally this belongs in a separate config file; inline mpy-config is used here for brevity. -->
<mpy-config>
interpreter = "/micropython.mjs"
</mpy-config>
<script type="mpy">
from pyscript import document

import emlearn_linreg
import array

# Predict temperature from (hour_of_day, humidity)
# y = 0.5*hour - 0.1*humidity + 15 + noise
X = array.array('f', [
8, 80, # 8am, 80% humidity
12, 60, # noon
16, 55, # 4pm
20, 70, # 8pm
0, 85, # midnight
])
y = array.array('f', [15.2, 18.4, 20.1, 16.8, 13.5])

model = emlearn_linreg.new(2, 0.01, 0.5, 0.0001)
emlearn_linreg.train(model, X, y, max_iterations=100, tolerance=1e-6)

mse = model.score_mse(X, y)
print(f"MSE: {mse:.4f}")

new_sample = array.array('f', [10, 75]) # 10am, 75% humidity
out = model.predict(new_sample)
print(f"Predicted temperature: {out:.1f} C")

document.body.append(f"Input: {list(new_sample)}, prediction: {out:.2f} C")


</script>
</body>
</html>
1 change: 1 addition & 0 deletions docs/user_guide.rst
@@ -16,6 +16,7 @@ User Guide

getting_started_host.rst
getting_started_device.rst
getting_started_browser.rst
support.rst
native_modules.rst
external_modules.rst
Binary file added examples/datasets/california/X_test.npy
Binary file not shown.
Binary file added examples/datasets/california/X_train.npy
Binary file not shown.
175 changes: 175 additions & 0 deletions examples/datasets/california/prepare.py
@@ -0,0 +1,175 @@
#!/usr/bin/env python3
"""
Download and preprocess California housing dataset for MicroPython testing.
Saves scaled train/test splits as .npy files.
"""

import os
import time

import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


def prepare_california_housing_data(data_dir, sample=None):
"""Download, preprocess and save California housing dataset."""

print("Downloading California housing dataset...")
# Load the dataset
housing = fetch_california_housing()
X, y = housing.data, housing.target

    if sample is not None:
        rng = np.random.default_rng(42)  # fixed seed for reproducible subsampling
        indices = rng.choice(X.shape[0], size=sample, replace=False)
        X = X[indices]
        y = y[indices]

print(f"Dataset shape: X={X.shape}, y={y.shape}")
print(f"Features: {housing.feature_names}")
print(f"Target: median house value in hundreds of thousands of dollars")

# Split into train/test (80/20)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)

print(f"Train set: X={X_train.shape}, y={y_train.shape}")
print(f"Test set: X={X_test.shape}, y={y_test.shape}")

# Scale the features (standardization)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

print("\nScaling applied:")
print(f"Feature means: {scaler.mean_}")
print(f"Feature stds: {scaler.scale_}")

# Convert to float32 for MicroPython compatibility
X_train_scaled = X_train_scaled.astype(np.float32)
X_test_scaled = X_test_scaled.astype(np.float32)
y_train = y_train.astype(np.float32)
y_test = y_test.astype(np.float32)

# Save as .npy files
np.save(os.path.join(data_dir, 'X_train.npy'), X_train_scaled)
np.save(os.path.join(data_dir, 'X_test.npy'), X_test_scaled)
np.save(os.path.join(data_dir, 'y_train.npy'), y_train)
np.save(os.path.join(data_dir, 'y_test.npy'), y_test)

print("\nSaved files:")
print(f"X_train.npy: {X_train_scaled.shape} float32")
print(f"X_test.npy: {X_test_scaled.shape} float32")
print(f"y_train.npy: {y_train.shape} float32")
print(f"y_test.npy: {y_test.shape} float32")

# Print some statistics for verification
print("\nData statistics:")
print(f"X_train range: [{X_train_scaled.min():.3f}, {X_train_scaled.max():.3f}]")
print(f"y_train range: [{y_train.min():.3f}, {y_train.max():.3f}]")
print(f"y_train mean: {y_train.mean():.3f}")

return X_train_scaled, X_test_scaled, y_train, y_test



def load_data(data_dir):
"""Load the preprocessed California housing data."""
print("Loading data...")
X_train = np.load(os.path.join(data_dir, 'X_train.npy'))
X_test = np.load(os.path.join(data_dir, 'X_test.npy'))
y_train = np.load(os.path.join(data_dir, 'y_train.npy'))
y_test = np.load(os.path.join(data_dir, 'y_test.npy'))

print(f"Train set: X={X_train.shape}, y={y_train.shape}")
print(f"Test set: X={X_test.shape}, y={y_test.shape}")
print(f"Data types: X={X_train.dtype}, y={y_train.dtype}")

return X_train, X_test, y_train, y_test

def test_elasticnet_configurations(data_dir):
"""Test different ElasticNet configurations to find good baselines."""

X_train, X_test, y_train, y_test = load_data(data_dir)

# Test configurations: (alpha, l1_ratio, description)
configs = [
(0.0, 0.0, "No regularization (OLS)"),
(0.01, 0.0, "Ridge (alpha=0.01)"),
(0.01, 1.0, "LASSO (alpha=0.01)"),
(0.01, 0.5, "ElasticNet (alpha=0.01, l1_ratio=0.5)"),
(0.001, 0.5, "ElasticNet (alpha=0.001, l1_ratio=0.5)"),
(0.1, 0.5, "ElasticNet (alpha=0.1, l1_ratio=0.5)"),
]

print("\n" + "="*70)
print("ElasticNet Configuration Comparison")
print("="*70)
print(f"{'Configuration':<35} {'Train MSE':<12} {'Test MSE':<12} {'R²':<8} {'Time':<8}")
print("-"*70)

results = []

for alpha, l1_ratio, description in configs:
start_time = time.time()

# Create and train model
if alpha == 0.0:
# Use regular linear regression for no regularization
from sklearn.linear_model import LinearRegression
model = LinearRegression()
else:
model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=2000, random_state=42)

model.fit(X_train, y_train)

# Make predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)

# Calculate metrics
train_mse = mean_squared_error(y_train, y_train_pred)
test_mse = mean_squared_error(y_test, y_test_pred)
test_r2 = r2_score(y_test, y_test_pred)

elapsed_time = time.time() - start_time

print(f"{description:<35} {train_mse:<12.6f} {test_mse:<12.6f} {test_r2:<8.3f} {elapsed_time:<8.3f}")

results.append({
'config': description,
'alpha': alpha,
'l1_ratio': l1_ratio,
'train_mse': train_mse,
'test_mse': test_mse,
'r2': test_r2,
'time': elapsed_time,
'model': model
})

return results



def main():

here = os.path.dirname(__file__)
data_dir = here

# Prepare the data
prepare_california_housing_data(data_dir, sample=4000)

# Test different configurations
results = test_elasticnet_configurations(data_dir)



if __name__ == "__main__":
main()

Binary file added examples/datasets/california/y_test.npy
Binary file not shown.
Binary file added examples/datasets/california/y_train.npy
Binary file not shown.