Commit edb629f (1 parent: 6a45988)

add mermaid diagram + change links

1 file changed: README.md (82 additions, 32 deletions)
@@ -15,10 +15,12 @@ A unified interface for optimization algorithms and experiments in Python.
 </h3>
 
 <p align="center">
-<a href="https://github.com/SimonBlanke/Hyperactive/actions"><img src="https://img.shields.io/github/actions/workflow/status/SimonBlanke/Hyperactive/test.yml?style=flat-square&label=tests" alt="Tests"></a>
-<a href="https://codecov.io/gh/SimonBlanke/Hyperactive"><img src="https://img.shields.io/codecov/c/github/SimonBlanke/Hyperactive?style=flat-square" alt="Coverage"></a>
+<a href="https://github.com/SimonBlanke/Hyperactive/actions"><img src="https://img.shields.io/github/actions/workflow/status/SimonBlanke/Hyperactive/test.yml?style=for-the-badge&logo=githubactions&logoColor=white&label=tests" alt="Tests"></a>
+<a href="https://codecov.io/gh/SimonBlanke/Hyperactive"><img src="https://img.shields.io/codecov/c/github/SimonBlanke/Hyperactive?style=for-the-badge&logo=codecov&logoColor=white" alt="Coverage"></a>
 </p>
 
+<br>
+
 <table align="center">
 <tr>
 <td align="right"><b>Documentation</b></td>
@@ -57,10 +59,9 @@ Designed for hyperparameter tuning, model selection, and black-box optimization.
 <p>
 <a href="https://www.linkedin.com/company/german-center-for-open-source-ai"><img src="https://img.shields.io/badge/LinkedIn-Follow-0A66C2?style=flat-square&logo=linkedin" alt="LinkedIn"></a>
 <a href="https://discord.gg/7uKdHfdcJG"><img src="https://img.shields.io/badge/Discord-Chat-5865F2?style=flat-square&logo=discord&logoColor=white" alt="Discord"></a>
-<a href="https://github.com/sponsors/SimonBlanke"><img src="https://img.shields.io/badge/Sponsor-EA4AAA?style=flat-square&logo=githubsponsors&logoColor=white" alt="Sponsor"></a>
 </p>
 
----
+<br>
 
 ## Installation
 
@@ -84,7 +85,7 @@ pip install hyperactive[all_extras]  # Everything including Optuna
 
 </details>
 
----
+<br>
 
 ## Key Features
 
@@ -119,7 +120,7 @@ pip install hyperactive[all_extras]  # Everything including Optuna
 </tr>
 </table>
 
----
+<br>
 
 ## Quick Start
 
@@ -154,24 +155,44 @@ print(f"Best params: {best_params}")
 Best params: {'x': 0.0, 'y': 0.0}
 ```
 
----
+<br>
 
 ## Core Concepts
 
-```
-EXPERIMENT-BASED ARCHITECTURE
-
-┌──────────────┐    ┌──────────────┐    ┌──────────────┐
-│  Optimizer   │───>│   Search     │───>│  Experiment  │
-│ (Algorithm)  │    │   Space      │    │ (Objective)  │
-└──────────────┘    └──────────────┘    └──────────────┘
-       │                   │                   │
-       │                   │                   │
-       v                   v                   v
-┌────────────────────────────────────────────────────┐
-│                  Best Parameters                   │
-│          optimizer.solve() -> best_params          │
-└────────────────────────────────────────────────────┘
+Hyperactive separates **what** you optimize from **how** you optimize. Define your experiment (objective function) and search space once, then swap optimizers freely without changing your code. The unified interface abstracts away backend differences, letting you focus on your optimization problem.
+
+```mermaid
+flowchart TB
+    subgraph USER["Your Code"]
+        direction LR
+        F["def objective(params):<br/>    return score"]
+        SP["search_space = {<br/>    'x': np.arange(...),<br/>    'y': [1, 2, 3]<br/>}"]
+    end
+
+    subgraph HYPER["Hyperactive"]
+        direction TB
+        OPT["Optimizer"]
+
+        subgraph BACKENDS["Backends"]
+            GFO["GFO<br/>21 algorithms"]
+            OPTUNA["Optuna<br/>8 algorithms"]
+            SKL["sklearn<br/>2 algorithms"]
+            MORE["...<br/>more to come"]
+        end
+
+        OPT --> GFO
+        OPT --> OPTUNA
+        OPT --> SKL
+        OPT --> MORE
+    end
+
+    subgraph OUT["Output"]
+        BEST["best_params"]
+    end
+
+    F --> OPT
+    SP --> OPT
+    HYPER --> OUT
 ```
 
 **Optimizer**: Implements the search strategy (Hill Climbing, Bayesian, Particle Swarm, etc.).
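The separation this hunk's new paragraph describes can be sketched in plain Python. This is an illustrative pattern only, not the Hyperactive API: `GridSearch` and `RandomSearch` below are hypothetical stand-ins that mimic the `solve()` contract shown in Quick Start, defined here so the experiment/optimizer split is concrete.

```python
import itertools
import random

# The "experiment": objective and search space are defined once,
# independent of any optimizer (illustrative sketch, not Hyperactive).
def objective(params):
    return -(params["x"] ** 2 + params["y"] ** 2)

search_space = {"x": [-2, -1, 0, 1, 2], "y": [-2, -1, 0, 1, 2]}

class GridSearch:
    """Hypothetical optimizer: exhaustively scores every combination."""
    def solve(self, objective, search_space):
        keys = list(search_space)
        candidates = [dict(zip(keys, vals))
                      for vals in itertools.product(*search_space.values())]
        return max(candidates, key=objective)

class RandomSearch:
    """Hypothetical optimizer: scores random draws from the space."""
    def __init__(self, n_iter=50, seed=0):
        self.n_iter = n_iter
        self.rng = random.Random(seed)

    def solve(self, objective, search_space):
        candidates = [{k: self.rng.choice(v) for k, v in search_space.items()}
                      for _ in range(self.n_iter)]
        return max(candidates, key=objective)

# Swapping the optimizer leaves the experiment and space untouched.
best = GridSearch().solve(objective, search_space)
print(best)  # {'x': 0, 'y': 0}
```

The point of the pattern: both optimizers consume the same objective and search space, so the search strategy is a drop-in choice rather than something woven through your training code.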
@@ -182,7 +203,7 @@ Best params: {'x': 0.0, 'y': 0.0}
 
 **Best Parameters**: The optimizer returns the parameters that maximize the objective.
 
----
+<br>
 
 ## Examples
 
@@ -374,21 +395,49 @@ print(f"Best params: {tuned_forecaster.best_params_}")
 <br>
 
 <details>
-<summary><b>PyTorch Hyperparameter Tuning</b></summary>
+<summary><b>PyTorch Neural Network Tuning</b></summary>
 
 ```python
 import numpy as np
+import torch
+import torch.nn as nn
+from torch.utils.data import DataLoader, TensorDataset
 from hyperactive.opt.gfo import BayesianOptimizer
 
+# Example data
+X_train = torch.randn(1000, 10)
+y_train = torch.randint(0, 2, (1000,))
+
 def train_model(params):
-    # Your PyTorch model training here
     learning_rate = params["learning_rate"]
     batch_size = params["batch_size"]
     hidden_size = params["hidden_size"]
 
-    # ... training code ...
-    # return validation_accuracy
-    pass
+    model = nn.Sequential(
+        nn.Linear(10, hidden_size),
+        nn.ReLU(),
+        nn.Linear(hidden_size, 2),
+    )
+
+    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
+    criterion = nn.CrossEntropyLoss()
+    loader = DataLoader(TensorDataset(X_train, y_train), batch_size=batch_size)
+
+    model.train()
+    for epoch in range(10):
+        for X_batch, y_batch in loader:
+            optimizer.zero_grad()
+            loss = criterion(model(X_batch), y_batch)
+            loss.backward()
+            optimizer.step()
+
+    # Return training accuracy as the score (use a held-out set in practice)
+    model.eval()
+    with torch.no_grad():
+        predictions = model(X_train).argmax(dim=1)
+        accuracy = (predictions == y_train).float().mean().item()
+
+    return accuracy
 
 search_space = {
     "learning_rate": np.logspace(-5, -1, 20),
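A note on the grid the added example uses: `np.logspace(-5, -1, 20)` places the 20 learning-rate candidates evenly on a log scale, the usual choice for learning rates since their effect varies over orders of magnitude. A quick check of its shape and endpoints:

```python
import numpy as np

# 20 candidate learning rates, evenly spaced on a log scale,
# endpoints 10**-5 and 10**-1 inclusive.
grid = np.logspace(-5, -1, 20)
print(grid.size, grid.min(), grid.max())
```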
@@ -406,7 +455,7 @@ best_params = optimizer.solve()
 
 </details>
 
----
+<br>
 
 ## Ecosystem
 
@@ -418,7 +467,8 @@ This library is part of a suite of optimization and machine learning tools. For
 | [Gradient-Free-Optimizers](https://github.com/SimonBlanke/Gradient-Free-Optimizers) | Core optimization algorithms for black-box function optimization |
 | [Surfaces](https://github.com/SimonBlanke/Surfaces) | Test functions and benchmark surfaces for optimization algorithm evaluation |
 
----
+
+<br>
 
 ## Documentation
 
@@ -429,7 +479,7 @@ This library is part of a suite of optimization and machine learning tools. For
 | [Examples](https://hyperactive.readthedocs.io/en/latest/examples.html) | Jupyter notebooks with use cases |
 | [FAQ](https://hyperactive.readthedocs.io/en/latest/faq.html) | Common questions and troubleshooting |
 
----
+<br>
 
 ## Contributing
 
@@ -439,7 +489,7 @@ Contributions welcome! See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.
 - **Feature requests**: [GitHub Discussions](https://github.com/SimonBlanke/Hyperactive/discussions)
 - **Questions**: [Discord](https://discord.gg/7uKdHfdcJG)
 
----
+<br>
 
 ## Citation
 
@@ -454,7 +504,7 @@ If you use this software in your research, please cite:
 }
 ```
 
----
+<br>
 
 ## License
 