
Commit 5fcc08e

Update documentation for several exciting features of GFMMNN
1 parent b7d2f2c commit 5fcc08e

3 files changed

Lines changed: 83 additions & 6 deletions


README.rst

Lines changed: 29 additions & 2 deletions
@@ -183,8 +183,9 @@ training samples or a subset of both training samples and features. Training sub
 can be formed by stratified random subsampling, resampling, or class-balanced random subsampling.
 The final predicted results of an ensemble model are an aggregation of predictions from all base learners
 based on a majority voting mechanism. An interesting characteristic of hyperbox-based models is that the resulting
-hyperboxes from all base learners can be merged to formulate a single model. This contributes to increasing
-the explainability of the estimator while still taking advantage of strong points of ensemble models.
+hyperboxes from all base learners or decision trees can be merged to formulate a single model. This
+contributes to increasing the explainability of the estimator while still taking advantage of the strong
+points of ensemble models.
 
 Multigranularity learning
 ~~~~~~~~~~~~~~~~~~~~~~~~~
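The hyperbox-merging step mentioned in this hunk can be pictured with a small, self-contained sketch. The keys V, W, and C (hyperbox minimum points, maximum points, and class labels) are illustrative names rather than hyperbox-brain's actual attributes; the library's ensemble classes implement their own merging logic::

    import numpy as np

    def merge_hyperboxes(models):
        """Pool the hyperboxes of several trained base learners into one
        flat model, dropping exact duplicates (same bounds, same class).
        V holds minimum points, W maximum points, C class labels."""
        V = np.vstack([m["V"] for m in models])
        W = np.vstack([m["W"] for m in models])
        C = np.concatenate([m["C"] for m in models])
        # Keep the first occurrence of each (V, W, C) combination.
        _, keep = np.unique(np.column_stack([V, W, C[:, None]]),
                            axis=0, return_index=True)
        return {"V": V[keep], "W": W[keep], "C": C[keep]}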
@@ -194,6 +195,32 @@ abstraction. An attractive characteristic of these classifiers is that they can
 to other fuzzy min-max models at a low degree of granularity based on reusing the knowledge learned from lower levels
 of abstraction.
 
+Learning from both labelled and unlabelled data
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+One of the exciting features of the learning algorithms for the general fuzzy min-max neural network is their
+ability to create classification boundaries among known classes while clustering unlabelled data and representing
+it as hyperboxes. Unlabelled hyperboxes can then be labelled on the basis of the evidence provided by subsequent
+incoming input samples. As a result, GFMMNN models can learn from mixed labelled and unlabelled datasets
+natively.
+
+Ability to directly process missing data
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Learning algorithms for the general fuzzy min-max neural network supported by the library can classify inputs
+with missing data directly, without the need to replace or impute missing values as other classifiers require.
+
+Continual learning of new classes in an incremental manner
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Incremental learning algorithms of hyperbox-based models in the **hyperbox-brain** library can grow and
+accommodate new classes of data without retraining the whole classifier. These algorithms can generate new
+hyperboxes to represent clusters of new data with potentially new class labels, both during the normal training
+procedure and at operation time, after training has finished. This property is a key feature for smart lifelong learning systems.
+
+Data editing and pruning approaches
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+By combining the repeated cross-validation methods provided by scikit-learn with hyperbox-based learning
+algorithms, evidence from training multiple models can be used to identify which data points from the original
+training set, or which hyperboxes from the resulting models, should be retained and which should be edited out or pruned before further processing.
+
 Scikit-learn compatible estimators
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The estimators in hyperbox-brain are compatible with the well-known scikit-learn toolbox.
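The mixed labelled/unlabelled training added above can be illustrated with a short sketch. Marking unlabelled samples with class 0 follows the convention of the GFMM literature, and the import path and ``theta`` parameter follow the hyperbox-brain README; both are assumptions to verify against your installed version::

    import numpy as np
    # Import path as shown in the hyperbox-brain README; verify for your version.
    from hbbrain.numerical_data.incremental_learner.onln_gfmm import OnlineGFMM

    X = np.array([[0.10, 0.12], [0.15, 0.11], [0.80, 0.85],
                  [0.82, 0.90], [0.12, 0.14]])
    # Class 0 marks an unlabelled sample (GFMM convention): the last point
    # is clustered into a hyperbox and can be labelled later, once labelled
    # samples provide evidence for that hyperbox.
    y = np.array([1, 1, 2, 2, 0])

    clf = OnlineGFMM(theta=0.2)   # theta: maximum allowed hyperbox size
    clf.fit(X, y)
    print(clf.predict(np.array([[0.11, 0.13]])))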
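The direct handling of missing values rests on a simple encoding from the GFMM papers: a missing feature receives lower bound 1 and upper bound 0, so any hyperbox matches it fully along that dimension. Below is a sketch of that encoding; whether hyperbox-brain performs exactly this preprocessing internally is an assumption::

    import numpy as np

    def encode_missing(x):
        """Turn a sample with NaNs into an input hyperbox [xl, xu].
        A missing feature gets xl = 1 and xu = 0 (lower bound above the
        upper bound), yielding full membership along that dimension."""
        xl, xu = x.copy(), x.copy()
        missing = np.isnan(x)
        xl[missing], xu[missing] = 1.0, 0.0
        return xl, xu

    xl, xu = encode_missing(np.array([0.3, np.nan, 0.7]))
    print(xl, xu)   # [0.3 1.  0.7] [0.3 0.  0.7]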

docs/user/features.rst

Lines changed: 29 additions & 2 deletions
@@ -29,8 +29,9 @@ training samples or a subset of both training samples and features. Training sub
 can be formed by stratified random subsampling, resampling, or class-balanced random subsampling.
 The final predicted results of an ensemble model are an aggregation of predictions from all base learners
 based on a majority voting mechanism. An interesting characteristic of hyperbox-based models is that the resulting
-hyperboxes from all base learners can be merged to formulate a single model. This contributes to increasing
-the explainability of the estimator while still taking advantage of strong points of ensemble models.
+hyperboxes from all base learners or decision trees can be merged to formulate a single model. This
+contributes to increasing the explainability of the estimator while still taking advantage of the strong
+points of ensemble models.
 
 Multigranularity learning
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -40,6 +41,32 @@ abstraction. An attractive characteristic of these classifiers is that they can
 to other fuzzy min-max models at a low degree of granularity based on reusing the knowledge learned from lower levels
 of abstraction.
 
+Learning from both labelled and unlabelled data
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+One of the exciting features of the learning algorithms for the general fuzzy min-max neural network is their
+ability to create classification boundaries among known classes while clustering unlabelled data and representing
+it as hyperboxes. Unlabelled hyperboxes can then be labelled on the basis of the evidence provided by subsequent
+incoming input samples. As a result, GFMMNN models can learn from mixed labelled and unlabelled datasets
+natively.
+
+Ability to directly process missing data
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Learning algorithms for the general fuzzy min-max neural network supported by the library can classify inputs
+with missing data directly, without the need to replace or impute missing values as other classifiers require.
+
+Continual learning of new classes in an incremental manner
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Incremental learning algorithms of hyperbox-based models in the **hyperbox-brain** library can grow and
+accommodate new classes of data without retraining the whole classifier. These algorithms can generate new
+hyperboxes to represent clusters of new data with potentially new class labels, both during the normal training
+procedure and at operation time, after training has finished. This property is a key feature for smart lifelong learning systems.
+
+Data editing and pruning approaches
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+By combining the repeated cross-validation methods provided by scikit-learn with hyperbox-based learning
+algorithms, evidence from training multiple models can be used to identify which data points from the original
+training set, or which hyperboxes from the resulting models, should be retained and which should be edited out or pruned before further processing.
+
 Scikit-learn compatible estimators
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The estimators in hyperbox-brain are compatible with the well-known scikit-learn toolbox.
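To make the continual-learning claim concrete, here is a stripped-down sketch of one online update step: a new sample either expands a same-class hyperbox that stays within the size limit ``theta``, or spawns a new point hyperbox, which is how a never-before-seen class enters the model without retraining. The real algorithms additionally test for and repair overlaps between hyperboxes of different classes, which this sketch omits::

    import numpy as np

    def online_update(V, W, C, x, c, theta=0.3):
        """One incremental step. V/W are (n_boxes, n_features) arrays of
        min/max bounds, C the class labels, x a new sample labelled c."""
        for i in range(len(C)):
            if C[i] == c:
                v, w = np.minimum(V[i], x), np.maximum(W[i], x)
                if np.all(w - v <= theta):      # expansion criterion
                    V[i], W[i] = v, w
                    return V, W, C
        # No expandable same-class hyperbox (e.g., c is a brand-new class):
        # grow the model with a point hyperbox instead of retraining.
        return np.vstack([V, x]), np.vstack([W, x]), np.append(C, c)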
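The data-editing idea combines cleanly with scikit-learn's repeated cross-validation. The sketch below counts how often each training point is misclassified by held-out models and drops chronic offenders; the k-NN stand-in and the 50% threshold are illustrative choices, and any hyperbox-brain estimator could be substituted::

    import numpy as np
    from sklearn.model_selection import RepeatedStratifiedKFold
    from sklearn.neighbors import KNeighborsClassifier  # illustrative stand-in

    def edit_training_set(X, y, n_splits=5, n_repeats=10, threshold=0.5, seed=0):
        """Drop points misclassified in at least `threshold` of the repeats."""
        cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                     random_state=seed)
        errors = np.zeros(len(y))
        for train, test in cv.split(X, y):
            model = KNeighborsClassifier().fit(X[train], y[train])
            errors[test] += model.predict(X[test]) != y[test]
        # Each point appears in a test fold exactly once per repeat.
        keep = errors / n_repeats < threshold
        return X[keep], y[keep]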

index.html

Lines changed: 25 additions & 2 deletions
@@ -129,11 +129,34 @@
 <div class="keyfeatures-box-title">Ensemble learning</div>
 <div class="keyfeatures-box-text">Ensemble models in the hyperbox-brain toolbox build a set of hyperbox-based learners from a subset of training samples or a subset of both training samples and features. Training subsets of base learners can be formed by stratified random subsampling, resampling, or class-balanced random subsampling. The final predicted results of an ensemble model are an aggregation of predictions from all base learners based on a majority voting mechanism. An interesting characteristic of hyperbox-based models is that the resulting hyperboxes from all base learners can be merged to formulate a single model.</div>
 <p></p></div>
-<div class="keyfeatures-box-content keyfeatures-underline"><p></p>
+<div class="keyfeatures-box-content keyfeatures-underline">
+<p></p>
 <div class="keyfeatures-box-title">Multigranularity learning</div>
 <div class="keyfeatures-box-text">Multi-granularity learning algorithms can construct classifiers from multiresolution hierarchical granular representations using hyperbox fuzzy sets. These algorithms form a series of granular inferences hierarchically through many levels of abstraction. An attractive characteristic of these classifiers is that they can maintain a high accuracy in comparison to other fuzzy min-max models at a low degree of granularity based on reusing the knowledge learned from lower levels of abstraction.
 </div>
-<p></p></div>
+<p></p>
+</div>
+<div class="keyfeatures-box-content keyfeatures-underline">
+<p></p>
+<div class="keyfeatures-box-title">Learning from both labelled and unlabelled data</div>
+<div class="keyfeatures-box-text">One of the exciting features of the learning algorithms for the general fuzzy min-max neural network (GFMMNN) is their ability to create classification boundaries among known classes while clustering unlabelled data and representing it as hyperboxes. Unlabelled hyperboxes can then be labelled on the basis of the evidence provided by subsequent incoming input samples. As a result, GFMMNN models can learn from mixed labelled and unlabelled datasets natively.
+</div>
+<p></p>
+</div>
+<div class="keyfeatures-box-content keyfeatures-underline">
+<p></p>
+<div class="keyfeatures-box-title">Ability to directly process missing data</div>
+<div class="keyfeatures-box-text">Learning algorithms for the general fuzzy min-max neural network supported by the library can classify inputs with missing data directly, without the need to replace or impute missing values as other classifiers require.
+</div>
+<p></p>
+</div>
+<div class="keyfeatures-box-content keyfeatures-underline">
+<p></p>
+<div class="keyfeatures-box-title">Continual learning of new classes in an incremental manner</div>
+<div class="keyfeatures-box-text">Incremental learning algorithms of hyperbox-based models in the <b>hyperbox-brain</b> library can grow and accommodate new classes of data without retraining the whole classifier. These algorithms can generate new hyperboxes to represent clusters of new data with potentially new class labels, both during the normal training procedure and at operation time, after training has finished. This property is a key feature for smart lifelong learning systems.
+</div>
+<p></p>
+</div>
 <div class="keyfeatures-box-content keyfeatures-underline"><p></p>
 <div class="keyfeatures-box-title">Scikit-learn compatible estimators</div>
 <div class="keyfeatures-box-text">The estimators in hyperbox-brain are compatible with the well-known scikit-learn toolbox.
