From b4206c4b08f31a28c9b81264c45a89eb5d4762a2 Mon Sep 17 00:00:00 2001
From: Lorenzo
Date: Tue, 8 Nov 2022 12:15:10 +0000
Subject: [PATCH 1/5] minor fix

---
 src/ensemble/mod.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/ensemble/mod.rs b/src/ensemble/mod.rs
index 161df96..8cebd5c 100644
--- a/src/ensemble/mod.rs
+++ b/src/ensemble/mod.rs
@@ -7,7 +7,7 @@
 //! set and then aggregate their individual predictions to form a final prediction. In classification setting the overall prediction is the most commonly
 //! occurring majority class among the individual predictions.
 //!
-//! In smartcore you will find implementation of RandomForest - a popular averaging algorithms based on randomized [decision trees](../tree/index.html).
+//! In `smartcore` you will find an implementation of RandomForest, a popular averaging algorithm based on randomized [decision trees](../tree/index.html).
 //! Random forests provide an improvement over bagged trees by way of a small tweak that decorrelates the trees. As in bagging, we build a number of
 //! decision trees on bootstrapped training samples. But when building these decision trees, each time a split in a tree is considered,
 //! a random sample of _m_ predictors is chosen as split candidates from the full set of _p_ predictors.

From a60fdaf235be3a8447b5c436f7669d94bc140bc7 Mon Sep 17 00:00:00 2001
From: Lorenzo
Date: Tue, 8 Nov 2022 12:17:04 +0000
Subject: [PATCH 2/5] minor fix

---
 src/linear/linear_regression.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/linear/linear_regression.rs b/src/linear/linear_regression.rs
index 7f6dfad..a5c7699 100644
--- a/src/linear/linear_regression.rs
+++ b/src/linear/linear_regression.rs
@@ -12,7 +12,7 @@
 //! \\[\hat{\beta} = (X^TX)^{-1}X^Ty \\]
 //!
 //! the \\((X^TX)^{-1}\\) term is both computationally expensive and numerically unstable. An alternative approach is to use a matrix decomposition to avoid this operation.
-//! smartcore uses [SVD](../../linalg/svd/index.html) and [QR](../../linalg/qr/index.html) matrix decomposition to find estimates of \\(\hat{\beta}\\).
+//! `smartcore` uses [SVD](../../linalg/svd/index.html) and [QR](../../linalg/qr/index.html) matrix decompositions to find estimates of \\(\hat{\beta}\\).
 //! The QR decomposition is more computationally efficient and more numerically stable than calculating the normal equation directly,
 //! but does not work for all data matrices. Unlike the QR decomposition, all matrices have an SVD decomposition.
 //!

From 78bf75b5d8fc3a8cf044071896991bdf012fc128 Mon Sep 17 00:00:00 2001
From: Lorenzo
Date: Tue, 8 Nov 2022 12:17:32 +0000
Subject: [PATCH 3/5] minor fix

---
 src/linear/logistic_regression.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/linear/logistic_regression.rs b/src/linear/logistic_regression.rs
index e8c08d8..8bf65bf 100644
--- a/src/linear/logistic_regression.rs
+++ b/src/linear/logistic_regression.rs
@@ -5,7 +5,7 @@
 //!
 //! \\[ Pr(y=1) \approx \frac{e^{\beta_0 + \sum_{i=1}^n \beta_iX_i}}{1 + e^{\beta_0 + \sum_{i=1}^n \beta_iX_i}} \\]
 //!
-//! smartcore uses [limited memory BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS) method to find estimates of regression coefficients, \\(\beta\\)
+//! `smartcore` uses the [limited-memory BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS) method to find estimates of the regression coefficients, \\(\beta\\)
 //!
 //! Example:
 //!

From b71c7b49cb59d18d9ad4a97370832a4f96c9f82e Mon Sep 17 00:00:00 2001
From: Lorenzo
Date: Tue, 8 Nov 2022 12:18:03 +0000
Subject: [PATCH 4/5] minor fix

---
 src/linear/ridge_regression.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/linear/ridge_regression.rs b/src/linear/ridge_regression.rs
index e03948d..6bd5595 100644
--- a/src/linear/ridge_regression.rs
+++ b/src/linear/ridge_regression.rs
@@ -12,7 +12,7 @@
 //! where \\(\alpha \geq 0\\) is a tuning parameter that controls strength of regularization. When \\(\alpha = 0\\) the penalty term has no effect, and ridge regression will produce the least squares estimates.
 //! However, as \\(\alpha \rightarrow \infty\\), the impact of the shrinkage penalty grows, and the ridge regression coefficient estimates will approach zero.
 //!
-//! smartcore uses [SVD](../../linalg/svd/index.html) and [Cholesky](../../linalg/cholesky/index.html) matrix decomposition to find estimates of \\(\hat{\beta}\\).
+//! `smartcore` uses [SVD](../../linalg/svd/index.html) and [Cholesky](../../linalg/cholesky/index.html) matrix decompositions to find estimates of \\(\hat{\beta}\\).
 //! The Cholesky decomposition is more computationally efficient and more numerically stable than calculating the normal equation directly,
 //! but does not work for all data matrices. Unlike the Cholesky decomposition, all matrices have an SVD decomposition.
 //!

From a4097fce152ece11f6bb4f1ce7b4636f54cfcdd6 Mon Sep 17 00:00:00 2001
From: Lorenzo
Date: Tue, 8 Nov 2022 12:18:35 +0000
Subject: [PATCH 5/5] minor fix

---
 src/metrics/auc.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/metrics/auc.rs b/src/metrics/auc.rs
index 5848fbc..0a7ddf4 100644
--- a/src/metrics/auc.rs
+++ b/src/metrics/auc.rs
@@ -2,7 +2,7 @@
 //! Computes the area under the receiver operating characteristic (ROC) curve that is equal to the probability that a classifier will rank a
 //! randomly chosen positive instance higher than a randomly chosen negative one.
 //!
-//! smartcore calculates ROC AUC from Wilcoxon or Mann-Whitney U test.
+//! `smartcore` calculates ROC AUC from the Wilcoxon (Mann-Whitney U) test.
 //!
 //! Example:
 //! ```
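The ridge doc comment touched by patch 4 describes solving for \\(\hat{\beta}\\) via a Cholesky decomposition of the regularized normal equations \\((X^TX + \alpha I)\hat{\beta} = X^Ty\\). A minimal self-contained sketch of that idea in plain Rust (illustrative names like `ridge_fit`, not `smartcore`'s actual API; no intercept term, and it assumes the regularized Gram matrix is positive definite):

```rust
// Cholesky factor L of a symmetric positive definite matrix A, so A = L L^T.
fn cholesky(a: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let n = a.len();
    let mut l = vec![vec![0.0; n]; n];
    for i in 0..n {
        for j in 0..=i {
            let s: f64 = (0..j).map(|k| l[i][k] * l[j][k]).sum();
            l[i][j] = if i == j {
                (a[i][i] - s).sqrt()
            } else {
                (a[i][j] - s) / l[j][j]
            };
        }
    }
    l
}

// Solve A x = b given A = L L^T, via forward then back substitution.
fn solve_cholesky(l: &[Vec<f64>], b: &[f64]) -> Vec<f64> {
    let n = b.len();
    let mut z = vec![0.0; n]; // L z = b
    for i in 0..n {
        let s: f64 = (0..i).map(|k| l[i][k] * z[k]).sum();
        z[i] = (b[i] - s) / l[i][i];
    }
    let mut x = vec![0.0; n]; // L^T x = z
    for i in (0..n).rev() {
        let s: f64 = (i + 1..n).map(|k| l[k][i] * x[k]).sum();
        x[i] = (z[i] - s) / l[i][i];
    }
    x
}

// Ridge estimates: solve (X^T X + alpha I) beta = X^T y.
fn ridge_fit(x: &[Vec<f64>], y: &[f64], alpha: f64) -> Vec<f64> {
    let p = x[0].len();
    let mut a = vec![vec![0.0; p]; p];
    for i in 0..p {
        for j in 0..p {
            a[i][j] = x.iter().map(|row| row[i] * row[j]).sum();
        }
        a[i][i] += alpha; // shrinkage penalty enters only on the diagonal
    }
    let b: Vec<f64> = (0..p)
        .map(|i| x.iter().zip(y).map(|(row, yi)| row[i] * yi).sum())
        .collect();
    solve_cholesky(&cholesky(&a), &b)
}

fn main() {
    // Three samples, two features.
    let x = vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 1.0]];
    let y = [1.0, 1.0, 2.0];
    // alpha = 0 reproduces the least squares estimates; larger alpha shrinks them.
    println!("{:?}", ridge_fit(&x, &y, 0.0)); // close to [1.0, 1.0]
    println!("{:?}", ridge_fit(&x, &y, 1.0)); // close to [0.75, 0.75]
}
```

As the doc comment notes, this direct Cholesky route is cheap but requires a positive definite system; an SVD-based solve works for any data matrix.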
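The doc comment touched by patch 5 says ROC AUC is computed from the Wilcoxon (Mann-Whitney U) test: rank all scores, take the rank sum of the positive class, and normalize the resulting U statistic by the number of positive-negative pairs. A minimal sketch of that computation in plain Rust (the function name `roc_auc` is illustrative, not `smartcore`'s actual API; assumes finite scores and at least one example of each class):

```rust
// ROC AUC via the Mann-Whitney U statistic: AUC = U / (n_pos * n_neg),
// where U is derived from the rank sum of the positive-class scores.
fn roc_auc(labels: &[bool], scores: &[f64]) -> f64 {
    // Pair each score with its label and sort by score ascending.
    let mut pairs: Vec<(f64, bool)> =
        scores.iter().copied().zip(labels.iter().copied()).collect();
    pairs.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap()); // assumes no NaN

    // Assign 1-based midranks so tied scores share their average rank.
    let n = pairs.len();
    let mut ranks = vec![0.0f64; n];
    let mut i = 0;
    while i < n {
        let mut j = i;
        while j + 1 < n && pairs[j + 1].0 == pairs[i].0 {
            j += 1;
        }
        let midrank = (i + j) as f64 / 2.0 + 1.0;
        for k in i..=j {
            ranks[k] = midrank;
        }
        i = j + 1;
    }

    // U = (rank sum of positives) - n_pos * (n_pos + 1) / 2.
    let n_pos = pairs.iter().filter(|p| p.1).count() as f64;
    let n_neg = n as f64 - n_pos;
    let rank_sum: f64 = pairs
        .iter()
        .zip(&ranks)
        .filter(|(p, _)| p.1)
        .map(|(_, r)| *r)
        .sum();
    let u = rank_sum - n_pos * (n_pos + 1.0) / 2.0;
    u / (n_pos * n_neg)
}

fn main() {
    // 3 of the 4 positive/negative pairs are ranked correctly: AUC = 0.75.
    let labels = [false, false, true, true];
    let scores = [0.1, 0.4, 0.35, 0.8];
    println!("{:.4}", roc_auc(&labels, &scores)); // prints 0.7500
}
```

The midrank handling is what makes this match the Wilcoxon formulation when the classifier assigns identical scores to several examples.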