minor fixes to doc

Lorenzo (Mec-iS)
2022-11-08 12:21:34 +00:00
parent 3c4a807be8
commit 6c6f92697f
6 changed files with 6 additions and 6 deletions
@@ -4,7 +4,7 @@
 //! In a feedback loop you build your model first, then you get feedback from metrics, improve it and repeat until your model achieve desirable performance.
 //! Evaluation metrics helps to explain the performance of a model and compare models based on an objective criterion.
 //!
-//! Choosing the right metric is crucial while evaluating machine learning models. In smartcore you will find metrics for these classes of ML models:
+//! Choosing the right metric is crucial while evaluating machine learning models. In `smartcore` you will find metrics for these classes of ML models:
 //!
 //! * [Classification metrics](struct.ClassificationMetrics.html)
 //! * [Regression metrics](struct.RegressionMetrics.html)
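The module docs above describe evaluation metrics in general terms. As a plain-Rust sketch of what a classification metric computes (an illustrative example, not smartcore's API), classification accuracy is just the fraction of predictions that match the labels:

```rust
// Classification accuracy: fraction of predictions matching the true labels
// (illustrative sketch of an evaluation metric, not smartcore's implementation).
fn accuracy(y_true: &[u32], y_pred: &[u32]) -> f64 {
    assert_eq!(y_true.len(), y_pred.len());
    let correct = y_true.iter().zip(y_pred).filter(|(a, b)| a == b).count();
    correct as f64 / y_true.len() as f64
}

fn main() {
    let truth = [0, 1, 1, 0];
    let pred = [0, 1, 0, 0];
    // 3 of 4 predictions are correct.
    assert_eq!(accuracy(&truth, &pred), 0.75);
}
```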
@@ -7,7 +7,7 @@
 //! Splitting data into multiple subsets helps us to find the right combination of hyperparameters, estimate model performance and choose the right model for
 //! the data.
 //!
-//! In smartcore a random split into training and test sets can be quickly computed with the [train_test_split](./fn.train_test_split.html) helper function.
+//! In `smartcore` a random split into training and test sets can be quickly computed with the [train_test_split](./fn.train_test_split.html) helper function.
 //!
 //! ```
 //! use smartcore::linalg::basic::matrix::DenseMatrix;
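The idea behind the `train_test_split` helper mentioned in this hunk can be sketched in dependency-free Rust (a hypothetical `train_test_split` over row indices, not the crate's actual implementation): shuffle the indices, then cut at the requested test ratio.

```rust
// Minimal sketch of a random train/test split over row indices
// (hypothetical helper, not smartcore's implementation). A seeded LCG
// stands in for a real RNG so the example stays deterministic.
fn train_test_split(n: usize, test_ratio: f64, seed: u64) -> (Vec<usize>, Vec<usize>) {
    let mut idx: Vec<usize> = (0..n).collect();
    // Fisher-Yates shuffle driven by a tiny linear congruential generator.
    let mut state = seed;
    for i in (1..n).rev() {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let j = (state >> 33) as usize % (i + 1);
        idx.swap(i, j);
    }
    let n_test = ((n as f64) * test_ratio).round() as usize;
    let test = idx[..n_test].to_vec();
    let train = idx[n_test..].to_vec();
    (train, test)
}

fn main() {
    let (train, test) = train_test_split(10, 0.3, 42);
    // Every row lands in exactly one of the two subsets.
    assert_eq!(train.len(), 7);
    assert_eq!(test.len(), 3);
}
```

The indices returned would then be used to select rows from the feature matrix and target vector.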
@@ -1,6 +1,6 @@
 //! # K Nearest Neighbors Classifier
 //!
-//! smartcore relies on 2 backend algorithms to speedup KNN queries:
+//! `smartcore` relies on 2 backend algorithms to speedup KNN queries:
 //! * [`LinearSearch`](../../algorithm/neighbour/linear_search/index.html)
 //! * [`CoverTree`](../../algorithm/neighbour/cover_tree/index.html)
 //!
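Conceptually, the `LinearSearch` backend named in this hunk is a brute-force scan: compute the distance from the query to every point and keep the k smallest. A self-contained sketch of that idea (illustrative only, not the crate's code):

```rust
// Brute-force k-nearest-neighbour query, the idea behind a LinearSearch
// backend (illustrative sketch, not smartcore's implementation).
fn knn_linear(points: &[Vec<f64>], query: &[f64], k: usize) -> Vec<usize> {
    let mut dist: Vec<(usize, f64)> = points
        .iter()
        .enumerate()
        .map(|(i, p)| {
            // Squared Euclidean distance; sqrt is monotone, so it can be skipped.
            let d2: f64 = p.iter().zip(query).map(|(a, b)| (a - b).powi(2)).sum();
            (i, d2)
        })
        .collect();
    dist.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    dist.iter().take(k).map(|&(i, _)| i).collect()
}

fn main() {
    let pts = vec![vec![0.0, 0.0], vec![1.0, 1.0], vec![5.0, 5.0]];
    let nearest = knn_linear(&pts, &[0.9, 0.9], 2);
    assert_eq!(nearest, vec![1, 0]); // (1,1) is closest, then (0,0)
}
```

A `CoverTree` avoids this O(n) scan per query by organizing points in a hierarchy of nested covers, which pays off on larger datasets.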
@@ -9,7 +9,7 @@
 //! SVM is memory efficient since it uses only a subset of training data to find a decision boundary. This subset is called support vectors.
 //!
 //! In SVM distance between a data point and the support vectors is defined by the kernel function.
-//! smartcore supports multiple kernel functions but you can always define a new kernel function by implementing the `Kernel` trait. Not all functions can be a kernel.
+//! `smartcore` supports multiple kernel functions but you can always define a new kernel function by implementing the `Kernel` trait. Not all functions can be a kernel.
 //! Building a new kernel requires a good mathematical understanding of the [Mercer theorem](https://en.wikipedia.org/wiki/Mercer%27s_theorem)
 //! that gives necessary and sufficient condition for a function to be a kernel function.
 //!
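One function that does satisfy Mercer's condition is the Gaussian (RBF) kernel, k(x, y) = exp(-γ·||x − y||²). A standalone sketch of it (not tied to smartcore's `Kernel` trait, whose exact signature is not shown in this diff):

```rust
// Gaussian (RBF) kernel, a standard Mercer kernel:
//   k(x, y) = exp(-gamma * ||x - y||^2)
// Illustrative sketch, independent of smartcore's Kernel trait.
fn rbf_kernel(x: &[f64], y: &[f64], gamma: f64) -> f64 {
    let sq_dist: f64 = x.iter().zip(y).map(|(a, b)| (a - b).powi(2)).sum();
    (-gamma * sq_dist).exp()
}

fn main() {
    // A point compared with itself gives k = 1.
    assert!((rbf_kernel(&[1.0, 2.0], &[1.0, 2.0], 0.5) - 1.0).abs() < 1e-12);
    // Similarity decays as points move apart.
    assert!(rbf_kernel(&[0.0], &[3.0], 0.5) < rbf_kernel(&[0.0], &[1.0], 0.5));
}
```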
@@ -20,7 +20,7 @@
 //!
 //! Where \\( m \\) is a number of training samples, \\( y_i \\) is a label value (either 1 or -1) and \\(\langle\vec{w}, \vec{x}_i \rangle + b\\) is a decision boundary.
 //!
-//! To solve this optimization problem, smartcore uses an [approximate SVM solver](https://leon.bottou.org/projects/lasvm).
+//! To solve this optimization problem, `smartcore` uses an [approximate SVM solver](https://leon.bottou.org/projects/lasvm).
 //! The optimizer reaches accuracies similar to that of a real SVM after performing two passes through the training examples. You can choose the number of passes
 //! through the data that the algorithm takes by changing the `epoch` parameter of the classifier.
 //!
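The per-sample term being minimized in the objective quoted above is the hinge loss, max(0, 1 − yᵢ·(⟨w, xᵢ⟩ + b)). A small sketch of that computation (illustrative, not the LaSVM solver itself):

```rust
// Hinge loss for one labelled sample: max(0, 1 - y * f(x)), where
// y is +1 or -1 and f(x) = <w, x> + b is the decision function.
// Illustrative sketch of the objective's per-sample term, not the solver.
fn hinge_loss(w: &[f64], b: f64, x: &[f64], y: f64) -> f64 {
    let fx: f64 = w.iter().zip(x).map(|(wi, xi)| wi * xi).sum::<f64>() + b;
    (1.0 - y * fx).max(0.0)
}

fn main() {
    let w = [1.0, -1.0];
    // Correctly classified with margin >= 1: zero loss.
    assert_eq!(hinge_loss(&w, 0.0, &[2.0, 0.0], 1.0), 0.0);
    // Misclassified point: positive loss.
    assert!(hinge_loss(&w, 0.0, &[0.0, 2.0], 1.0) > 0.0);
}
```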
@@ -11,7 +11,7 @@
 //!
 //! where \\(\hat{y}_{Rk}\\) is the mean response for the training observations withing region _k_.
 //!
-//! smartcore uses recursive binary splitting approach to build \\(R_1, R_2, ..., R_K\\) regions. The approach begins at the top of the tree and then successively splits the predictor space
+//! `smartcore` uses recursive binary splitting approach to build \\(R_1, R_2, ..., R_K\\) regions. The approach begins at the top of the tree and then successively splits the predictor space
 //! one predictor at a time. At each step of the tree-building process, the best split is made at that particular step, rather than looking ahead and picking a split that will lead to a better
 //! tree in some future step.
 //!
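The prediction rule in the formula quoted above, the mean response within each region, can be sketched directly for a single binary split (illustrative, not the crate's tree code):

```rust
// Regression-tree prediction for one binary split: each region predicts
// the mean response of the training observations that fall into it
// (illustrative sketch of the formula above, not smartcore's tree code).
fn region_means(xs: &[f64], ys: &[f64], threshold: f64) -> (f64, f64) {
    let (mut sum_l, mut n_l, mut sum_r, mut n_r) = (0.0, 0u32, 0.0, 0u32);
    for (&x, &y) in xs.iter().zip(ys) {
        if x < threshold {
            sum_l += y;
            n_l += 1;
        } else {
            sum_r += y;
            n_r += 1;
        }
    }
    (sum_l / n_l as f64, sum_r / n_r as f64)
}

fn main() {
    let xs = [1.0, 2.0, 8.0, 9.0];
    let ys = [10.0, 12.0, 30.0, 34.0];
    let (left, right) = region_means(&xs, &ys, 5.0);
    assert_eq!(left, 11.0);  // mean of 10 and 12
    assert_eq!(right, 32.0); // mean of 30 and 34
}
```

Recursive binary splitting would apply this greedily, choosing at each node the predictor and threshold that most reduce the squared error, without looking ahead.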