Release 0.3 (#235)
@@ -12,7 +12,7 @@
 //! \\[\hat{\beta} = (X^TX)^{-1}X^Ty \\]
 //!
 //! the \\((X^TX)^{-1}\\) term is both computationally expensive and numerically unstable. An alternative approach is to use a matrix decomposition to avoid this operation.
-//! SmartCore uses [SVD](../../linalg/svd/index.html) and [QR](../../linalg/qr/index.html) matrix decomposition to find estimates of \\(\hat{\beta}\\).
+//! `smartcore` uses [SVD](../../linalg/svd/index.html) and [QR](../../linalg/qr/index.html) matrix decomposition to find estimates of \\(\hat{\beta}\\).
 //! The QR decomposition is more computationally efficient and more numerically stable than calculating the normal equation directly,
 //! but does not work for all data matrices. Unlike the QR decomposition, all matrices have an SVD decomposition.
 //!
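The normal-equation estimate that the doc comment above warns about can be made concrete on a toy problem. The sketch below is illustrative only (plain `std` Rust, not smartcore's API): it evaluates \\((X^TX)^{-1}X^Ty\\) for a single feature plus an intercept column via the explicit 2x2 inverse, which is exactly the operation QR- and SVD-based solvers avoid.

```rust
// Illustrative sketch, NOT smartcore's solver: fit y = b0 + b1*x by
// solving the normal equation beta = (X^T X)^{-1} X^T y directly,
// using the closed-form inverse of the 2x2 matrix X^T X.
// Real solvers prefer QR/SVD because forming (X^T X)^{-1} squares
// the condition number of X.
fn ols_fit(x: &[f64], y: &[f64]) -> (f64, f64) {
    let n = x.len() as f64;
    // Entries of X^T X, where each row of X is [1, x_i]
    let sx: f64 = x.iter().sum();
    let sxx: f64 = x.iter().map(|v| v * v).sum();
    // Entries of X^T y
    let sy: f64 = y.iter().sum();
    let sxy: f64 = x.iter().zip(y).map(|(a, b)| a * b).sum();
    // Invert the 2x2 system via its determinant
    let det = n * sxx - sx * sx;
    let intercept = (sxx * sy - sx * sxy) / det;
    let slope = (n * sxy - sx * sy) / det;
    (intercept, slope)
}

fn main() {
    // y = 1 + 2x exactly, so the fit recovers intercept 1.0 and slope 2.0
    let x = [0.0, 1.0, 2.0, 3.0];
    let y = [1.0, 3.0, 5.0, 7.0];
    let (b0, b1) = ols_fit(&x, &y);
    println!("intercept = {}, slope = {}", b0, b1);
}
```

On well-conditioned toy data like this the direct inverse is harmless; the instability only bites when the columns of X are nearly collinear, which is why the library falls back to SVD for matrices QR cannot handle.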
@@ -113,7 +113,6 @@ pub struct LinearRegression<
 > {
     coefficients: Option<X>,
     intercept: Option<TX>,
     solver: LinearRegressionSolverName,
     _phantom_ty: PhantomData<TY>,
     _phantom_y: PhantomData<Y>,
 }
@@ -210,7 +209,6 @@ impl<
         Self {
             coefficients: Option::None,
             intercept: Option::None,
             solver: LinearRegressionParameters::default().solver,
             _phantom_ty: PhantomData,
             _phantom_y: PhantomData,
         }
@@ -276,7 +274,6 @@ impl<
         Ok(LinearRegression {
             intercept: Some(*w.get((num_attributes, 0))),
             coefficients: Some(weights),
             solver: parameters.solver,
             _phantom_ty: PhantomData,
             _phantom_y: PhantomData,
         })
@@ -5,7 +5,7 @@
 //!
 //! \\[ Pr(y=1) \approx \frac{e^{\beta_0 + \sum_{i=1}^n \beta_iX_i}}{1 + e^{\beta_0 + \sum_{i=1}^n \beta_iX_i}} \\]
 //!
-//! SmartCore uses [limited memory BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS) method to find estimates of regression coefficients, \\(\beta\\)
+//! `smartcore` uses [limited memory BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS) method to find estimates of regression coefficients, \\(\beta\\)
 //!
 //! Example:
 //!
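The logistic probability in the doc comment above is simple enough to evaluate by hand. The sketch below is illustrative only (plain `std` Rust, not smartcore's API or its L-BFGS fitting): it computes \\(Pr(y=1)\\) for given coefficients, written in the algebraically equivalent form \\(1/(1+e^{-z})\\) to avoid overflow for large positive \\(z\\).

```rust
// Illustrative sketch, NOT smartcore's API: evaluate the logistic model
//   Pr(y = 1) = exp(z) / (1 + exp(z)),  z = beta_0 + sum_i beta_i * x_i.
// Computed as 1 / (1 + exp(-z)), which is the same function but does not
// overflow when z is large and positive.
fn predict_proba(beta0: f64, beta: &[f64], x: &[f64]) -> f64 {
    let z = beta0 + beta.iter().zip(x).map(|(b, v)| b * v).sum::<f64>();
    1.0 / (1.0 + (-z).exp())
}

fn main() {
    // With all coefficients zero the model is maximally uncertain: p = 0.5
    println!("p = {}", predict_proba(0.0, &[0.0], &[1.0]));
    // A positive coefficient pushes the probability above 0.5
    println!("p = {}", predict_proba(0.0, &[1.0], &[2.0]));
}
```

Fitting \\(\beta\\) itself means maximizing the log-likelihood of this expression over the training set, which is the optimization the library hands to L-BFGS.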
@@ -12,7 +12,7 @@
 //! where \\(\alpha \geq 0\\) is a tuning parameter that controls strength of regularization. When \\(\alpha = 0\\) the penalty term has no effect, and ridge regression will produce the least squares estimates.
 //! However, as \\(\alpha \rightarrow \infty\\), the impact of the shrinkage penalty grows, and the ridge regression coefficient estimates will approach zero.
 //!
-//! SmartCore uses [SVD](../../linalg/svd/index.html) and [Cholesky](../../linalg/cholesky/index.html) matrix decomposition to find estimates of \\(\hat{\beta}\\).
+//! `smartcore` uses [SVD](../../linalg/svd/index.html) and [Cholesky](../../linalg/cholesky/index.html) matrix decomposition to find estimates of \\(\hat{\beta}\\).
 //! The Cholesky decomposition is more computationally efficient and more numerically stable than calculating the normal equation directly,
 //! but does not work for all data matrices. Unlike the Cholesky decomposition, all matrices have an SVD decomposition.
 //!
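The shrinkage behaviour described in the doc comment above can be seen on a one-dimensional example. The sketch below is illustrative only (plain `std` Rust, not smartcore's solver): for a single feature with no intercept, \\((X^TX + \alpha I)^{-1}X^Ty\\) collapses to a scalar division, so the effect of \\(\alpha\\) is directly visible.

```rust
// Illustrative sketch, NOT smartcore's API: the ridge estimate
//   beta = (X^T X + alpha * I)^{-1} X^T y
// for one feature and no intercept, where X^T X and X^T y are scalars.
// alpha = 0 recovers the least-squares slope; larger alpha shrinks it
// toward zero, as the module docs describe.
fn ridge_slope(x: &[f64], y: &[f64], alpha: f64) -> f64 {
    let sxx: f64 = x.iter().map(|v| v * v).sum(); // X^T X
    let sxy: f64 = x.iter().zip(y).map(|(a, b)| a * b).sum(); // X^T y
    sxy / (sxx + alpha)
}

fn main() {
    let x = [1.0, 2.0, 3.0];
    let y = [2.0, 4.0, 6.0]; // y = 2x exactly
    println!("alpha = 0:  {}", ridge_slope(&x, &y, 0.0)); // least-squares slope 2.0
    println!("alpha = 14: {}", ridge_slope(&x, &y, 14.0)); // shrunk toward zero
}
```

Adding \\(\alpha\\) to the diagonal also explains the solver choice: it makes \\(X^TX + \alpha I\\) positive definite, which is the precondition for the Cholesky path.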
@@ -197,7 +197,6 @@ pub struct RidgeRegression<
 > {
     coefficients: Option<X>,
     intercept: Option<TX>,
     solver: Option<RidgeRegressionSolverName>,
     _phantom_ty: PhantomData<TY>,
     _phantom_y: PhantomData<Y>,
 }
@@ -259,7 +258,6 @@ impl<
         Self {
             coefficients: Option::None,
             intercept: Option::None,
             solver: Option::None,
             _phantom_ty: PhantomData,
             _phantom_y: PhantomData,
         }
@@ -367,7 +365,6 @@ impl<
         Ok(RidgeRegression {
             intercept: Some(b),
             coefficients: Some(w),
             solver: Some(parameters.solver),
             _phantom_ty: PhantomData,
             _phantom_y: PhantomData,
         })