Merge pull request #4 from lalvim/renamepkg
Rename package PLS to PLSRegressor
filipebraida authored Nov 20, 2017
2 parents e04c875 + 5de064a commit 7d89d2e
Showing 13 changed files with 124 additions and 125 deletions.
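
Note for downstream users: the rename is source-breaking, so any `using PLS` or `PLS.`-qualified call must be updated. A minimal before/after sketch (module and function names taken from the diff below; the variables are illustrative):

    # Before the rename:
    using PLS
    model = PLS.fit(X_train, Y_train, nfactors=2)

    # After the rename:
    using PLSRegressor
    model = PLSRegressor.fit(X_train, Y_train, nfactors=2)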
7 changes: 3 additions & 4 deletions .travis.yml
@@ -5,7 +5,6 @@ os:
  - osx
julia:
  - 0.6
-   - nightly
notifications:
  email: false
git:
Expand All @@ -27,9 +26,9 @@ addons:

## uncomment the following lines to override the default test script
#script:
- # - julia -e 'Pkg.clone(pwd()); Pkg.build("PLS"); Pkg.test("PLS"; coverage=true)'
+ # - julia -e 'Pkg.clone(pwd()); Pkg.build("PLSRegressor"); Pkg.test("PLSRegressor"; coverage=true)'
after_success:
# push coverage results to Coveralls
- - julia -e 'cd(Pkg.dir("PLS")); Pkg.add("Coverage"); using Coverage; Coveralls.submit(Coveralls.process_folder())'
+ - julia -e 'cd(Pkg.dir("PLSRegressor")); Pkg.add("Coverage"); using Coverage; Coveralls.submit(Coveralls.process_folder())'
# push coverage results to Codecov
- - julia -e 'cd(Pkg.dir("PLS")); Pkg.add("Coverage"); using Coverage; Codecov.submit(Codecov.process_folder())'
+ - julia -e 'cd(Pkg.dir("PLSRegressor")); Pkg.add("Coverage"); using Coverage; Codecov.submit(Codecov.process_folder())'
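
For reference, the CI steps above correspond to this local workflow in a Julia 0.6 REPL (a sketch; the repository URL is assumed from the badge links in the README diff below):

    Pkg.clone("https://github.com/lalvim/PLSRegressor.jl")  # fetch the renamed package
    Pkg.build("PLSRegressor")                               # build it, as .travis.yml does
    Pkg.test("PLSRegressor"; coverage=true)                 # run the test suite with coverage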
2 changes: 1 addition & 1 deletion LICENSE.md
@@ -1,4 +1,4 @@
- The PLS.jl package is licensed under the MIT "Expat" License:
+ The PLSRegressor.jl package is licensed under the MIT "Expat" License:

> Copyright (c) 2017: Leandro Alvim.
>
50 changes: 25 additions & 25 deletions README.md
@@ -1,4 +1,4 @@
- PLS.jl
+ PLSRegressor.jl
======

A Partial Least Squares Regressor package. Contains PLS1, PLS2 and Kernel PLS2 NIPALS algorithms.
@@ -9,41 +9,41 @@ Can be used mainly for regression. However, for classification task, binarizing
|:-------------------------------:|:-----------------------------------------:|
| [![][pkg-0.6-img]][pkg-0.6-url] | [![][travis-img]][travis-url] [![][codecov-img]][codecov-url] |

- [travis-img]: https://travis-ci.org/lalvim/PLS.jl.svg?branch=master
- [travis-url]: https://travis-ci.org/lalvim/PLS.jl
+ [travis-img]: https://travis-ci.org/lalvim/PLSRegressor.jl.svg?branch=master
+ [travis-url]: https://travis-ci.org/lalvim/PLSRegressor.jl

- [codecov-img]: http://codecov.io/github/lalvim/PLS.jl/coverage.svg?branch=master
- [codecov-url]: http://codecov.io/github/lalvim/PLS.jl?branch=master
+ [codecov-img]: http://codecov.io/github/lalvim/PLSRegressor.jl/coverage.svg?branch=master
+ [codecov-url]: http://codecov.io/github/lalvim/PLSRegressor.jl?branch=master

- [issues-url]: https://github.com/lalvim/PLS.jl/issues
+ [issues-url]: https://github.com/lalvim/PLSRegressor.jl/issues

- [pkg-0.6-img]: http://pkg.julialang.org/badges/PLS_0.6.svg
- [pkg-0.6-url]: http://pkg.julialang.org/?pkg=PLS&ver=0.6
- [pkg-0.7-img]: http://pkg.julialang.org/badges/PLS_0.7.svg
- [pkg-0.7-url]: http://pkg.julialang.org/?pkg=PLS&ver=0.7
+ [pkg-0.6-img]: http://pkg.julialang.org/badges/PLSRegressor_0.6.svg
+ [pkg-0.6-url]: http://pkg.julialang.org/?pkg=PLSRegressor&ver=0.6
+ [pkg-0.7-img]: http://pkg.julialang.org/badges/PLSRegressor_0.7.svg
+ [pkg-0.7-url]: http://pkg.julialang.org/?pkg=PLSRegressor&ver=0.7

Install
=======

Pkg.add("PLS")
Pkg.add("PLSRegressor")

Using
=====

- using PLS
+ using PLSRegressor

Examples
========

- using PLS
+ using PLSRegressor

# learning a single target
X_train = [1 2; 2 4; 4 6.0]
Y_train = [4; 6; 8.0]
X_test = [6 8; 8 10; 10 12.0]

- model = PLS.fit(X_train,Y_train,nfactors=2)
- Y_test = PLS.predict(model,X_test)
+ model  = PLSRegressor.fit(X_train,Y_train,nfactors=2)
+ Y_pred = PLSRegressor.predict(model,X_test)
+ Y_test = [10; 12; 14.0]  # assumed ground-truth targets for X_test, so the MAE below is computable

print("[PLS1] mae error : $(mean(abs.(Y_test .- Y_pred)))")

@@ -53,23 +53,23 @@ Examples
Y_train = [2 4;4 6;6 8.0]
X_test = [6 8; 8 10; 10 12.0]

- model = PLS.fit(X_train,Y_train,nfactors=2)
- Y_test = PLS.predict(model,X_test)
+ model  = PLSRegressor.fit(X_train,Y_train,nfactors=2)
+ Y_pred = PLSRegressor.predict(model,X_test)
+ Y_test = [8 10; 10 12; 12 14.0]  # assumed ground-truth targets for X_test (illustrative values)

print("[PLS2] mae error : $(mean(abs.(Y_test .- Y_pred)))")

# nonlinear learning with multiple targets
- model = PLS.fit(X_train,Y_train,nfactors=2,kernel="rbf",width=0.1)
- Y_test = PLS.predict(model,X_test)
+ model  = PLSRegressor.fit(X_train,Y_train,nfactors=2,kernel="rbf",width=0.1)
+ Y_pred = PLSRegressor.predict(model,X_test)

print("[KPLS] mae error : $(mean(abs.(Y_test .- Y_pred)))")


# if you want to save your model
- PLS.save(model,filename="/tmp/pls_model.jld")
+ PLSRegressor.save(model,filename="/tmp/pls_model.jld")

# if you want to load back your model
- model = PLS.load(filename="/tmp/pls_model.jld")
+ model = PLSRegressor.load(filename="/tmp/pls_model.jld")


What is Implemented
@@ -87,7 +87,7 @@ What is Upcoming
Method Description
=======

- * PLS.fit - learns from input data and its related single target
+ * PLSRegressor.fit - learns from input data and its related single target
* X::Matrix{<:AbstractFloat} - A matrix whose columns are the features and whose rows are the samples
* Y::Vector{<:AbstractFloat} - A vector of float target values.
* nfactors::Int = 10 - The number of latent variables to explain the data.
@@ -96,8 +96,8 @@ Method Description
* kernel::AbstractString = "rbf" - Use a nonlinear kernel.
* width::AbstractFloat = 1.0 - The RBF kernel width (used when kernel="rbf").

- * PLS.transform - predicts using the learnt model extracted from fit.
- * model::PLS.Model - A PLS model learnt from fit.
+ * PLSRegressor.transform - predicts using the model learnt by fit.
+ * model::PLSRegressor.Model - A PLS model learnt from fit.
* X::Matrix{<:AbstractFloat} - A matrix whose columns are the features and whose rows are the samples.
* copydata::Bool = true - Whether to work on a copy of the input matrix (true) or on the matrix itself.

@@ -123,5 +123,5 @@ regression. Chemometrics and Intelligent Laboratory Systems, 18: 251–
License
=======

- The PLS.jl is free software: you can redistribute it and/or modify it under the terms of the MIT "Expat"
+ PLSRegressor.jl is free software: you can redistribute it and/or modify it under the terms of the MIT "Expat"
License. A copy of this license is provided in ``LICENSE.md``
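
The README's note that classification works by binarizing targets (visible in the hunk context above) can be made concrete. A minimal sketch, assuming the same fit/predict API as the examples; the data and the sign threshold are illustrative, not taken from the package:

    using PLSRegressor

    # Binary classification via regression on binarized targets:
    # labels are encoded as -1.0 / 1.0 and predictions thresholded at zero.
    X_train  = [1 2; 2 4; 4 6; 6 8.0]
    y_labels = [-1; -1; 1; 1.0]        # illustrative binarized targets

    model  = PLSRegressor.fit(X_train, y_labels, nfactors=2)
    scores = PLSRegressor.predict(model, X_train)
    y_pred = sign.(scores)             # map regression output back to {-1, 1}

    println("training accuracy: ", mean(y_pred .== y_labels))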
4 changes: 2 additions & 2 deletions appveyor.yml
@@ -41,7 +41,7 @@ build_script:
# Need to convert from shallow to complete for Pkg.clone to work
- IF EXIST .git\shallow (git fetch --unshallow)
- C:\projects\julia\bin\julia -e "versioninfo();
- Pkg.clone(pwd(), \"PLS\"); Pkg.build(\"PLS\")"
+ Pkg.clone(pwd(), \"PLSRegressor\"); Pkg.build(\"PLSRegressor\")"

test_script:
- - C:\projects\julia\bin\julia -e "Pkg.test(\"PLS\")"
+ - C:\projects\julia\bin\julia -e "Pkg.test(\"PLSRegressor\")"
6 changes: 3 additions & 3 deletions experiments/curve.jl
@@ -4,7 +4,7 @@

### Example extracted from a python KPLS implementation: https://github.com/jhumphry/regressions/blob/master/examples/kpls_example.py

- using PLS
+ using PLSRegressor
using Gadfly

srand(1)
@@ -25,8 +25,8 @@ global best_g = 10
for g in [1,2],
w in linspace(0.01,3,10)
print(".")
- model = PLS.fit(X,Y,centralize=true,nfactors=g,kernel="rbf",width=w)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,centralize=true,nfactors=g,kernel="rbf",width=w)
+ Y_pred = PLSRegressor.predict(model,X)
mae = mean(abs.(Y .- Y_pred))
if mae < min_mae
min_mae = mae
12 changes: 6 additions & 6 deletions experiments/housing.jl
@@ -1,7 +1,7 @@
using MultivariateStats
- using PLS
+ using PLSRegressor

- const defdir = PLS.dir("datasets")
+ const defdir = PLSRegressor.dir("datasets")

function gethousingdata(dir, filename)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data"
@@ -38,8 +38,8 @@ end

(xtrn, ytrn, xtst, ytst) = loaddata()

- model = PLS.fit(xtrn, ytrn, nfactors = 3)
- pred = PLS.predict(model, xtst)
+ model = PLSRegressor.fit(xtrn, ytrn, nfactors = 3)
+ pred = PLSRegressor.predict(model, xtst)


println("[PLS] mae error :", mean(abs.(ytst .- pred)))
@@ -51,5 +51,5 @@ yp = xtst * a + b
println("[LLS] mae error :",mean(abs.(ytst .- yp)))

### if you want to save or load model use this
- #PLS.save(model,filename="/tmp/pls_model.jld",modelname="pls_model")
- #model = PLS.load(filename="/tmp/pls_model.jld",modelname="pls_model")
+ #PLSRegressor.save(model,filename="/tmp/pls_model.jld",modelname="pls_model")
+ #model = PLSRegressor.load(filename="/tmp/pls_model.jld",modelname="pls_model")
2 changes: 1 addition & 1 deletion src/PLS.jl → src/PLSRegressor.jl
@@ -1,5 +1,5 @@
# Partial Least Squares (PLS1 and PLS2 NIPALS version)
- module PLS
+ module PLSRegressor

using JLD

2 changes: 1 addition & 1 deletion src/method.jl
@@ -57,7 +57,7 @@ end


"""
- transform(model::PLS.Model; X::Matrix{:<AbstractFloat}; copydata::Bool=true)
+ transform(model::PLSRegressor.Model, X::Matrix{<:AbstractFloat}; copydata::Bool=true)
A Partial Least Squares predictor.
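
The renamed docstring above hints at the call shape of transform. A hedged usage sketch (assuming, per the README, that transform predicts from a fitted model and that copydata controls whether the input matrix is copied; the data is illustrative):

    using PLSRegressor

    X_train = [1 2; 2 4; 4 6.0]
    Y_train = [4; 6; 8.0]
    X_test  = [6 8; 8 10; 10 12.0]

    model  = PLSRegressor.fit(X_train, Y_train, nfactors=2)
    # copydata=true (the default) works on a copy of X rather than mutating it
    Y_pred = PLSRegressor.transform(model, X_test; copydata=true)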
40 changes: 20 additions & 20 deletions test/kpls_test.jl
@@ -13,8 +13,8 @@
z_noisy = z_pure + noise
X = collect(x_values)
Y = z_noisy #z_pure
- model = PLS.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
+ Y_pred = PLSRegressor.predict(model,X)
@test mean(abs.(Y .- Y_pred)) < 1e-2

end
@@ -24,14 +24,14 @@

X = [1 2; 2 4; 4.0 6]
Y = [-2; -4; -6.0]
- model = PLS.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
+ Y_pred = PLSRegressor.predict(model,X)
@test mean(abs.(Y .- Y_pred)) < 1e-6

X = [1 2; 2 4; 4.0 6]
Y = [2; 4; 6.0]
- model = PLS.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
+ Y_pred = PLSRegressor.predict(model,X)
@test mean(abs.(Y .- Y_pred)) < 1e-6

end
@@ -41,29 +41,29 @@

X = [1; 2; 3.0]
Y = [1 1; 2 2; 3 3.0]
- model = PLS.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
+ Y_pred = PLSRegressor.predict(model,X)
@test mean(abs.(Y .- Y_pred)) < 1e-6

X = [1; 2; 3.0]
Y = [1 -1; 2 -2; 3 -3.0]
- model = PLS.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
+ Y_pred = PLSRegressor.predict(model,X)
@test mean(abs.(Y .- Y_pred)) < 1e-6

@testset "Linear Prediction Tests " begin


X = [1 2; 2 4; 4 6.0]
Y = [4 2;6 4;8 6.0]
- model = PLS.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
+ Y_pred = PLSRegressor.predict(model,X)
@test mean(abs.(Y .- Y_pred)) < 1e-6

X = [1 -2; 2 -4; 4 -6.0]
Y = [-4 -2;-6 -4;-8 -6.0]
- model = PLS.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
- Y_pred = PLS.predict(model,X)
+ model = PLSRegressor.fit(X,Y,nfactors=1,kernel="rbf",width=0.01)
+ Y_pred = PLSRegressor.predict(model,X)
@test mean(abs.(Y .- Y_pred)) < 1e-6


@@ -82,14 +82,14 @@ end;
Xtr = [1 -2; 2 -4; 4.0 -6]
Ytr = [-2; -4; -6.0]
Xt = [6 -8; 8 -10; 10.0 -12]
- model1 = PLS.fit(Xtr,Ytr,nfactors=1,kernel="rbf",width=0.01)
- pred1 = PLS.predict(model1,Xt)
+ model1 = PLSRegressor.fit(Xtr,Ytr,nfactors=1,kernel="rbf",width=0.01)
+ pred1 = PLSRegressor.predict(model1,Xt)

- PLS.save(model1)
- model2 = PLS.load()
+ PLSRegressor.save(model1)
+ model2 = PLSRegressor.load()

- pred2 = PLS.predict(model2,Xt)
- rm(PLS.MODEL_FILENAME)
+ pred2 = PLSRegressor.predict(model2,Xt)
+ rm(PLSRegressor.MODEL_FILENAME)
@test all(pred1 .== pred2)

