- Article info
- Author: kaiwu
R Notebook
1. References
1.1 Modern optimization with R
Cortez, P. (2021). Modern Optimization with R. Springer.
1.2 Modeling and Solving Linear Programming with R
1.3 Linear Programming with R: Exploring the “lpSolve” R package
Roberto Salazar, Nov 17, 2019
1.4 lpSolveAPI Package Users Guide by Kjell Konis
2. Preparation
2.1 Import libraries
# Import lpSolve package
library(lpSolve)
library(XLConnect)
2.2 The problem: staff shift scheduling
2.3 Connect to the Excel file
mybook <- loadWorkbook("D:/kedu/teaching_datasets/Excel_model/sm_solver.xls")
rsheets <- getSheets(mybook)
rsheets
[1] "table-chair" "beef" "transport" "staff"
3. linear programming
3.1 Set the coefficients of the objective function
Since the objective is the sum of all seven variables, the coefficient vector is 1,1,1,1,1,1,1. The matrix form matters here; it is worth studying how to generate repeated identical elements.
#f.obj <- c(4, 2)
f.obj <- as.matrix(rep(1, times = 7))
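As the note above suggests, base R offers several ways to generate repeated identical elements; a quick sketch:

```r
# Three equivalent ways to build a 7x1 column of ones in base R
v1 <- as.matrix(rep(1, times = 7))   # repeat the value 7 times
v2 <- as.matrix(rep(1, each = 7))    # 'each' gives the same result for a scalar
v3 <- matrix(1, nrow = 7, ncol = 1)  # fill a 7x1 matrix directly
identical(v1, v2)  # TRUE
identical(v1, v3)  # TRUE
```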
3.2 Set matrix corresponding to coefficients of constraints by rows
Do not consider the non-negative constraint; it is automatically assumed
This is the staff availability matrix. Note: the worksheet range usually includes a header row; with the default header = TRUE you would set startRow = 7, so with header = FALSE the data starts at row 8.
#f.con <- matrix(c(5, 15,20, 5), nrow = 2, byrow = TRUE)
f.con <- as.matrix(readWorksheet(mybook, sheet = "staff", startRow = 8, endRow = 14,
                                 startCol = 17, endCol = 23, header = FALSE))
3.3 Set inequality signs
Set the inequality signs; a different sign can be used for each row.
f.dir <- rep(">=", times = 7)
3.4 Set right hand side coefficients
The right-hand-side values of the constraints.
#f.rhs <- c(50,40)
f.rhs <- as.matrix(readWorksheet(mybook, sheet = "staff", startRow = 8, endRow = 14,
                                 startCol = 27, endCol = 27, header = FALSE))
3.5 Restrict variables to integer values with int.vec; for example, f.intvec <- c(1,2) means x1 and x2 must take integer values
If all decision variables are integers, all.int = TRUE is enough.
#f.intvec <- c(1,2)
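As a hedged illustration (a made-up two-variable LP, not the scheduling data), restricting only x1 to integer values via int.vec might look like this:

```r
library(lpSolve)

# Hypothetical LP: maximise 3*x1 + 2*x2 subject to two <= constraints,
# with only x1 forced to be an integer via int.vec
res <- lp(direction    = "max",
          objective.in = c(3, 2),
          const.mat    = matrix(c(2, 1,
                                  1, 3), nrow = 2, byrow = TRUE),
          const.dir    = c("<=", "<="),
          const.rhs    = c(10, 15),
          int.vec      = 1)        # indices of the integer variables
res$status                         # 0 means an optimal solution was found
res$solution[1] %% 1 == 0          # TRUE: x1 is integral
```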
3.6 Final value (z)
Compute the result.
#report_lp<-lp("min", f.obj, f.con, f.dir, f.rhs,int.vec = f.intvec)
# Restrict only some of the variables to integers: int.vec = f.intvec
report_lp<-lp("min", f.obj, f.con, f.dir, f.rhs, all.int = TRUE)
report_lp
Success: the objective function is 9
report_lp$objval
[1] 9
# output the final value
writeWorksheet(mybook,report_lp$objval,sheet = "staff", startRow =16,startCol = 25,header = FALSE)
saveWorkbook(mybook)
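For readers without sm_solver.xls, the whole model can be sketched with hard-coded data. The availability matrix and daily requirements below are assumptions in the spirit of the classic 7-day scheduling problem (each employee works five consecutive days), not the values from the workbook:

```r
library(lpSolve)

# Assumed availability matrix: column j is a shift starting on day j
# that covers that day plus the next four (wrapping around the week)
f.con <- matrix(0, nrow = 7, ncol = 7)
for (j in 1:7) {
  days <- ((j - 1 + 0:4) %% 7) + 1   # the five days shift j covers
  f.con[days, j] <- 1
}

f.obj <- rep(1, times = 7)           # minimise the total number of staff
f.dir <- rep(">=", times = 7)
f.rhs <- c(3, 3, 4, 4, 5, 6, 5)      # assumed daily staffing requirements

res <- lp("min", f.obj, f.con, f.dir, f.rhs, all.int = TRUE)
res$status     # 0 means an optimal solution was found
res$solution   # staff assigned to each starting day
```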
3.7 Variables final values
report_solution<-report_lp$solution
report_solution
[1] 0 1 0 0 1 3 4
## Save the variable values
## The solution is a one-column matrix, so transpose it with t() to write it as a single row in Excel
writeWorksheet(mybook,t(report_solution),sheet = "staff", startRow =16,startCol = 17,header = FALSE)
saveWorkbook(mybook)
3.8 Sensitivities
Sensitivity analysis.
report_lp<-lp("min", f.obj, f.con, f.dir, f.rhs, all.int = TRUE,compute.sens=TRUE)
report_lp$sens.coef.from
[1] 1e+00 -1e+30 0e+00 0e+00 0e+00 0e+00 0e+00
report_lp$sens.coef.to
[1] 1.000000e+30 1.000000e+30 1.333333e+00 1.000000e+00 1.333333e+00
[6] 1.333333e+00 1.333333e+00
3.9 Dual Values (first dual of the constraints and then dual of the variables)
Duals of the constraints and variables are mixed
report_lp$duals
[1] 0 1 0 0 0 0 0 0 1 0 0 0 0 0
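With m constraints and n variables, the first m entries of $duals are the constraint duals (shadow prices) and the remaining n are the variable duals (reduced costs). A small self-contained LP with made-up coefficients shows how to split them:

```r
library(lpSolve)

# Toy LP with m = 2 constraints and n = 2 variables, solved with
# compute.sens = TRUE so that $duals is populated
res <- lp("min", c(1, 1),
          matrix(c(1, 2,
                   3, 1), nrow = 2, byrow = TRUE),
          c(">=", ">="), c(4, 6), compute.sens = TRUE)

m <- 2; n <- 2
res$duals[1:m]          # shadow prices of the two constraints
res$duals[m + (1:n)]    # reduced costs of the two variables
```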
3.10 Duals lower and upper limits
report_lp$duals.from
[1] -1e+30 -1e+30 7e+00 6e+00 6e+00 5e+00 4e+00 -1e+30 -1e+30 -1e+30
[11] -1e+30 -1e+30 -1e+30 -1e+30
report_lp$duals.to
[1] 1.000000e+30 1.000000e+30 8.333333e+00 8.666667e+00 8.666667e+00
[6] 9.000000e+00 9.333333e+00 1.250000e+00 2.500000e-01 1.000000e+30
[11] 1.000000e+30 1.000000e+30 1.000000e+30 1.000000e+30
- Article info
- Author: kaiwu
https://www.ted.com/talks/juan_enriquez_how_technology_changes_our_sense_of_right_and_wrong
- Article info
- Author: kaiwu
survey package
http://r-survey.r-forge.r-project.org/survey/
https://www.stat.auckland.ac.nz/people/tlum005
http://faculty.washington.edu/tlumley/cv.html
complex_survey_article.pdf
http://r-survey.r-forge.r-project.org/svybook/
Package reference manual: http://cran.fhcrc.org/web/packages/survey/survey.pdf
The Comprehensive R Archive Network
https://faculty.washington.edu/tlumley/old-survey/survey-wss.pdf
https://campus.datacamp.com/courses/analyzing-survey-data-in-r
- Article info
- Author: kaiwu
Free Datasets
https://r-dir.com/reference/datasets.html
World Bank Data - Literally hundreds of datasets spanning many decades, sortable by topic or country. Data is downloadable in Excel or XML formats, or you can make API calls. This is an outstanding resource.
Gapminder - Hundreds of datasets on world health, economics, population, etc. All of it is viewable online within Google Docs, and downloadable as spreadsheets.
The Data Hub - Hosted by CKAN. Most of these datasets come from the government.
Datamob - List of public datasets.
Numbrary - Lists of datasets.
Kaggle - Kaggle is a site that hosts data mining competitions. Each competition provides a data set that's free for download.
SNAP - Stanford's Large Network Dataset Collection. This list has several datasets related to social networking. Lots of fun in here!
KONECT - The Koblenz Network Collection. Several datasets related to social networking & Wikipedia.
Million Song Dataset - This is a collection of audio features and metadata for a million contemporary popular music tracks.
Energy Information Administration - This site offers a number of datasets on energy production, consumption, sources, etc.
GeoDa Center - This is a collection of geospatial datasets offered by Arizona State University's Center for Geospatial Analysis & Computation.
Reddit Datasets - This last one isn't a dataset itself, but rather a social news site devoted to datasets. It's updated regularly with news about newly available datasets.
Quandl - This is a web-based front end to a number of public data sets. What's nice about this website is that it allows for the combination of data from a number of sources, and can export the data in a number of formats.
1,001 Datasets - This is a list of lists of datasets. There's not much organization here, but there really are a LOT of datasets. Dive in and have fun.
Yahoo! Webscope - A reference library of interesting and scientifically useful datasets for non-commercial use by academics and other scientists.
Time Series Data Library - Curated by Professor Rob Hyndman of Monash University in Australia, this is a collection of over 500 datasets containing time-series data, organized by category.
Awesome Public Datasets - Curated list of hundreds of public datasets, organized by topic.
Common Crawl - Massive dataset of billions of pages scraped from the web. The data itself is on Amazon Public Datasets, so it's easy to load it into an EC2 instance there. The dataset is updated with a new scrape about once per month.
Amazon Public Datasets - Collection of datasets that are ready to be loaded into an EC2 instance.