---
title: "Washington D.C. Property Price Analysis and Prediction"
output:
html_document:
df_print: paged
---
## Overview
House prices are considered difficult to predict because so many variables drive the market. However, we can estimate a price from a handful of major factors such as size, location, and quality. Although a price is really the product of numerous interacting factors, these quantitative and qualitative variables are useful for narrowing the price down to a range, which is incomparably better than a bare mean or median. In the course of this project, we first analyze the observations to gain insight for modeling and to filter problematic data such as influential points. Once a linear model is fitted, we evaluate it with techniques like residual analysis and possibly remove insignificant features. Lastly, we build a predictive model and measure its performance with RMSE.
> * [Preparation](#preparation)
> - Dataset
> - Rows of Interest
> - Dependent Variable
> - Outliers
> - Drop Variables
> * [Data Analysis](#data-analysis)
> - Correlation Matrix
> - Transformation
> - Box Plot
> - Facet Plot
> * [Diagnostic Analysis](#diagnostic-analysis)
> - Plots and Influential Points
> - Remove Influential Points
> * [Best Model](#best-model)
> - Adjusted R-squared
> - AIC
> * [Prediction](#prediction)
> - Root Mean Square Error
> * [Conclusion](#conclusion)
> * [Reference](#reference)
------
\
## Preparation
The data set used for this project is [D.C. Residential Properties](https://www.kaggle.com/christophercorrea/dc-residential-properties), which is refined from the original source on [D.C. Open Data](http://opendata.dc.gov/); details of the columns can be found on the [Metadata](https://www.arcgis.com/sharing/rest/content/items/c5fb3fbe4c694a59a6eef7bf5f8bc49a/info/metadata/metadata.xml?format=default&output=html) page. Before analyzing the data, each column needs to be examined to determine whether to include it in a model, and to find outliers that could have a negative influence on modeling. The process starts with loading the required libraries.
```{r warning=FALSE, message=FALSE}
library(dplyr) # df manipulation
library(ggplot2) # graphical plot
library(GGally) # ggplot extension
library(grid) # grid plot
library(gridExtra) # grid extension
library(knitr) # pretty print
library(kableExtra) # print styling
library(stringr) # handle strings
library(ggmap) # geographical map
library(corrplot) # correlation matrix
library(car) # VIF
library(caret) # data set split
library(lmtest) # non-constant variance test
library(MASS) # AIC
library(leaps) # model selection
library(faraway) # max adjusted r
library(DAAG) # cross-validation
select <- dplyr::select
```
\
### __Dataset Overview__
The next step is reading the data set and examining its structure, focusing on getting an overall idea of what the columns contain. Columns with redundant information, ID columns, and empty columns can be identified and dropped. Data types are also checked so that some columns can be converted to a more appropriate type.
```{r}
# read the dataset
house.price <- read.csv('DC_Properties.csv', na.strings = c("", "NA"))
```
```{r}
# show column structures
str(house.price)
```
```{r}
# drop ID, single value, redundant columns
house.price <- select(house.price, -c("X.1", "CITY", "STATE", "X", "Y"))
# ZIPCODE, USECODE are more of categorical data
house.price$ZIPCODE <- as.factor(house.price$ZIPCODE)
house.price$USECODE <- as.factor(house.price$USECODE)
house.price$CENSUS_TRACT <- as.factor(house.price$CENSUS_TRACT)
```
```{r}
# show first a few rows
kable(head(house.price, n = 10)) %>%
kable_styling() %>%
scroll_box(width = "100%", height = "500px")
```
\
### __Preprocessing: *Rows of Interest*__
There are two types of properties in the data set, and their patterns are distinctive enough that they need to be studied separately. This study focuses on the 'Residential' type, which makes up the majority of observations. One factor that cannot be controlled is that house values fluctuate over time. Prices could be corrected to today's values, but that would be challenging and would likely only complicate the analysis. For that reason, the cases are limited to the most recent two years of data, with the goal of analyzing the *latest* and *general* trend.
```{r}
# residential price
print(summary(house.price[house.price$SOURCE == "Residential", 'PRICE']))
# condominium price
print(summary(house.price[house.price$SOURCE == "Condominium", 'PRICE']))
```
```{r}
# reduce it to 'Residential' type
house.price <- house.price[house.price$SOURCE == "Residential", ]
# cases density histogram by sale date
house.price$SALEDATE <- substr(house.price$SALEDATE, 0, 10)
house.price$SALEDATE <- as.Date(house.price$SALEDATE)
sale.year <- as.numeric(substr(house.price[, "SALEDATE"], 0, 4))
sale.year <- sale.year[!is.na(sale.year)]
hist(sale.year,
breaks = c(-Inf, seq(1990, 2018, 1)),
xlim = c(1990, 2018),
main = "Sales by Year",
xlab = "Year",
     border = 'blue', col = adjustcolor('green', alpha.f = 0.5), # base hist() has no alpha argument
prob = T)
axis(side = 1, at = c(2015, 2018))
lines(density(sale.year), col = 'red')
```
There are more observations closer to the present, which is good news since the survey seeks the recent trend. The figure also shows an irregular pattern: the housing market is active in one season and slows down in the next. Old data will be of little use except for finding historical patterns. Data from 2016 onward still covers about 20% of all observations.
```{r}
quantile(sale.year, probs = seq(0, 1, 0.1))
```
```{r}
# drop sales made before 2016.01.01.
house.price <- subset(house.price, SALEDATE > as.Date("2015-12-31"))
```
\
### __Dependent Variable: *Price*__
It would be reasonable to investigate house prices as these are the targets to be analyzed and predicted. Looking into the dependent variable would help with having an idea for independent variables too since the goal is to define their relationships. The following histogram gives a basic idea how prices are distributed.
```{r warning=FALSE}
# drop rows with price as NA
house.price <- house.price[complete.cases(house.price[ , "PRICE"]), ]
# histogram independent variable "Price"
ggplot(house.price, aes(x = PRICE)) +
geom_histogram(binwidth = 20000,
fill = "blue",
alpha = .25) +
labs(title = "Washington D.C House Price Histogram") +
labs(x = "Price", y = "Count") +
theme(plot.title = element_text(hjust = 0.5)) +
xlim(c(0, 2000000)) +
geom_vline(aes(xintercept = mean(PRICE), color = "mean"),
linetype = "dashed", size = 1) +
geom_vline(aes(xintercept = median(PRICE), color = "median"),
linetype = "dashed", size = 1) +
scale_color_manual(name = "", values = c(mean = "pink", median = "green"))
```
From both the shape and the statistics, house prices are skewed: most houses fall in a certain range, with a few very expensive ones. Transformation should be considered carefully, however, with more evidence than just a histogram. The statistics below confirm that prices are skewed by extremely large values (although the minimum of \$10 does not make sense either). Judging from the median price, buying a house in Washington D.C. would be very difficult.
```{r}
summary(house.price$PRICE)
```
\
### __On the Map__
One of the best ways to get a sense of data is visualization. Since the data are fundamentally geographic, it is a good attempt to visualize prices on a map of Washington D.C. On the map, darker dots indicate more expensive houses.
```{r include=FALSE}
sbbox <- make_bbox(lon = house.price$LONGITUDE, lat = house.price$LATITUDE, f = 0.1)
dc_map <- get_stamenmap(sbbox, zoom = 12, maptype = "toner-lite")
price_group <- c(seq(0, 1000000, by = 200000), Inf)
price_label <- c("<200k", "<400k", "<600k", "<800k", "<1mil", ">1mil")
```
```{r warning=FALSE}
ggmap(dc_map) +
geom_point(data = house.price,
aes(LONGITUDE, LATITUDE,
color = cut(PRICE, price_group)),
size = 0.3, alpha = 0.3) +
labs(title = "Washington D.C House Price Heat Map") +
theme(axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
axis.text.y = element_blank(),
axis.ticks.y = element_blank()) +
scale_color_brewer(palette = "Oranges",
name = "Price Range",
labels = price_label)
```
The areas with no dots are mostly hills or other terrain where houses cannot be built. Certainly, some areas are expensive and others cheaper. Washington D.C. consists of [8 wards](https://www.google.com/imgres?imgurl=https://planning.dc.gov/sites/default/files/dc/sites/op/page_content/images/wards_small.png&imgrefurl=https://planning.dc.gov/page/neighborhood-planning-01&h=400&w=328&tbnid=vgqfxqYTZrngZM:&q=dc+wards&tbnh=160&tbnw=130&usg=AI4_-kQ_Fjd1w9f6GPhDXm-kpkWbaibCEw&vet=12ahUKEwjZgMLfjInfAhVGs6wKHYdICWUQ9QEwAHoECAoQBg..i&docid=5swXqI-O8oONaM&sa=X&ved=2ahUKEwjZgMLfjInfAhVGs6wKHYdICWUQ9QEwAHoECAoQBg), of which wards 2 and 3 are where the dots are darker, while the rest are lighter. Ward 6 in the middle is harder to read because its left side is dark but its right side is not. Ward is actually one of the columns, so prices will be studied ward by ward.
\
### __Handle Outliers__
As the histogram showed, there is a tail of very expensive houses. These would likely interfere with modeling, and thus must be handled before moving forward. Outliers here are defined as observations more than 1.5×IQR above the third quartile or below the first quartile. The lower cut-off, however, is negative, leaving only the expensive houses as outliers.
$$Interquartile\ Range\ (IQR)=Q_3-Q_1$$
```{r}
lowerq <- quantile(house.price$PRICE)[2]
upperq <- quantile(house.price$PRICE)[4]
iqr <- upperq - lowerq
lower.threshold <- lowerq - (iqr*1.5)
upper.threshold <- upperq + (iqr*1.5)
cat("Lower Outlier Threshold:", lower.threshold,
"\nUpper Outlier Threshold:", upper.threshold)
```
```{r}
house.price <- house.price[house.price$PRICE < upper.threshold, ]
```
\
### __Preprocessing: *Drop Variables*__
As explained, some variables carry no meaningful or unique information. Even a variable with its own information may still not be useful if another variable can replace it or carries better information. The data set has many geographical columns that come in different forms but carry overlapping information. Before taking care of that, *NA* values should be handled.
```{r}
# no longer needed
house.price <- select(house.price, -c("SOURCE", "SALEDATE"))
```
```{r}
# count NA for each variable
na_count <- sapply(house.price, function(y) {
sum(length(which(is.na(y))))
})
# all the variables with NA's
na_count <- na_count[na_count != 0]
# variable and the number of NA's
print(na_count)
```
"LIVING GBA" and "CMPLX NUM" are completely empty, suggesting they are condominium-specific columns, and condominiums were excluded at the beginning; these two can simply be dropped. "YR RMDL" is about one-third empty, presumably because not all houses have been remodeled. Since dropping is not always best, the solution here is to convert it into a *yes/no* indicator of whether the house was remodeled. The rest are mostly geographical columns. While neighborhood is accepted as a good indicator of house prices, there are 57 distinct neighborhoods, which makes too many levels. Since the final model will be linear, having too many categories risks overfitting and losing the general trend. Most importantly, ward appears very useful, as confirmed on the map.
```{r}
# drop empty variables
house.price <- select(house.price, -c("CMPLX_NUM", "LIVING_GBA"))
# transform YR_RMDL by encoding into Yes or No
house.price$RMDL <- ifelse(is.na(house.price$YR_RMDL), "N", "Y")
house.price$RMDL <- as.factor(house.price$RMDL)
# drop geographical variables except for WARD
house.price <- select(house.price, -c("ZIPCODE", "USECODE", "SQUARE", "NATIONALGRID",
"ASSESSMENT_NBHD", "ASSESSMENT_SUBNBHD",
"CENSUS_BLOCK", "CENSUS_TRACT",
"QUADRANT", "FULLADDRESS",
"LONGITUDE", "LATITUDE"))
```
```{r}
# fine tuning
print(levels(house.price$GIS_LAST_MOD_DTTM))
house.price <- select(house.price, -c("GIS_LAST_MOD_DTTM", "YR_RMDL"))
# drop small number of NA's (AYB, STORIES...)
house.price <- house.price[complete.cases(house.price), ]
```
------
\
## __Data Analysis__
This part concentrates on relationships among variables. Separate analyses will be conducted by data type, numerical and categorical, and then another for both together. For numerical types, a correlation matrix will be drawn, both to find relationships with the dependent variable and to catch multicollinearity patterns. For categorical types, box plots will be used to see whether each category can divide house prices and, if so, which values are useful. Box plots can also help decide whether to reduce the number of levels in a category.
### __Correlation Matrix__
```{r}
numeric_cols <- names(select_if(house.price, is.numeric))
print(numeric_cols)
```
```{r}
corr <- cor(house.price[, numeric_cols])
corrplot.mixed(corr, number.cex = .75, tl.pos = "lt")
```
"SALE NUM" has no relationship with any other variable, so it can safely be dropped. "NUM UNITS" and "KITCHENS" are related to some variables but almost not at all to "PRICE". Our target "PRICE" has a decent relationship with the rest, especially with "GBA" and the room-related variables. However, there are multicollinearity concerns, such as between "ROOMS" and "BEDRM". These should be handled first, and then another matrix drawn.
```{r}
# drop unrelated variables
house.price <- subset(house.price, select = -c(SALE_NUM, BLDG_NUM, NUM_UNITS,
KITCHENS, LANDAREA))
# test for multicollinearity of numerical variables
VIF.model <- lm(PRICE ~ BATHRM + HF_BATHRM + ROOMS + BEDRM +
AYB + EYB + STORIES + GBA + FIREPLACES,
data = house.price)
print(vif(VIF.model))
```
The variance inflation factor (VIF) measures whether variables are causing multicollinearity problems. As a rule of thumb, a VIF between 5 and 10 is the limit a variable should not exceed. Since no variable crosses it, we can keep them all and draw another correlation matrix.
$$VIF_i=\frac{1}{1-R_i^2}$$
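As a sanity check, the formula above can be verified by hand for a single predictor, say `GBA` (a sketch; any predictor in `VIF.model` would do): regress it on the remaining predictors and plug its R-squared into the formula.

```{r}
# manual VIF for GBA: regress it on the other predictors,
# then apply VIF = 1 / (1 - R^2); should match vif(VIF.model)["GBA"]
gba.lm <- lm(GBA ~ BATHRM + HF_BATHRM + ROOMS + BEDRM +
               AYB + EYB + STORIES + FIREPLACES,
             data = house.price)
r2 <- summary(gba.lm)$r.squared
cat("Manual VIF for GBA:", 1 / (1 - r2))
```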
```{r}
# final correlation table
numeric_cols <- names(select_if(house.price, is.numeric))
corr <- cor(house.price[, numeric_cols])
corrplot.mixed(corr, number.cex = .75, tl.pos = "lt")
```
Even though there are still moderate multicollinearity issues, they are below the serious level and will be handled again during modeling. Again, house prices show high correlations with the area-related variables, which comes as no surprise. Quality-related factors such as year built or fireplaces affect house prices more mildly. Although there are larger numbers elsewhere in the matrix, "PRICE" correlates fairly uniformly with the variables, which cannot be said of the other variables among themselves.
\
### __Scatter Plot__
A scatter plot is the basic tool for finding a pattern between two numerical variables: a slope, up or down, means a relationship. When units differ greatly among variables, normalization could be considered, since putting them on one plot raw can mislead interpretation. A simpler alternative is to plot similar variables together instead of going through a complicated normalization process. Room-related variables fall in a similar range, so they are plotted together first; next, year-related variables and the rest are drawn on the same pane.
```{r}
# define a function for repeating uses
house.price_scatter <- function(var, x_start, x_lim) {
ggplot(house.price, aes_string(var, "PRICE", fill = var)) +
geom_point(color = "darkgreen", shape = 1, alpha = 0.1,
size = 0.2, position = "jitter") +
geom_smooth(span = 5, method = lm) +
ggtitle(toString(var)) + xlab("") +
theme(legend.position="none") +
scale_x_continuous(limits = c(x_start, x_lim)) +
scale_y_continuous(limits = c(0, 1750000),
breaks = seq(0, 1750000, 250000))
}
```
```{r warning=FALSE}
# room-related variables
BATHRM <- house.price_scatter("BATHRM", 0, 8)
HF_BATHRM <- house.price_scatter("HF_BATHRM", 0, 4)
BEDRM <- house.price_scatter("BEDRM", 0, 8)
ROOMS <- house.price_scatter("ROOMS", 0, 15)
grid.arrange(BATHRM, HF_BATHRM, BEDRM, ROOMS,
ncol = 2, nrow = 2)
```
There is a strong upward pattern for all of them; naturally, a house's price increases with the number of rooms. One surprising thing is that the number of bathrooms has a steeper slope than the number of bedrooms. Indeed, a 2-bedroom/2-bathroom house is often more expensive than a 3-bedroom/1-bathroom house. Another thing to note is that prices vary more widely as the number of rooms grows: the price range of 3-to-4-room houses is much narrower than that of 7-to-8-room houses. Since room variables will play a significant role, [heteroscedasticity](http://www.statsmakemecry.com/smmctheblog/confusing-stats-terms-explained-heteroscedasticity-heteroske.html) is expected, and it will be handled later.
```{r warning=FALSE, message=FALSE}
# year-related and quality variables
AYB <- house.price_scatter("AYB", 1930, 2020)
EYB <- house.price_scatter("EYB", 1930, 2020)
STORIES <- house.price_scatter("STORIES", 0, 10)
FIREPLACES <- house.price_scatter("FIREPLACES", 0, 5)
grid.arrange(AYB, EYB, STORIES, FIREPLACES,
ncol = 2, nrow = 2)
```
Year-related variables show only a rough pattern. "EYB" has some slope, driven by data after 2000; these must be heavily affected by contingent market trends rather than typical features. Although "STORIES" has a sharp slope, most houses have 2 to 3 stories with large variance. "FIREPLACES" shows a decent trend within the 0-to-1 range.
```{r warning=FALSE, message=FALSE}
# GBA showing growing variances
ggplot(house.price, aes(x = GBA, y = PRICE)) +
geom_point(color = "darkgreen",
shape = 1, alpha = 0.3, size = 0.5) +
geom_smooth() +
ggtitle("GBA") + xlab("") +
scale_x_continuous(limits = c(0, 6000),
breaks = seq(0, 6000, 1000)) +
scale_y_continuous(limits = c(0, 1750000),
breaks = seq(0, 1750000, 250000))
```
"GBA" reveals the heteroscedasticity issue very clearly: as a house gets bigger, its price can fall almost anywhere. One way to understand this is that 1-bedroom houses should sit in a small price range, because even a new house downtown is still just a 1-bedroom, while a 4-bedroom house could be a million-dollar property or an old, out-of-maintenance house at auction.
\
### __Transformation: *Box-Cox*__
Now that the variance issue has been found, it is time to fix it. As a final check, the Breusch-Pagan test can be run on a linear model. The test's null hypothesis is that the variance is constant across fitted values. As seen below, the results are sufficient to reject the null hypothesis and confirm the variance issue.
```{r}
# linear model with numerical variables
Numeric.lm <- lm(PRICE ~ BATHRM + HF_BATHRM + BEDRM + ROOMS +
AYB + EYB + STORIES + FIREPLACES + GBA,
data = house.price)
# constant variance tests
print(bptest(Numeric.lm))
print(ncvTest(Numeric.lm))
```
Box-Cox suggests how to transform the data. Its core concept is the lambda parameter, which is iterated over a certain range. A lambda near 0.5 indicates a square-root transformation for prices.
```{r}
# box-cox
bc <- boxCox(Numeric.lm)
lambda <- bc$x[which.max(bc$y)]
cat("Lambda for Transformation:", lambda)
```
```{r}
# transformation to sqrt
house.price$PRICE_TRANS <- sqrt(house.price$PRICE)
house.price <- select(house.price, -PRICE)
# price-sqrt histogram
ggplot(house.price, aes(x = PRICE_TRANS)) +
geom_histogram(aes(y = ..density..),
binwidth = 20, fill = "blue", alpha = 0.25) +
geom_density(fill = "pink", alpha = 0.25) +
labs(title = "Washington D.C House Price SQRT") +
theme(plot.title = element_text(hjust = 0.5)) +
geom_vline(aes(xintercept = mean(PRICE_TRANS), color = "mean"), size = 0.5) +
geom_vline(aes(xintercept = median(PRICE_TRANS), color = "median"),
linetype = "dashed", size = 0.5) +
scale_x_continuous(limits = c(0, 1500)) +
scale_color_manual(name = "", values = c(mean = "red", median = "blue")) +
stat_function(fun = dnorm, args = list(
mean = mean(house.price$PRICE_TRANS), sd = sd(house.price$PRICE_TRANS)),
linetype = "dashed")
```
After transformation, the distribution is better than the initial one. The dotted line is a normal curve for comparison. Although not a perfect match, at least the skewness is fixed, as the mean and median are aligned.
```{r}
# final correlations with PRICE_TRANS
X_num <- names(select_if(house.price, is.numeric))
X_num <- X_num[X_num != "PRICE_TRANS"]
cat("[Price SQRT vs. X Variables Correlation]\n")
kable(cor(house.price[, X_num], house.price$PRICE_TRANS)) %>%
kable_styling(bootstrap_options = c("striped", "hover"), full_width = F)
```
Although the correlations decreased slightly, it is more important to satisfy the assumptions of constant variance and normality, so the transformation is necessary.
------
\
### __Box Plot__
A box plot helps show whether a categorical variable can divide the dependent variable into groups. The data set has several categorical variables covering building materials, function, quality, and area. Some have too many distinct values, some have unbalanced values, and others have their own issues. The following are general remedies for these kinds of problems, some of which will be used in this project.
* No effect: Drop the category
* Too many values: Group them by similarity
* Unbalanced values: Keep the major value and merge all the minor values
* Specific order: Label values with numbers
* More of numerical type: Replace each value with a certain number like median
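The "unbalanced values" remedy above can be sketched as a small helper (a sketch only; `collapse_minor` is a hypothetical function, and the actual recoding later in this document uses `ifelse()` on named levels instead):

```{r}
# a minimal sketch: keep the n most frequent levels, merge the rest into "Other"
collapse_minor <- function(f, keep_n = 1) {
  keep <- names(sort(table(f), decreasing = TRUE))[seq_len(keep_n)]
  factor(ifelse(f %in% keep, as.character(f), "Other"))
}
```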
```{r}
categorical_cols <- names(select_if(house.price, is.factor))
print(categorical_cols)
```
```{r}
# number of unique values in each categorical variable
str(apply(house.price[, categorical_cols], 2, function(x) unique(x)))
```
```{r}
# fix '0' values in AC
cat("Number of '0's in AC:", pull(count(house.price[house.price$AC == '0', ])))
house.price[house.price$AC == '0', 'AC'] <- "N"
```
```{r}
# define a function for repeated uses
house.price_boxplot <- function(var, x.angle) {
ggplot(house.price, aes_string(var, "PRICE_TRANS", fill = var)) +
geom_boxplot(outlier.colour = "black",
outlier.shape = 16,
outlier.size = 1) +
geom_hline(aes(yintercept = median(PRICE_TRANS)),
color = 'gray', linetype = "dashed", size = 0.5) +
ggtitle(toString(var)) + xlab("") +
theme(legend.position="none",
axis.text.x = element_text(angle = x.angle, hjust = 1))
}
```
```{r}
# air conditioning, remodeled, qualified, heating, ward
lay <- rbind(c(1, 2, 3),
c(4, 4, 5))
grid.arrange(grobs = list(house.price_boxplot("AC", 0),
house.price_boxplot("RMDL", 0),
house.price_boxplot("QUALIFIED", 0),
house.price_boxplot("HEAT", 90),
house.price_boxplot("WARD", 90)),
heights = c(0.4, 0.6),
layout_matrix = lay)
```
"AC", "RMDL", and "QUALIFIED" each have two classes. Among them, "QUALIFIED" does a good job of separating expensive houses from cheap ones; naturally, qualified houses are valued higher, and the same goes for remodeled houses and those with air conditioning. "HEAT" is a bit confusing in that most of its values sit around the median price (dotted line). Although values such as "Air Exchng" or "Evp Cool" stand out, their counts do not seem large enough to make a contribution, so "HEAT" should be dropped. "WARD" looks very promising, as each ward occupies its own price range and location. One issue is the many outliers, and wards 7 and 8 are almost identical; nonetheless, it looks useful enough to work alongside the numerical variables.
```{r}
# drop HEAT
print(summary(house.price$HEAT))
house.price <- select(house.price, -HEAT)
```
```{r}
grid.arrange(house.price_boxplot("STYLE", 90),
house.price_boxplot("STRUCT", 90),
ncol = 2)
```
The values in "STYLE" are essentially story counts. Since the "STORIES" variable already exists, "STYLE" can be removed; moreover, almost all houses (9,557) are "2 Story", making it severely unbalanced. "STRUCT" looks able to flag cheaper houses through the values "Multi" and "Semi-Detached".
```{r}
# style
print(summary(house.price$STYLE))
house.price <- select(house.price, -STYLE)
```
```{r}
# struct
print(summary(house.price$STRUCT))
house.price$STRUCT <- ifelse(house.price$STRUCT %in%
c("Multi", "Semi-Detached", "Town Inside"),
"Cheap", "Expensive")
```
```{r}
# condition, grade
grid.arrange(house.price_boxplot("CNDTN", 90),
house.price_boxplot("GRADE", 90),
ncol = 2)
```
"CNDTN" and "GRADE" show a similar pattern: average or fair houses are cheaper. Those values fall below the median line and the rest clearly above it, so the levels can be grouped into two. This also balances the values in "CNDTN". The values in "GRADE", however, do not divide cleanly into two groups, as there are non-negligible discrepancies among similar values. This will be investigated further.
```{r}
# grade
print(summary(house.price$GRADE))
```
```{r}
# condition
print(summary(house.price$CNDTN))
house.price$CNDTN <- ifelse(house.price$CNDTN %in%
c("Average", "Fair", "Poor"), "Bad", "Good")
```
```{r}
# exwall
house.price_boxplot("EXTWALL", 90)
```
There are simply too many values in "EXTWALL", so grouping them is unavoidable. With some extra research on exterior materials, they can be grouped into "Premium" or "Normal"; generally, [brick](http://www.massrealty.com/articles/brick-homes-vs-wood-homes) and [stucco](https://en.wikipedia.org/wiki/Stucco) are considered relatively better materials.
```{r}
# exterior wall
print(summary(house.price$EXTWALL))
house.price$EXTWALL <- ifelse(house.price$EXTWALL %in%
c("Brick Veneer", "Brick/Stone", "Brick/Stucco", "Hardboard",
"Stone", "Stone/Stucco", "Stucco Block", "Wood Siding"),
"Premium", "Normal")
```
```{r}
# interior wall
grid.arrange(house.price_boxplot("INTWALL", 90),
house.price_boxplot("ROOF", 90),
ncol = 2)
```
"INTWALL" might be groupable, but it should be dropped because of its imbalance: 9,466 houses have the "Hardwood" interior type, nearly all cases. As for "ROOF", four types sit clearly above the median line, and these can be grouped together.
```{r}
# drop Interior
print(summary(house.price$INTWALL))
house.price <- select(house.price, -INTWALL)
```
```{r}
# Roof: cheap vs. expensive
print(summary(house.price$ROOF))
house.price$ROOF <- ifelse(house.price$ROOF %in%
c("Clay Tile", "Metal- Cpr", "Neopren", "Slate"),
"Expensive", "Cheap")
```
\
### Facet Plot
A facet plot usually needs two numerical variables and a categorical variable. While "CNDTN" has now been divided into two groups, it would not be a good idea to apply the same grouping to "GRADE", since the two are similar attributes. This is where a facet plot helps us see whether "GRADE" could instead be converted into a numerical variable.
```{r warning=FALSE, message=FALSE}
# facet points by grade
ggplot(house.price[!house.price$GRADE %in%
c("Exceptional-A", "Exceptional-B", "Fair Quality"), ],
aes(GBA, PRICE_TRANS, color = GRADE)) +
geom_point(alpha = 0.05, shape = 1) +
geom_smooth(method = lm) +
facet_wrap(~GRADE) +
scale_x_continuous(limits = c(0, 4000),
breaks = seq(0, 4000, 2000))
```
```{r}
# median price by grade
GRADE_median <- house.price %>%
group_by(GRADE) %>%
summarise(MEDIAN_PRICE = median(PRICE_TRANS)) %>%
arrange(MEDIAN_PRICE)
GRADE_median$MEDIAN_PRICE <- round(GRADE_median$MEDIAN_PRICE, digits = 2)
kable(GRADE_median) %>%
kable_styling(bootstrap_options = c("striped", "hover"), full_width = F)
```
```{r}
# new variable median price by grade
house.price$GRADE_MEDIAN_PRICE <- lapply(house.price$GRADE, function(x)
GRADE_median[match(x, GRADE_median$GRADE), "MEDIAN_PRICE"][[1]])
house.price$GRADE_MEDIAN_PRICE <- as.numeric(house.price$GRADE_MEDIAN_PRICE)
house.price <- select(house.price, -GRADE)
```
As the graph and the table show, prices differ distinctly by grade. Therefore, the per-grade median prices will replace the categorical variable "GRADE", rather than grouping its levels into fewer categories.
------
\
## Diagnostic Analysis
Recalling the goal of this project, evaluating linear models is crucial to building the best model. Diagnostic analysis is the step to do that in various ways such as *Q-Q plot*, *Residual vs. Fitted plot*, *Cook's distance*, and so on. These techniques are designed to test the assumptions like constant variance, normality of residuals, and random error. The first step for this is building a linear model that will work as baseline and be modified later.
$$\text{Cook's Distance: }D_i=\frac{(y_i-\hat{y}_i)^2}{p\times MSE}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right]$$
```{r}
house.lm <- lm(PRICE_TRANS ~ ., data = house.price)
summary(house.lm)
```
A few variables are not significant. "ROOMS" is likely due to collinearity, and the rest seemingly are simply not useful. Interestingly, "Ward 6" already appeared problematic on the map earlier.
\
### __Diagnostic Plots and Influential Points__
With the baseline model, diagnostic plots can be drawn. The Residuals vs. Fitted plot should have its trend line roughly horizontal around zero. An ideal Q-Q plot follows the diagonal line. Cook's distance serves as a boundary that points should not cross; it gives the indices of problematic observations so they can be excluded.
```{r}
plot(house.lm)
```
Generally speaking, the Residuals vs. Fitted plot looks okay, but the Q-Q plot does not stay on the normal line. From its shape, the distribution could be light-tailed. The Residuals vs. Leverage plot seems fine, as the points stay within the Cook's distance boundary.
```{r}
# influential observations from Cook's Distance-Leverage
X_y <- names(select_if(house.price, is.numeric))
print(house.price[row.names(house.price) %in%
c(21, 18569, 24841, 97202), c(X_y)])
# drop influential observations
house.price <- house.price[!row.names(house.price) %in%
c(21, 18569, 24841, 97202), ]
```
There are four observations the plots suggest examining closely. They are clearly priced strangely for their features. For example, index 21 has 14 rooms but a transformed price of only 182, when its group median is 1067. The rest also deviate from the normal pattern.
```{r warning=FALSE, message=FALSE}
resid.plot <- function(model) {
ggplot(model, aes(.resid)) +
geom_density(fill = "pink", color = "red", alpha = 0.5) +
stat_function(fun = dnorm, args = list(
mean = mean(resid(model)), sd = sd(resid(model))),
linetype = "dashed", geom = "area", alpha = 0.2) +
xlab("Price Square Root Residuals") + ylab("Density") +
ggtitle("Residual Plot") +
scale_x_continuous(limits = c(-500, 500)) +
coord_fixed(ratio = 1e5)
}
resid.plot(house.lm)
```
Given the warning from the Q-Q plot, the distribution should be checked again. The gray area in the back is the normal distribution, while the red area is the data distribution. It has a high peak with light tails on both sides, suggesting that most observations are centered around the mean. As long as the shape is a symmetric bell, it will not cause a serious problem; however, the long, thin tails still suggest some extreme values.
\
### __Remove Influential Points: *DFBETA*__
One method of detecting influential points is *DFBETAS*, which measures how much impact each observation has on a particular coefficient. Recalling the correlation matrix, the top three predictors will be investigated to remove influential points. The ultimate goal, again, is finding the general trend, not one biased by extreme values.
$$DFBETAS_{ij}=\frac{\hat{\beta}_j-\hat{\beta}_{j(i)}}{s_{(i)}\sqrt{(X^TX)_{jj}}}$$
```{r}
# dfbeta
dfbeta.GBA <- dfbetaPlots(house.lm, terms = "GBA")
dfbeta.BATHRM <- dfbetaPlots(house.lm, terms = "BATHRM")
dfbeta.FIREPLACES <- dfbetaPlots(house.lm, terms = "FIREPLACES")
```
"GBA", "BATHRM", and "FIREPLACES" are the top three correlated variables. Because they will affect house prices the most, influential points should be detected based on them. The following calculates DFBETA's and threshold for the data set, and finds influential points with the three features.
$$Threshold=\frac{2}{\sqrt{n}}\ where\ n=observations$$
```{r}
# threshold 2/sqrt(n) for DFBETAS
dfbeta.thrhd <- 2 / sqrt(nrow(house.price))
dfbetas <- dfbetas(house.lm)
cat("DFBETA limit:", dfbeta.thrhd)
# indices of influential points on the top three predictors
dfbetas.infl.obs <- unique(unlist(lapply(
  c("GBA", "BATHRM", "FIREPLACES"),
  function(v) names(dfbetas[abs(dfbetas[, v]) > dfbeta.thrhd, v]))))
# filter observations over the DFBETAS limit
house.price <- house.price[!row.names(house.price) %in% dfbetas.infl.obs, ]
```
```{r}
house.infl.lm <- lm(PRICE_TRANS ~., data = house.price)
summary(house.infl.lm)
```
After the influential points are removed, R-squared improves remarkably, from 0.81 to 0.89. However, there are still insignificant features, so the last step is dropping them. The model sees no difference whether they are included or not; therefore, R-squared should stay the same after losing them. Beyond those, "ROOF" is relatively weaker than the rest, and "Ward 6" is one level of "WARD"; this will be handled before building a predictive model.
```{r}
# drop insignificant regressors
house.price <- select(house.price, -c("ROOMS", "STORIES", "EXTWALL", "RMDL"))
write.csv(house.price, file = "DC_Properties_final.csv", row.names = F)
# regression with significant variables
house.final.lm <- lm(PRICE_TRANS ~ ., data = house.price)
summary(house.final.lm)
```
```{r warning=FALSE, message=FALSE}
grid.arrange(resid.plot(house.lm) + ggtitle("Original Residuals"),
resid.plot(house.infl.lm) + ggtitle("After Removing Influential Points"),
ncol = 2)
```
On the left is the model before eliminating the influential points; on the right, after. The assumption of normal residuals is better met with the influential points removed, the red area being almost identical to the normal distribution in gray.
------
\
## Best Model
Although there are many other measures for feature selection, two representative ones are used in this project. One is Adjusted R-squared, to measure overall fit; the other is AIC, to judge whether adding another feature helps the model.
### __Adjusted R-Square__
Unlike R-squared, Adjusted R-squared takes the number of parameters into consideration. That is, adding a new variable can make the model worse if the variable does not help enough to explain the errors.
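Concretely, the penalty comes from the standard definition, with $n$ observations and $p$ predictors: the $(1-R^2)$ term shrinks when fit improves, but the ratio grows with every added parameter.
$$\bar{R}^2=1-(1-R^2)\frac{n-1}{n-p-1}$$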
```{r}
# full linear model
rsq.lm <- lm(PRICE_TRANS ~ ., data = house.price)
# convert it to matrix
x <- model.matrix(rsq.lm)[, -1]
y <- house.price$PRICE_TRANS
# 5 best models by adjusted r-squared
adjr <- leaps(x, y, method = "adjr2")
maxadjr(adjr, 5)
```
The result lists the number of predictors and the adjusted R-squared for each of the five best models. While all the models perform the same (0.89), they use different sets of predictors. Since a simpler model is better for the same performance, the 4th should be chosen, as it has the fewest features.
```{r}
print(colnames(x[, c(11, 17)]))
x <- x[, -c(11, 17)]
```
The two features the 4th model excludes happen to be those that already had significance issues. Now that it is clear they do not need to be in the model, "ROOF" and "Ward 6" can be let go. Nevertheless, it is safe to look at feature selection from a different angle.
\
### __AIC__
Basically, AIC is designed to decide whether to include an additional variable. The lower the score, the better the model. The process runs through the features, calculates AIC, and continues until AIC stops decreasing. The following starts with a null model and adds features one by one to see how AIC changes.
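For a linear model with Gaussian errors, the AIC that R's `step()` minimizes reduces, up to an additive constant, to a function of the residual sum of squares $RSS$ and the number of estimated parameters $p$, so improving fit (lower $RSS$) trades off against complexity (larger $p$):
$$AIC=n\ln\left(\frac{RSS}{n}\right)+2p$$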
```{r}
# null model for variable addition
null.lm <- lm(PRICE_TRANS ~ 1, data = house.price)
full.lm <- lm(PRICE_TRANS ~ ., data = house.price)
# stepwise selection
step(null.lm, scope = list(lower = null.lm, upper = full.lm),
direction = "both")
```
As the result shows, the full model has the lowest AIC, meaning that adding all the features is best. However, we know "ROOF" and "Ward 6" would not make any difference, so they can be removed. The feature selection is thus consistent between the Adjusted R-squared and AIC results.
------
\
## Prediction
A predictive model can be used to predict a house price for one purpose, and to check whether a house is valued properly for another. The principal steps are splitting the data set, training on one part, and testing the model against the rest. Since a single training/testing split could be biased, the partitioning should loop through the data set (cross-validation).
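With $k$ folds (repeated several times here), the cross-validated error is simply the average of the per-fold errors, so every observation is held out exactly once per repeat:
$$CV_{(k)}=\frac{1}{k}\sum_{i=1}^{k}MSE_i$$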
```{r}
# data frame for prediction
price.df <- data.frame(x, y)
# hold 30% for validation
train.indice <- createDataPartition(price.df$y, p = 0.7, list = F)
training.set <- price.df[train.indice, ]
testing.set <- price.df[-train.indice, ]
# cross validation on the training set only, so the held-out 30% stays unseen
train.control <- trainControl(method = "repeatedcv", number = 5, repeats = 3)
lm.model <- train(y ~ ., data = training.set, trControl = train.control, method = "lm")
summary(lm.model)
```
The result is not very different from the initial modeling, suggesting that the prediction model performs well. All the variables are now significant, and Adjusted R-squared is only very slightly lower than R-squared. Putting it all together, there are no more features to eliminate. The last step is to see how well the model works.
\
### __Evaluation: *RMSE*__
Root Mean Square Error (RMSE) is widely used to evaluate a regression model. It is the square root of the average squared difference between the true and predicted values. Note that the RMSE of 68.14 is on the square-root-transformed price scale.
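For reference, with $n$ test observations, true values $y_i$, and predictions $\hat{y}_i$:
$$RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}$$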
```{r}
price_pred <- predict(lm.model, newdata = testing.set)
rmse <- RMSE(testing.set$y, price_pred)
cat("Root Mean Square Error (RMSE):", rmse)
```
```{r}
True.Price <- (testing.set$y)^2
Predicted.Price <- price_pred^2
pred.df <- data.frame(True.Price, Predicted.Price)
pred.df$Diff <- pred.df$True.Price - pred.df$Predicted.Price
kable(head(pred.df, n = 20)) %>%
kable_styling() %>%
scroll_box(width = "500px", height = "500px")
```
```{r}
cat("True Price vs. Predicted Price\n",
"Mean Error: $", mean(abs(pred.df$Diff)), '\n',
"Median Error: $", median(abs(pred.df$Diff)))
```
The median absolute error in dollars is \$63,687, which implies that the model typically predicts a given house's price within roughly that margin. For example, a \$1,000,000 house could be predicted as roughly \$1,064,000 or \$936,000. The following plot is the residual plot expressed in dollars. There seems to be no specific pattern among the residuals; the left side is darker simply because there are more observations in that range.
```{r}
ggplot(pred.df, aes(Predicted.Price, Diff)) +
geom_point(alpha = 0.25) +
geom_hline(yintercept = 0, color = 'red') +
geom_hline(yintercept = c(-250000, 250000), color = 'blue', linetype = "dashed") +
ggtitle("Residual Plot")
```
```{r}
mean_price <- pred.df[pred.df$True.Price > 650000 &
pred.df$True.Price < 660000, "Predicted.Price"]
median_price <- pred.df[pred.df$True.Price > 615000 &
pred.df$True.Price < 625000, "Predicted.Price"]
cat("Mean and Median Price Prediction\n",
"Mean (650K~660K): $", mean(abs(mean_price)), '\n',
"Median (615K~625K): $", mean(abs(median_price)))
```
Prediction for given values helps convey how the model works. The mean and median house prices are \$660,583 and \$619,900, respectively, and the predictions for these ranges are \$684,259 and \$593,353. There is about a \$20K gap between the true statistics and the predictions. This is much lower than the overall median error, and more meaningful because most observations are around the mean and median, as most new data will be.
```{r}
ggplot(pred.df) +
geom_histogram(aes(True.Price, color="True"),
binwidth = 20000, fill = "white", alpha = 0.25) +
geom_histogram(aes(Predicted.Price, color="Predicted"),
binwidth = 20000, fill = "white", alpha = 0.25) +
ggtitle("True Price vs. Predicted Price")
```
Finally, histograms of the true and predicted values can be drawn together to see how similar they are across price ranges. During the analysis, we found that the distributions had issues in the tails, and the same occurs here: most of the errors fall in the tails. However, this should not be a serious problem, because the model focuses on the majority of houses, not the unusually expensive or cheap ones.
------
\
## Conclusion
Beginning with about 50 available variables, we successfully selected 16 features to predict house prices, using *Adjusted R-squared* and *AIC* as the criteria for feature selection. Some of them are as follows.
* Numerical: Number of bathrooms, House size in sqft, Year built
* Categorical: Ward, Condition, Struct
House prices increase with the number of rooms and house size. Location (ward) and house condition affect prices critically, while building materials matter less. The following diagnostic plots are what we used to check the assumptions.
* _Residual vs. Fitted Plot_: Constant variance
* _Q-Q Plot_: Normality of residuals
After the diagnosis, *DFBETAS* was used to filter out influential points, after which the linear model better met the assumptions. With all these processes complete, we could finally build a predictive model. The data were split into a training set and a testing set, so the model was trained on one part and then evaluated against data it never saw during training.
The performance of the model was an R-squared of 0.89. More concretely, the median error was \$63,687, and less than \$20,000 for average houses priced between \$615,000 and \$660,000. Considering how expensive houses in Washington D.C. are, an error of \$20,000 or less is quite impressive, though there could be further improvement with more variables added.
------
\
## References
__Theory__
* [VIF](http://www.scriptwarp.com/warppls/pubs/Kock_Lynn_2012.pdf)
* [Outlier](https://www.r-bloggers.com/outlier-detection-and-treatment-with-r/)
* [DFBETAS](https://www.sfu.ca/sasdoc/sashtml/stat/chap55/sect38.htm)
* [Diagnosis & Variable Selection](https://www.statmethods.net/stats/regression.html)
* [Q-Q Plot Interpretation](https://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot)
* [Diagnostic Plots](https://data.library.virginia.edu/diagnostic-plots/)
* [Cook's Distance](https://onlinecourses.science.psu.edu/stat501/node/340/)
__R Syntax__
* [Mapping](http://eriqande.github.io/rep-res-web/lectures/making-maps-with-R.html#)
* [Ggmap overview](https://github.com/dkahle/ggmap), [Ggmap example](https://rdrr.io/cran/ggmap/man/get_stamenmap.html)
* [Corrplot](https://cran.r-project.org/web/packages/corrplot/vignettes/corrplot-intro.html)
* [Linear Regression Example](http://r-statistics.co/Linear-Regression.html)
* [Leaps](https://rdrr.io/cran/leaps/man/regsubsets.html)