I agree that this would be fun to do. All it requires is mixing the Toronto neighbourhood geographies with rental listings data, which I happen to have handy. So time to get working.
To do this we need to grab the Toronto neighbourhood boundaries, which can be found on Toronto’s open data website.
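Loading the boundaries is a one-liner with the sf package. A minimal sketch, assuming the portal export has been saved locally as `toronto_neighbourhoods.geojson` (the file name and format are assumptions; substitute whatever export you download):

```r
library(sf)

# Read the neighbourhood polygons and make sure we are in lat/lon,
# so they line up with the listings coordinates later on.
nbhds <- st_transform(read_sf("toronto_neighbourhoods.geojson"), 4326)
```

The `AREA_NAME` column used further down comes straight from this file.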
Rental Listings Data
With that in hand, we turn to rental data. The tweet asks what property we can “get”, so we should use turnover rents: roughly what people would have to pay if they wanted to rent today. To answer that we turn to scrapes of a popular rental listings platform, broadening the time frame to the past three months to ensure a decent sample. Because the per-square-foot price skews lower for listings with more bedrooms, we restrict ourselves to studio, 1-bedroom and 2-bedroom listings.
library(dplyr)   # data wrangling
library(sf)      # spatial unions and joins
library(knitr)   # kable for quick tables
library(rental)  # non-public listings data

listings <- get_listings("2017-08-06", "2017-11-06",
                         st_union(nbhds$geometry),
                         beds = c("0", "1", "2"),
                         filter = "unfurnished")

listings %>%
  as.data.frame %>%
  group_by(beds) %>%
  summarize(count = n()) %>%
  kable
Next we sort the listings into their neighbourhoods and compute some quantities of interest, including rent per square foot and the average size of the unit we can expect to rent for CA$1,500 per month.
nbhd_rpsf <- st_join(listings, nbhds) %>%
  as.data.frame %>%
  group_by(AREA_NAME) %>%
  summarize(count = n(),
            rpsf = mean(price / size, na.rm = TRUE)) %>%
  mutate(size_for_1500 = round(1500 / rpsf))
Rental Tree Map
Perfect, all that’s left to do is to make a treemap of the neighbourhoods, restricting ourselves to the ones with at least 100 listings.
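One way to build the treemap, sketched here with the treemapify package (an assumption; any treemap library works). Tile area reflects the listing count and the fill shows the square footage CA$1,500 buys:

```r
library(dplyr)
library(ggplot2)
library(treemapify)

nbhd_rpsf %>%
  filter(count >= 100) %>%  # only neighbourhoods with a decent sample
  ggplot(aes(area = count, fill = size_for_1500,
             label = paste0(AREA_NAME, "\n", size_for_1500, " sf"))) +
  geom_treemap() +
  geom_treemap_text(colour = "white", reflow = TRUE) +
  scale_fill_viridis_c(name = "sf for $1,500") +
  labs(title = "What $1,500/month rents by Toronto neighbourhood")
```

Mapping the fill to `rpsf` instead of `size_for_1500` gives the same story from the price side.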
To round things out, we quickly map the data to see the geographic distribution, showing all neighbourhoods with at least 10 data points.
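The map is a straightforward choropleth: join the summary back onto the neighbourhood geometries and shade by rent per square foot, blanking out neighbourhoods with too few listings. A sketch using ggplot2’s `geom_sf`:

```r
library(dplyr)
library(ggplot2)

nbhds %>%
  left_join(nbhd_rpsf, by = "AREA_NAME") %>%
  mutate(rpsf = ifelse(!is.na(count) & count >= 10, rpsf, NA)) %>%
  ggplot() +
  geom_sf(aes(fill = rpsf), colour = "grey80", size = 0.2) +
  scale_fill_viridis_c(name = "Rent per sf", na.value = "grey95") +
  theme_void()
```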
As always, the R Notebook that generated this post is available on GitHub. Unfortunately it requires access to non-public listings data, so reproducibility is limited to people with access to rental listings data of some sort, who will have to substitute their own.