These days I run a fair bit of spatial analysis, and three problems regularly come up:

- getting data on compatible geographies,
- the ecological fallacy, and
- spatial autocorrelation.

None of these problems is insurmountable, but all are annoying to various degrees. I might ignore them on a first analysis run, but they need to be dealt with sooner or later, which can eat up significant amounts of time.
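To make the third problem concrete, a common first check is Moran's I. This is a minimal sketch, not the post's own code; it uses the North Carolina demo shapefile that ships with the sf package as a stand-in for whatever geography you are analysing, and spdep for the spatial weights and the test.

```r
library(sf)     # reading spatial data
library(spdep)  # spatial weights and Moran's I

# demo polygons bundled with sf (stand-in for your own geography)
nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)

nb <- poly2nb(nc)                # neighbour list from shared polygon borders
lw <- nb2listw(nb, style = "W")  # row-standardized spatial weights
mt <- moran.test(nc$BIR74, lw)   # Moran's I for the 1974 birth counts
mt$estimate["Moran I statistic"]
```

A significantly positive statistic means nearby regions have similar values, so standard errors from a non-spatial model will be too optimistic.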
Canada’s metropolitan areas are growing, which means we need to add housing. But adding housing often faces stiff opposition. There are many reasons people don’t like added housing; this post looks at one in particular: the claim that adding housing displaces the low-income population. Adding new housing to a neighbourhood has two opposing effects. The gentrification effect starts from the observation that new housing is more expensive than old housing (all else being equal).
Vancouver had elections on Saturday; today Toronto had theirs. Unlike Vancouver, Toronto has wards, which makes things more fun: we can look at census data for each ward to understand how people in it voted. We ran a very similar type of analysis the other day for Vancouver, so this is an easy add. The Toronto Open Data catalogue has the ward boundaries and a custom tab of census data.
The other day I saw a link to NASA active fire data fly by on Twitter. It’s a satellite-derived worldwide dataset at 375m resolution, where one (or several) polar-orbiting satellites scan Earth in the infrared band, from which fire locations and fire intensity are computed. With the Redding, CA fire in the news, I decided to take the data for a test drive, and also to try out the gganimate package to watch the fire evolve over time.
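The gganimate part can be sketched roughly as below. The `fires` data frame here is a tiny hypothetical stand-in; the real FIRMS downloads do have `latitude`, `longitude`, `frp` (fire radiative power) and `acq_date` columns, and you would build `fires` with `read.csv()` on such a file instead.

```r
library(ggplot2)
library(gganimate)

# Hypothetical stand-in for a FIRMS active-fire CSV; the real file has
# the same latitude/longitude/frp/acq_date columns.
fires <- data.frame(
  longitude = c(-122.4, -122.3, -122.5),
  latitude  = c(40.6, 40.7, 40.6),
  frp       = c(12.5, 80.1, 33.0),
  acq_date  = as.Date(c("2018-07-26", "2018-07-27", "2018-07-28"))
)

p <- ggplot(fires, aes(longitude, latitude, colour = frp)) +
  geom_point(alpha = 0.5) +
  transition_time(acq_date) +               # one animation frame per date
  labs(title = "Active fires, {frame_time}")

# animate(p) renders the frames; for a real run, swap the stand-in data
# for read.csv() on a FIRMS download
```

`transition_time()` interpolates frames along the date column, and the `{frame_time}` frame variable in the title shows which day is on screen.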
The other day I caught a bus home late at night, which made me acutely aware that I should not take Vancouver’s frequent daytime transit for granted. On the ride home I decided to dig into this and grab some transit data. We have played with transit data before, but since this was the second time, it was high time for a quick R package to standardize our efforts and simplify things the next time around.