Edmonton Property Assessment #3 – Geographical Mapping

In the last post, I looked at the most valued neighbourhoods in the city by average assessed value. We were coming across neighbourhoods like “Uplands”, “Decoteau”, and “Aster”… where are these neighbourhoods? I have no clue to be honest. I can’t even say I’ve heard of these neighbourhoods or anything resembling their names. “Decoteau”? I almost don’t even believe that’s in Edmonton…

gmaps Library

In my last project, librosa was a gem of a library that did wonders for audio signal processing. I’m hoping gmaps provides the same kind of breakthrough for me here because I found it using the same methodical approach that I used to find librosa… first result on google!

In all seriousness though, it looks like gmaps is able to embed a google maps interface right into Jupyter and allow you to plot on top of that. That sounds pretty enticing for now and definitely worth checking out for what I want to do here.

In [ ]:
# Enable plots in the notebook
%matplotlib inline
import matplotlib.pyplot as plt

# Seaborn makes our plots prettier
import seaborn
seaborn.set(style = 'ticks')

# Import jupyter widgets
from ipywidgets import widgets

import numpy as np
import pandas as pd
import os
import gmaps
import warnings
In [ ]:
# Initiate and configure gmaps with API key (the actual key stays out of the post)
gmaps.configure(api_key=os.environ['GMAPS_API_KEY'])
In [ ]:
fig = gmaps.figure()
fig

Wow! That was easy. We straight up have an interactive google maps display within jupyter! Let’s check out some of gmaps’ capabilities using some of its out of the box datasets.

In [ ]:
# Import data sets
import gmaps.datasets
In [ ]:
# Map all sites of political violence in Africa between 1997 and 2015
locations = gmaps.datasets.load_dataset_as_df("acled_africa")
fig = gmaps.figure()
heatmap_layer = gmaps.heatmap_layer(locations)
fig.add_layer(heatmap_layer)
fig

So there are the heatmap capabilities. It looks like you can also plot dots.

In [ ]:
# Plot all the Starbucks locations in the UK
df = gmaps.datasets.load_dataset_as_df("starbucks_kfc_uk")

starbucks_df = df[df["chain_name"] == "starbucks"]
starbucks_df = starbucks_df[['latitude', 'longitude']]

starbucks_layer = gmaps.symbol_layer(
    starbucks_df, fill_color="green", stroke_color="green", scale=2
)
fig = gmaps.figure()
fig.add_layer(starbucks_layer)
fig

Awesome. Those are some great tools to start with. Let’s look back at the property assessment data now.

Edmonton Property Assessment Data

In [ ]:
# Load data set
edm_data = pd.read_csv('../data/Property_Assessment_Data.csv')
In [ ]:
# Replace dollar signs and cast to int
edm_data['Assessed Value'] = edm_data['Assessed Value'].str.replace('$', '', regex=False).astype(int)
In [ ]:
# Filter for only residential buildings
edm_data_res = edm_data[edm_data['Assessment Class'] == 'Residential']
In [ ]:
edmonton_res_heatmap_fig = gmaps.figure()
edmonton_res_heatmap_layer = gmaps.heatmap_layer(edm_data_res[['Latitude', 'Longitude']])
edmonton_res_heatmap_fig.add_layer(edmonton_res_heatmap_layer)
edmonton_res_heatmap_fig

Alright, so with a heatmap, we’re purely looking at density. There are a lot of units downtown and near Whyte, which totally makes sense. Lots of condos and apartments. We are weighting each line of data (each unit) equally here, so a condo with 50 units on one block will look 10x as dense as 5 large houses on one block. There is also quite a bit of density now along Edmonton South.
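Since each unit is weighted equally here, one way to show value rather than raw unit count would be a weighted heatmap — gmaps’ `heatmap_layer` accepts a `weights` argument. A minimal sketch of the weighting step, using a made-up mini-sample in place of the real `edm_data_res`:

```python
import pandas as pd

# Hypothetical mini-sample standing in for edm_data_res (values are made up)
sample = pd.DataFrame({
    'Latitude': [53.54, 53.55, 53.52],
    'Longitude': [-113.49, -113.50, -113.52],
    'Assessed Value': [250000, 400000, 1200000],
})

# Normalize assessed values into (0, 1] to use as heatmap weights
weights = sample['Assessed Value'] / sample['Assessed Value'].max()

# With a configured API key, the weighted layer would then be built like:
# fig = gmaps.figure()
# fig.add_layer(gmaps.heatmap_layer(sample[['Latitude', 'Longitude']], weights=weights))
# fig
```

With weights like these, one million-dollar house contributes as much heat as several cheaper condos, instead of being drowned out by sheer unit count.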

Another thing that I’m seeing right off the bat is that there seems to be data missing in Edmonton east and Edmonton NW. I have two theories:

  1. This area is under another jurisdiction (unlikely)
  2. There are no “residential” units in these areas per se; rather, they are more industrial (remember I took out commercial from the data set)

That strip along Calgary Trail is blank as well, and I know for a fact that it’s basically all commercial properties there, so I’m inclined to think that largely it’s due to #2, but maybe the city just doesn’t have this data for another reason.
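To poke at theory #2, one could slice the raw data (before the residential filter) by a rough bounding box around one of the blank areas and count the assessment classes that show up. A sketch with toy rows — the bounding box and values here are illustrative, not the real Calgary Trail strip:

```python
import pandas as pd

# Hypothetical stand-in for the unfiltered edm_data
edm_sample = pd.DataFrame({
    'Latitude': [53.46, 53.47, 53.55],
    'Longitude': [-113.49, -113.49, -113.62],
    'Assessment Class': ['Non Residential', 'Non Residential', 'Residential'],
})

# Keep only rows falling inside an illustrative bounding box
strip = edm_sample[
    edm_sample['Latitude'].between(53.40, 53.50)
    & edm_sample['Longitude'].between(-113.50, -113.48)
]
class_counts = strip['Assessment Class'].value_counts()
# If theory #2 holds, non-residential classes should dominate inside the box
```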

Let’s take a look at the top 50 communities mapped out.

In [ ]:
# Generate statistics per neighbourhood
edm_data_neighbour_grouped = edm_data_res.groupby(['Neighbourhood', 'Assessment Class']).agg({
    'Assessed Value': [np.mean, np.size],
    'Latitude': [np.mean],
    'Longitude': [np.mean]
})
In [ ]:
# Show most valued neighbourhoods with greater than 20 units
most_valuable_50_neighbourhoods = edm_data_neighbour_grouped[edm_data_neighbour_grouped[('Assessed Value', 'size')] > 20].sort_values([('Assessed Value', 'mean')], ascending = False).head(50)
most_valuable_50_neighbourhoods.columns = most_valuable_50_neighbourhoods.columns.droplevel(-1)
In [ ]:
# Check results
most_valuable_50_neighbourhoods.head()
In [ ]:
# Plot most highly valued 50 communities with at least 20 units
edm_top_50_layer = gmaps.symbol_layer(
    most_valuable_50_neighbourhoods[['Latitude', 'Longitude']], 
    fill_color = "green", 
    stroke_color = "green", 
    scale = 2,
    info_box_content = most_valuable_50_neighbourhoods.index.get_level_values('Neighbourhood').tolist()
)
edm_top_50_fig = gmaps.figure()
edm_top_50_fig.add_layer(edm_top_50_layer)
edm_top_50_fig

I’m liking the ability to actually use fully fledged Google Maps inside Jupyter. I can actually Street View over to places and check out the houses themselves.

From this map, we see a few themes for highly valued properties:

  • Outskirts of town
  • Along the river
  • Southwest Edmonton

Some of these places I know. I actually just went for a walk around Crestwood with my parents the other day, and I can attest to those houses being super nice. Many houses are up in the millions to drag that average up. Other places (especially the outskirts) I’ve never been to, and judging by Street View, they aren’t even that nice! I’m thinking maybe they are much larger plots of land and are valued more in that way.

I’m looking for more urban areas, so let’s filter even one step more and only look at communities with over 200 units.

In [ ]:
# Show most valued neighbourhoods with greater than 200 units
most_valuable_50_neighbourhoods_min_200_units = edm_data_neighbour_grouped[edm_data_neighbour_grouped[('Assessed Value', 'size')] > 200].sort_values([('Assessed Value', 'mean')], ascending = False).head(50)
most_valuable_50_neighbourhoods_min_200_units.columns = most_valuable_50_neighbourhoods_min_200_units.columns.droplevel(-1)
In [ ]:
# Plot most highly valued 50 communities with at least 200 units
edm_top_50_min_200_units_layer = gmaps.symbol_layer(
    most_valuable_50_neighbourhoods_min_200_units[['Latitude', 'Longitude']], 
    fill_color = "green", 
    stroke_color = "green", 
    scale = 2,
    info_box_content = most_valuable_50_neighbourhoods_min_200_units.index.get_level_values('Neighbourhood').tolist()
)
edm_top_50_min_200_units_fig = gmaps.figure()
edm_top_50_min_200_units_fig.add_layer(edm_top_50_min_200_units_layer)
edm_top_50_min_200_units_fig

Got rid of a lot of the outskirts ones and, alas, even more pop up in Southwest Edmonton lol. I get it, LIVE IN THE SOUTHWEST. I guess it makes sense. Lots of new developments, the river flows right through, more and more grocery stores and commercial services are popping up… what’s not to love?


These maps are great for data exploration, but what if I want to build some type of regression model around this? My plots thus far have only given me a sense of location… where are some of the most expensive units? Sure, I’m mapping the most expensive communities, but even amongst these communities, I can’t quite tell which ones are the most valued. I know “The Uplands” is the most expensive, but outside of that, I can’t quite distinguish the others.

I’d like to

  1. Summarize lat, long, and assessment value information in some type of model for regression and automation
  2. Be able to visually get a sense of value in different regions of the city by average price
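As a teaser for point (1), the simplest possible model would be a nearest-neighbour average: estimate a location’s value from the k closest units. A naive numpy sketch — the function name and toy data are mine, and plain Euclidean distance on raw lat/long is only a rough approximation at city scale:

```python
import numpy as np

def knn_value_estimate(lat, lon, coords, values, k=3):
    """Estimate assessed value at (lat, lon) as the mean of the
    k nearest units, using naive Euclidean distance on lat/long."""
    dists = np.sqrt((coords[:, 0] - lat) ** 2 + (coords[:, 1] - lon) ** 2)
    nearest = np.argsort(dists)[:k]
    return values[nearest].mean()

# Toy rows standing in for the real assessment data
coords = np.array([[53.50, -113.50], [53.51, -113.51], [53.60, -113.40]])
values = np.array([300000.0, 340000.0, 900000.0])

# Query a point sitting between the two cheaper units
est = knn_value_estimate(53.505, -113.505, coords, values, k=2)
```

A real version would want distance in metres (or a proper projection) and some care around sparse areas, but the shape of the idea is just this.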

Let’s get it on the next post.
