Introduction

This data set was first presented in Symons, Grimson, and Yuan (1983), analysed with reference to the spatial nature of the data in Cressie and Read (1985), expanded in Cressie and Chan (1989), and used in detail in Cressie (1991). It covers the 100 counties of North Carolina, and includes counts of live births (also non-white live births) and of sudden infant deaths, for the July 1, 1974 to June 30, 1978 and July 1, 1979 to June 30, 1984 periods. In Cressie and Read (1985), a listing of county neighbours based on shared boundaries (contiguity) is given, and in Cressie and Chan (1989) and Cressie (1991, 386–89), a different listing is given, based on the criterion of distance between county seats with a cutoff at 30 miles. The county seat location coordinates are given in miles in a local (unknown) coordinate reference system. The data are also used to exemplify a range of functions in the spatial statistics module user’s manual (Kaluzny et al. 1996).

Getting the data into R

We will be using the spdep package (here version 1.3-6, 2024-08-31), together with the sf and tmap packages. The data from the sources referred to above are documented in the help page for the nc.sids data set in spData. The actual data, included in a shapefile of the county boundaries for North Carolina, were originally made available in the now-archived maptools package. These data are known to be geographical coordinates (longitude-latitude in decimal degrees) and are assumed to use the NAD27 datum. The ESRI Shapefile is deprecated, and was replaced here by a GeoPackage, written from reading the original files in spData 2.3.1:

library(spdep)
nc <- st_read(system.file("shapes/sids.gpkg", package="spData")[1], quiet=TRUE)
#st_crs(nc) <- "EPSG:4267"
row.names(nc) <- as.character(nc$FIPSNO)

The shapefile format presupposes that you have three files with extensions .shp, .shx, and .dbf, where the first contains the geometry data, the second the spatial index, and the third the attribute data. They are required to have the same name apart from the extension, and were read here using sf::st_read() into the sf object nc; the class is defined in sf. The centroids of the largest polygon in each county are available using the st_centroid method from sf as an sfc POINT object, and can be used to place labels after the extraction of the coordinate matrix:

sf_use_s2(TRUE)
plot(st_geometry(nc), axes=TRUE)
text(st_coordinates(st_centroid(st_geometry(nc), of_largest_polygon=TRUE)), label=nc$FIPSNO, cex=0.5)

We can examine the names of the columns of the data frame to see what it contains — in fact some of the same columns that we will be examining below, and some others which will be useful in cleaning the data set.

names(nc)
##  [1] "CNTY_ID"   "AREA"      "PERIMETER" "CNTY_"     "NAME"      "FIPS"      "FIPSNO"   
##  [8] "CRESS_ID"  "BIR74"     "SID74"     "NWBIR74"   "BIR79"     "SID79"     "NWBIR79"  
## [15] "east"      "north"     "x"         "y"         "lon"       "lat"       "L_id"     
## [22] "M_id"      "geom"
summary(nc)
##     CNTY_ID          AREA          PERIMETER         CNTY_          NAME          
##  Min.   :1825   Min.   :0.0420   Min.   :0.999   Min.   :1825   Length:100        
##  1st Qu.:1902   1st Qu.:0.0910   1st Qu.:1.324   1st Qu.:1902   Class :character  
##  Median :1982   Median :0.1205   Median :1.609   Median :1982   Mode  :character  
##  Mean   :1986   Mean   :0.1263   Mean   :1.673   Mean   :1986                     
##  3rd Qu.:2067   3rd Qu.:0.1542   3rd Qu.:1.859   3rd Qu.:2067                     
##  Max.   :2241   Max.   :0.2410   Max.   :3.640   Max.   :2241                     
##      FIPS               FIPSNO         CRESS_ID          BIR74           SID74      
##  Length:100         Min.   :37001   Min.   :  1.00   Min.   :  248   Min.   : 0.00  
##  Class :character   1st Qu.:37050   1st Qu.: 25.75   1st Qu.: 1077   1st Qu.: 2.00  
##  Mode  :character   Median :37100   Median : 50.50   Median : 2180   Median : 4.00  
##                     Mean   :37100   Mean   : 50.50   Mean   : 3300   Mean   : 6.67  
##                     3rd Qu.:37150   3rd Qu.: 75.25   3rd Qu.: 3936   3rd Qu.: 8.25  
##                     Max.   :37199   Max.   :100.00   Max.   :21588   Max.   :44.00  
##     NWBIR74           BIR79           SID79          NWBIR79             east      
##  Min.   :   1.0   Min.   :  319   Min.   : 0.00   Min.   :    3.0   Min.   : 19.0  
##  1st Qu.: 190.0   1st Qu.: 1336   1st Qu.: 2.00   1st Qu.:  250.5   1st Qu.:178.8  
##  Median : 697.5   Median : 2636   Median : 5.00   Median :  874.5   Median :285.0  
##  Mean   :1051.0   Mean   : 4224   Mean   : 8.36   Mean   : 1352.8   Mean   :271.3  
##  3rd Qu.:1168.5   3rd Qu.: 4889   3rd Qu.:10.25   3rd Qu.: 1406.8   3rd Qu.:361.2  
##  Max.   :8027.0   Max.   :30757   Max.   :57.00   Max.   :11631.0   Max.   :482.0  
##      north             x                 y             lon              lat       
##  Min.   :  6.0   Min.   :-328.04   Min.   :3757   Min.   :-84.08   Min.   :33.92  
##  1st Qu.: 97.0   1st Qu.: -60.55   1st Qu.:3920   1st Qu.:-81.20   1st Qu.:35.26  
##  Median :125.5   Median : 114.38   Median :3963   Median :-79.26   Median :35.68  
##  Mean   :122.1   Mean   :  91.46   Mean   :3953   Mean   :-79.51   Mean   :35.62  
##  3rd Qu.:151.5   3rd Qu.: 240.03   3rd Qu.:4000   3rd Qu.:-77.87   3rd Qu.:36.05  
##  Max.   :182.0   Max.   : 439.65   Max.   :4060   Max.   :-75.67   Max.   :36.52  
##       L_id           M_id                 geom    
##  Min.   :1.00   Min.   :1.00   MULTIPOLYGON :100  
##  1st Qu.:1.00   1st Qu.:2.00   epsg:4267    :  0  
##  Median :2.00   Median :3.00   +proj=long...:  0  
##  Mean   :2.12   Mean   :2.67                      
##  3rd Qu.:3.00   3rd Qu.:3.25                      
##  Max.   :4.00   Max.   :4.00

Let’s check the different versions of the data against each other - sf and spData have NC SIDS files, as does GeoDa Center in two forms:

library(sf)
nc_sf <- st_read(system.file("shape/nc.shp", package="sf"),
                 quiet=TRUE)
st_crs(nc_sf)
## Coordinate Reference System:
##   User input: NAD27 
##   wkt:
## GEOGCRS["NAD27",
##     DATUM["North American Datum 1927",
##         ELLIPSOID["Clarke 1866",6378206.4,294.978698213898,
##             LENGTHUNIT["metre",1]]],
##     PRIMEM["Greenwich",0,
##         ANGLEUNIT["degree",0.0174532925199433]],
##     CS[ellipsoidal,2],
##         AXIS["latitude",north,
##             ORDER[1],
##             ANGLEUNIT["degree",0.0174532925199433]],
##         AXIS["longitude",east,
##             ORDER[2],
##             ANGLEUNIT["degree",0.0174532925199433]],
##     ID["EPSG",4267]]
nc <- st_read(system.file("shapes/sids.shp",
                 package="spData"), quiet=TRUE)
st_crs(nc)
## Coordinate Reference System: NA

As the actual CRS is unknown, spData reports it as missing, although it may very well be +proj=longlat +datum=NAD27:

st_crs(nc) <- "+proj=longlat +datum=NAD27"

Next, are the geometries the same? sf::st_equals returns a logical matrix, so we’ll check that the diagonal values are all TRUE, and that only those values are TRUE by summing and recalling that n is 100:

suppressWarnings(st_crs(nc_sf) <- st_crs(nc))
xx <- st_equals(nc, nc_sf, sparse=FALSE)
all(diag(xx)) && sum(xx) == 100L
## [1] TRUE

Next, let’s download the GeoDa files and repeat the comparisons:

td <- tempdir()
#download.file("https://geodacenter.github.io/data-and-lab//data/sids.zip", file.path(td, "sids.zip"), quiet=TRUE) 
# local copy (2020-10-22) as repository sometimes offline
file.copy(system.file("etc/misc/sids.zip", package="spdep"), td)
## [1] TRUE
unzip(file.path(td, "sids.zip"), c("sids/sids.dbf", "sids/sids.prj", "sids/sids.shp", "sids/sids.shx"), exdir=td)
sids_sf <- st_read(file.path(td, "sids/sids.shp"), quiet=TRUE)
#download.file("https://geodacenter.github.io/data-and-lab//data/sids2.zip", file.path(td, "sids2.zip"), quiet=TRUE)
file.copy(system.file("etc/misc/sids2.zip", package="spdep"), td)
## [1] TRUE
unzip(file.path(td, "sids2.zip"), c("sids2/sids2.dbf", "sids2/sids2.prj", "sids2/sids2.shp", "sids2/sids2.shx"), exdir=td)
sids2_sf <- st_read(file.path(td, "sids2/sids2.shp"), quiet=TRUE)
st_crs(sids_sf)
## Coordinate Reference System:
##   User input: WGS 84 
##   wkt:
## GEOGCRS["WGS 84",
##     DATUM["World Geodetic System 1984",
##         ELLIPSOID["WGS 84",6378137,298.257223563,
##             LENGTHUNIT["metre",1]]],
##     PRIMEM["Greenwich",0,
##         ANGLEUNIT["degree",0.0174532925199433]],
##     CS[ellipsoidal,2],
##         AXIS["latitude",north,
##             ORDER[1],
##             ANGLEUNIT["degree",0.0174532925199433]],
##         AXIS["longitude",east,
##             ORDER[2],
##             ANGLEUNIT["degree",0.0174532925199433]],
##     ID["EPSG",4326]]
st_crs(sids2_sf)
## Coordinate Reference System:
##   User input: WGS 84 
##   wkt:
## GEOGCRS["WGS 84",
##     DATUM["World Geodetic System 1984",
##         ELLIPSOID["WGS 84",6378137,298.257223563,
##             LENGTHUNIT["metre",1]]],
##     PRIMEM["Greenwich",0,
##         ANGLEUNIT["degree",0.0174532925199433]],
##     CS[ellipsoidal,2],
##         AXIS["latitude",north,
##             ORDER[1],
##             ANGLEUNIT["degree",0.0174532925199433]],
##         AXIS["longitude",east,
##             ORDER[2],
##             ANGLEUNIT["degree",0.0174532925199433]],
##     ID["EPSG",4326]]

It looks as though the external files assume the WGS 84 datum; whether they also contain the same geometries can be checked as before:

suppressWarnings(st_crs(sids_sf) <- st_crs(nc_sf))
xx <- st_equals(sids_sf, nc_sf, sparse=FALSE)
all(diag(xx)) && sum(xx) == 100L
## [1] FALSE
suppressWarnings(st_crs(sids2_sf) <- st_crs(nc_sf))
xx <- st_equals(sids2_sf, nc_sf, sparse=FALSE)
all(diag(xx)) && sum(xx) == 100L
## [1] FALSE

Now for the contents of the files: sids2 also contains rates, while the file in spData contains the coordinates as given in Cressie (1991), the parcels of contiguous counties on p. 554, and the aggregations used for median polishing.

all.equal(as.data.frame(nc_sf)[,1:14], as.data.frame(sids_sf)[,1:14])
##  [1] "Names: 12 string mismatches"                                  
##  [2] "Component 4: Modes: numeric, character"                       
##  [3] "Component 4: target is numeric, current is character"         
##  [4] "Component 5: 100 string mismatches"                           
##  [5] "Component 6: Modes: character, numeric"                       
##  [6] "Component 6: target is character, current is numeric"         
##  [7] "Component 7: Mean relative difference: 0.9986388"             
##  [8] "Component 8: Mean relative difference: 64.33901"              
##  [9] "Component 9: Mean relative difference: 0.9979786"             
## [10] "Component 10: Mean relative difference: 156.5427"             
## [11] "Component 11: Mean relative difference: 3.01968"              
## [12] "Component 12: Mean relative difference: 0.9980208"            
## [13] "Component 13: Mean relative difference: 160.8194"             
## [14] "Component 14: Modes: numeric, list"                           
## [15] "Component 14: Attributes: < target is NULL, current is list >"
## [16] "Component 14: target is numeric, current is sfc_MULTIPOLYGON"
all.equal(as.data.frame(nc_sf)[,1:14], as.data.frame(sids2_sf)[,1:14])
##  [1] "Names: 12 string mismatches"                         
##  [2] "Component 4: Modes: numeric, character"              
##  [3] "Component 4: target is numeric, current is character"
##  [4] "Component 5: 100 string mismatches"                  
##  [5] "Component 6: Modes: character, numeric"              
##  [6] "Component 6: target is character, current is numeric"
##  [7] "Component 7: Mean relative difference: 0.9986388"    
##  [8] "Component 8: Mean relative difference: 64.33901"     
##  [9] "Component 9: Mean relative difference: 0.9979786"    
## [10] "Component 10: Mean relative difference: 156.5427"    
## [11] "Component 11: Mean relative difference: 3.01968"     
## [12] "Component 12: Mean relative difference: 0.9980208"   
## [13] "Component 13: Mean relative difference: 160.8194"    
## [14] "Component 14: Mean relative difference: 0.9984879"

The spData data set has some columns reordered and a surprise:

all.equal(as.data.frame(nc_sf)[,1:14], as.data.frame(nc)[,c(2,3,4,1,5:14)])
## [1] "Component \"NWBIR74\": Mean relative difference: 0.04891304"

so a difference in NWBIR74:

which(!(nc_sf$NWBIR74 == nc$NWBIR74))
## [1] 21
c(nc$NWBIR74[21], nc_sf$NWBIR74[21])
## [1] 386 368

where spData follows Cressie (1991), and sf and GeoDa follow Cressie and Chan (1989), for NWBIR74 in Chowan county.

We will now examine the data set reproduced from Cressie and collaborators, included in spData (formerly in spdep), and add the neighbour relationships used in Cressie and Chan (1989) to the background map as a graph, shown in the figure below:

gal_file <- system.file("weights/ncCR85.gal", package="spData")[1]
ncCR85 <- read.gal(gal_file, region.id=nc$FIPSNO)
ncCR85
## Neighbour list object:
## Number of regions: 100 
## Number of nonzero links: 492 
## Percentage nonzero weights: 4.92 
## Average number of links: 4.92
gal_file <- system.file("weights/ncCC89.gal", package="spData")[1]
ncCC89 <- read.gal(gal_file, region.id=nc$FIPSNO)
ncCC89
## Neighbour list object:
## Number of regions: 100 
## Number of nonzero links: 394 
## Percentage nonzero weights: 3.94 
## Average number of links: 3.94 
## 2 regions with no links:
## 37055, 37095
## 3 disjoint connected subgraphs
plot(st_geometry(nc), border="grey")
plot(ncCC89, st_centroid(st_geometry(nc), of_largest_polygon), add=TRUE, col="blue")

Printing the neighbour object shows that it is a neighbour list object with a very sparse structure: if displayed as a matrix, only 3.94% of cells would be filled. Objects of class nb contain a list as long as the number of counties; each component of the list is a vector with the index numbers of the neighbours of the county in question, so that the neighbours of the county with region.id 37001 can be retrieved by matching against the indices. More information can be obtained by using summary() on an nb object. Finally, we associate a vector of names with the neighbour list through the region.id argument. The names should be unique, as with data frame row names.

r.id <- attr(ncCC89, "region.id")
ncCC89[[match("37001", r.id)]]
## [1] 11 26 29 30 48
r.id[ncCC89[[match("37001", r.id)]]]
## [1] 37033 37081 37135 37063 37037
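
The summary() method mentioned above gives a fuller description, including the distribution of link numbers and the least and most connected regions (output not shown here):

```r
# Fuller description of the neighbour list object: link-number
# distribution, least and most connected counties
summary(ncCC89)
```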

The neighbour list object records neighbours by their order in relation to the list itself, so the neighbours of the county with region.id “37001” are the eleventh, twenty-sixth, twenty-ninth, thirtieth, and forty-eighth entries in the list. We can retrieve their codes by looking them up in the region.id attribute.

as.character(nc$NAME)[card(ncCC89) == 0]
## [1] "Dare" "Hyde"

We should also note that this neighbour criterion generates two counties with no neighbours, Dare and Hyde, whose county seats were more than 30 miles from their nearest neighbours. The card() function returns the cardinality of the neighbour set. We need to return to methods for handling no-neighbour objects later on. We will also show how new neighbours lists may be constructed, and compare these with those from the literature.
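
When building spatial weights from such a list, the no-neighbour counties must be permitted explicitly. A minimal sketch (using spdep's nb2listw(), where style "B" gives binary weights):

```r
# card() gives the neighbour-set cardinality for each county;
# counties with card() == 0 are only accepted if zero.policy=TRUE
table(card(ncCC89))
lw <- nb2listw(ncCC89, style="B", zero.policy=TRUE)
```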

Probability mapping

Rather than review functions for measuring and modelling spatial dependence in the spdep package, we will focus on probability mapping for disease rates data. Typically, we have counts of the incidence of some disease by spatial unit, associated with counts of populations at risk. The task is then to try to establish whether any spatial units seem to be characterised by higher or lower counts of cases than might have been expected in general terms (Bailey and Gatrell 1995).

An early approach by Choynowski (1959), described by Cressie and Read (1985) and Bailey and Gatrell (1995), assumes, given that the true rate for the spatial units is small, that as the population at risk increases to infinity, the spatial unit case counts are Poisson with mean value equal to the population at risk times the rate for the study area as a whole. Choynowski’s approach folds the two tails of the measured probabilities together, so that small values, for a chosen \(\alpha\), occur for spatial units with either unusually high or low rates. For this reason, the high and low counties are plotted separately below. Note that cut returns a factor labeled with cut intervals.
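
The folded-tail calculation can be sketched in base R for a single illustrative county (the observed count, births at risk, and overall rate below are made-up example values, and choynowski() itself may implement details differently):

```r
# Illustrative sketch of Choynowski's folded Poisson probability
O <- 9        # hypothetical observed count of cases
n <- 2500     # hypothetical population at risk (births)
r <- 0.002    # hypothetical overall study-area rate
lambda <- n * r                     # expected count under constant rate
p <- if (O > lambda) {
  1 - ppois(O - 1, lambda)          # upper tail: P(X >= O)
} else {
  ppois(O, lambda)                  # lower tail: P(X <= O)
}
p   # a small p flags an unusually high or low rate
```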

ch <- choynowski(nc$SID74, nc$BIR74)
nc$ch_pmap_low <- ifelse(ch$type, ch$pmap, NA)
nc$ch_pmap_high <- ifelse(!ch$type, ch$pmap, NA)
prbs <- c(0, 0.001, 0.01, 0.05, 0.1, 1)
nc$high <- cut(nc$ch_pmap_high, prbs)
nc$low <- cut(nc$ch_pmap_low, prbs)
is_tmap <- FALSE
if (require(tmap, quietly=TRUE)) is_tmap <- TRUE
is_tmap
## [1] TRUE
library(tmap)
tmap4 <- packageVersion("tmap") >= "3.99"
if (tmap4) {
  tm_shape(nc) + tm_polygons(fill=c("low", "high"), fill.scale = tm_scale(values="brewer.set1"), fill.legend = tm_legend("p-values", frame=FALSE, item.r = 0), fill.free=FALSE, lwd=0.01) + tm_layout(panel.labels=c("low", "high"))
} else {
tm_shape(nc) + tm_fill(c("low", "high"), palette="Set1", title="p-values") +
  tm_facets(free.scales=FALSE) + tm_layout(panel.labels=c("low", "high"))
}

For more complicated thematic maps, it may be helpful to use ColorBrewer (https://colorbrewer2.org) colour palettes. Here we use palettes accessed through tmap, available in R in the RColorBrewer package.

While the choynowski() function only provides the probability map values required, the probmap() function returns raw (crude) rates, expected counts (assuming a constant rate across the study area), relative risks, and Poisson probability map values calculated using the standard cumulative distribution function ppois(). This does not fold the tails together, so that counties with lower observed counts than expected, based on population size, have values in the lower tail, and those with higher observed counts than expected have values in the upper tail, as we can see.

pmap <- probmap(nc$SID74, nc$BIR74)
nc$pmap <- pmap$pmap
brks <- c(0,0.001,0.01,0.025,0.05,0.95,0.975,0.99,0.999,1)
if (tmap4) {
  tm_shape(nc) + tm_polygons(fill="pmap", fill.scale = tm_scale(values="brewer.rd_bu", midpoint=0.5, breaks=brks), fill.legend = tm_legend(frame=FALSE, item.r = 0, position = tm_pos_out("right", "center")), lwd=0.01) + tm_layout(component.autoscale=FALSE)
} else {
tm_shape(nc) + tm_fill("pmap", breaks=brks, midpoint=0.5, palette="RdBu") + tm_layout(legend.outside=TRUE)
}
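
The components returned by probmap() can be cross-checked against a direct calculation (a sketch assuming the documented column names expCount and pmap; the comparisons can be run to confirm agreement):

```r
# Recompute expected counts under a constant study-area rate and the
# lower-tail Poisson probabilities, and compare with probmap()'s output
r_hat <- sum(nc$SID74) / sum(nc$BIR74)   # constant overall rate
expC <- nc$BIR74 * r_hat                 # expected counts by county
all.equal(pmap$expCount, expC)
all.equal(pmap$pmap, ppois(nc$SID74, expC))
```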

Marilia Carvalho (personal communication) and Virgilio Gómez Rubio (Gómez-Rubio, Ferrándiz-Ferragud, and López-Quílez 2005) have pointed to the unusual shape of the distribution of the Poisson probability values (histogram below), repeating the doubts about probability mapping voiced by Cressie (1991, 392): “an extreme value \(\ldots\) may be more due to its lack of fit to the Poisson model than to its deviation from the constant rate assumption”. There are many more high values than one would have expected, suggesting overdispersion: that is, the ratio of the variance to the mean is larger than unity.

hist(nc$pmap, main="")
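
One simple way to gauge the suspected overdispersion (a sketch, not part of the original analysis) is to fit an intercept-only quasi-Poisson rate model with births as the exposure and inspect the estimated dispersion parameter; values well above 1 are consistent with overdispersion:

```r
# Quasi-Poisson model: SID74 counts with log(BIR74) offset; the
# dispersion estimate plays the role of the variance/mean ratio
fit <- glm(SID74 ~ 1, offset = log(BIR74),
           family = quasipoisson, data = st_drop_geometry(nc))
summary(fit)$dispersion
```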