Choosing a Tile Size
The most critical property of an image catalog that needs to be defined on its creation (using the Catalog Registry Tool) is the size of an individual tile.
The scientific basis for what I am about to say is a published paper by Dr Peter R. Lamb (CSIRO Division of Information Technology) entitled Tiling Very Large Rasters. This paper was published in the _Proceedings of the 6th International Symposium on Spatial Data Handling, Edinburgh, Sept 1994_. (The paper is not available on the web: you will have to use a library to get a copy!)
I am going to record a Camtasia video on how to choose a tile size based on this paper, with some practical examples. But here is the gist of what it will contain.
Determine the average display window of the GIS clients that access the image catalog, and divide each window dimension by 4.
For example, if the dominant visualisation application using an Image Catalog (or Oracle's excellent GeoRaster technology) is a web application with an output image size of, say, 800 x 600 pixels, then all image catalogs should be tiled at 200 x 150 pixels.
Because pixel size is a normalised measure, it makes computing the tile size of an image catalog easy. However, most users of fat-client GIS applications don't know the size of their display area in pixels.
To "map" this to a projected distance, multiply the tile size in pixels by the ground size of a pixel. So, for large scale orthophotography at 1.5m/pixel the tile size expressed in metres would be 300m x 225m; for large scale cartographic data (eg 1:25,000) at 12m/pixel it would be 2400m x 1800m; and for 1:500,000 data at 100m/pixel it would be 20,000m x 15,000m.
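The two steps above can be sketched in a few lines of code. This is only an illustration of the divide-by-4 rule and the pixel-to-ground-units mapping described in the text; the function names are mine, not from any published API.

```python
def tile_size_pixels(display_w, display_h, divisor=4):
    """Tile size in pixels: the average display window divided by `divisor`."""
    return display_w // divisor, display_h // divisor

def tile_size_ground(tile_w_px, tile_h_px, metres_per_pixel):
    """Map a pixel tile size to ground units at a given pixel resolution."""
    return tile_w_px * metres_per_pixel, tile_h_px * metres_per_pixel

# An 800 x 600 web client yields 200 x 150 pixel tiles.
tw, th = tile_size_pixels(800, 600)

# At 1.5 m/pixel orthophotography that is 300 m x 225 m on the ground;
# at 12 m/pixel (1:25,000) it is 2400 m x 1800 m.
print(tile_size_ground(tw, th, 1.5))  # (300.0, 225.0)
print(tile_size_ground(tw, th, 12))   # (2400, 1800)
```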
Effects of getting the tile size wrong
Peter’s paper includes a number of graphs that show access time plotted against tile size.
What is common across all these curves is that setting a tile size larger than the optimum incurs less cost than setting it smaller than the optimum. The curves steepen much more quickly on the small side, so too small a tile size runs the danger of generating very high access times (the curve's asymptote, or limit, is being approached).
If our image catalog tile size was set to 200 x 150 pixels but a fat client was being run at a high resolution of 1280 x 1024 (for which the optimum tile size is 320 x 256 pixels) or above, then we might start to move onto the steeper part of the curve, increasing disk activity and slowing access. In this case it might be better advice to divide the average display window by, say, 3. However, statistics (or estimates of actual access) will help determine whether this is a good idea. So, if the fat client is generating only 5% of the accesses to the image catalog, one would not bother with a larger tile size. But if the fat client was generating 50% of the accesses then … well, as with so many things, it all depends on context!
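One way to make the "it depends on access mix" judgment concrete is to weight each client's preferred tile size by its share of catalog accesses. This blending rule is my own illustration, not something taken from Lamb's paper; treat it as a starting point for discussion, not a recipe.

```python
def weighted_tile_size(clients):
    """Hypothetical access-weighted tile size.

    `clients` is a list of (display_w, display_h, access_share) tuples,
    where the shares sum to 1.0. Each client's preferred tile size is its
    display window divided by 4; the result is the weighted mean.
    """
    w = sum(share * (dw / 4) for dw, dh, share in clients)
    h = sum(share * (dh / 4) for dw, dh, share in clients)
    return round(w), round(h)

# 95% web clients at 800 x 600, 5% fat clients at 1280 x 1024:
# the result stays close to 200 x 150, so the fat client can be ignored.
print(weighted_tile_size([(800, 600, 0.95), (1280, 1024, 0.05)]))

# At a 50/50 split the weighted size moves well away from 200 x 150.
print(weighted_tile_size([(800, 600, 0.5), (1280, 1024, 0.5)]))
```

Because the access-time curves punish undersized tiles more than oversized ones, one might deliberately round the weighted result up, or bias it toward the largest significant client.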
Note that I have not discussed the effect that database block sizes (such as those used in an Oracle database) and logical block read settings might have on the choice of tile size. One might also want to consider disk sector and cluster sizes (eg FAT32 vs NTFS).
Finally, for disk/file based image catalogs there is the question of the number of physical files within a folder/directory. This, too, has a major impact on performance. But that discussion is for another day…