Monday, October 31, 2016

Priory Navigation Maps: Part One

Introduction

Navigation is one of the most useful skills to have.  Today the most common way to navigate is with a smartphone app or a dedicated GPS unit, so people rarely have to stop and ask for directions anymore.  A navigation map has a few key elements that make it accurate and readable.  This doesn't mean it needs to look fancy and have all the bells and whistles; it just means the map needs to lay out all the correct information to aid in wayfinding.  In particular, it needs a proper coordinate system so the person reading the map will not end up meters, or even miles, away from their actual location.  For this activity, maps of the Priory, near Eau Claire, Wisconsin, were created.

Methods

This week two navigation maps were created to guide us through the forest next week.  The first was created using a degrees-minutes-seconds grid, whereas the other was created using a UTM grid.

With most of the needed data supplied by Professor Joe Hupy, the maps were set up.  Each needed a small amount of tweaking.  Each map needed an essential list of elements: a north arrow, scale bar, representative fraction (RF) scale, legend, data source, title, projection used, and a watermark.  The RF scale adds information about what the measurements on the map mean, which helps with pacing and judging distance while walking through the woods.  The UTM map used the NAD 1983 UTM Zone 15N projection, while the other was put into WGS 1984 to accurately represent degrees, minutes, and seconds.  The UTM map was also given a grid marking every 50 meters, and the grid labels were made more readable by shrinking the leading digits, which do not change across the map.  The degree grid was made using 5-second intervals.  The UTM navigation map is in Figure 1 and the degrees-minutes-seconds map is in Figure 2.
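As a rough illustration of what these two grids mean in practice, here is a minimal Python sketch that converts a point between them (pyproj and the sample coordinates are my own additions, not part of the lab):

```python
# Sketch: converting between the two grids used on these maps with pyproj.
from pyproj import Transformer

# EPSG:4326 = WGS 1984 (degrees); EPSG:26915 = NAD 1983 / UTM Zone 15N (meters)
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:26915", always_xy=True)

lon, lat = -91.3, 44.75  # a hypothetical point near the Priory
easting, northing = to_utm.transform(lon, lat)
print(f"UTM 15N: {easting:.1f} E, {northing:.1f} N")
```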
Figure 1: UTM Map


Figure 2: Degrees-Minutes-Seconds Map
Discussion

Creating maps like these is crucial for navigating unknown terrain.  They are important for a few reasons.  First, they show detailed elevation of the region, which is very helpful because it can keep you from climbing up and down hills constantly.  Second, the geology department at UWEC teaches a technique for finding your own location on a map based on triangulation.  To do this, pick out three identifiable points on the navigation map and determine the azimuth from each point to where you are standing.  The three lines form a small triangle of intersection that gives your location.  These kinds of maps are therefore important to know how to use.  The projections used also matter because they need to reflect the surface accurately.
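To make the idea concrete, here is a minimal sketch of that triangulation as a least-squares intersection of the three azimuth lines; the landmark coordinates and azimuths below are hypothetical:

```python
# Sketch: locating yourself by intersecting azimuth lines from three known
# landmarks (a least-squares version of the triangulation described above).
import numpy as np

def intersect_azimuth_lines(points, azimuths_deg):
    """Each line starts at a known point and runs along an azimuth
    (degrees clockwise from north). Solve for the point closest to all
    of the lines in a least-squares sense."""
    A, b = [], []
    for (x, y), az in zip(points, azimuths_deg):
        theta = np.radians(az)
        # a line heading (sin t, cos t) has normal (cos t, -sin t)
        n = np.array([np.cos(theta), -np.sin(theta)])
        A.append(n)
        b.append(n @ np.array([x, y]))
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p  # estimated (x, y) position

# Hypothetical landmarks (UTM meters) and azimuths toward the observer
landmarks = [(609000, 4950000), (609400, 4950100), (609200, 4949700)]
azimuths = [135.0, 250.0, 10.0]
print(intersect_azimuth_lines(landmarks, azimuths))
```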

The UTM coordinate system is broken up into roughly 10 zones across the United States.  This matters because the narrow zones keep distortion low and fit the curved shape of the earth better.  The other coordinate system, WGS 84, was chosen because it is the one most GPS receivers use, which will make navigating the Priory easier.
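The zone for any longitude follows a simple standard rule, sketched below (the function name is mine):

```python
# Sketch: the standard rule for finding a UTM zone from longitude;
# Eau Claire (about 91.5 degrees W) falls in Zone 15.
def utm_zone(lon_deg: float) -> int:
    return int((lon_deg + 180) // 6) + 1

print(utm_zone(-91.5))  # -> 15
```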

Conclusion

This exercise is important for understanding how to accurately create navigation maps to use in the field.  It also taught how different coordinate systems and projections affect the look of the map, even if only slightly.  An accurate map matters because an easy day in the field can turn into a nightmare without one, and these maps will be put to good use next week during our field experience.


Monday, October 24, 2016

Distance Azimuth Survey

Introduction



This week's survey involves creating a survey plot that will be used to identify trees found within the area. This method is a good backup to understand and use when technology fails. To apply it, measurements of distance, azimuth, and GPS points will be taken.  The distance records how far the tree is from the starting point, and the azimuth records the horizontal angle to the tree, measured clockwise from north (0 degrees).
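As a sketch of the underlying math, one reading converts to an offset from the starting point like this (the coordinates and values are hypothetical, assuming a planar grid in meters):

```python
# Sketch: turning one distance/azimuth reading into grid coordinates
# (azimuth in degrees clockwise from north).
import math

def tree_position(x0, y0, distance_m, azimuth_deg):
    az = math.radians(azimuth_deg)
    return x0 + distance_m * math.sin(az), y0 + distance_m * math.cos(az)

# Hypothetical reading: 12.4 m from the origin at an azimuth of 215 degrees
print(tree_position(0.0, 0.0, 12.4, 215.0))
```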





Study Area and Set Up

The survey area selected for the distance azimuth survey was located on Putnam Trail, just east of Phillips Hall on the University of Wisconsin - Eau Claire campus.  This area is generally swampy during the summer, when the water table is higher, making it difficult to walk among the trees in the low spots.  It has a large population of many different tree species along the trail, which runs along the bottom of the hill well known to students and staff on campus.  This area was selected because it offered a large number of trees to take azimuth readings on.  The study area can be seen in map view in Figure 1, which shows the locations of the three study sites.









Figure 1: These are the study areas selected due to the large tree population. The green box is study area 1, the blue box is study area 2, and the red box is study area 3.  North is up.

Once we got to the sites there was some confusion because no one had used the equipment before.  The equipment included a measuring wheel, a measuring tape that converts circumference to diameter on the fly (Figure 2), a compass able to take an azimuth measurement (Figure 3), a distance finder that uses ultrasound, and a distance finder that uses a laser (Figure 3). The class was broken into three groups, each collecting data in one of the three spots using a normalized method so it would be easier to build a spreadsheet once back in the office. The attributes taken for each data point included:


  1. Longitude (X)
  2. Latitude (Y)
  3. Distance (meters)
  4. Azimuth (degrees)
  5. Diameter at breast height
  6. Tree Species
  7. Sample Area Number
Figure 2: Sarah Ward using the tape measure that converts circumference to diameter on the fly. 
Figure 3: Jesse Friend and Kyle Roloff (myself) taking a distance measurement (Jesse) and an azimuth reading (Kyle). 
The attributes chosen are each needed to build a survey from azimuth data.  To use this data within ArcGIS there needs to be an X,Y point so the software can determine, based on the azimuth and the distance, where each tree is located within the survey grid.  The diameter could help determine the age of the tree or another physical feature, and the species could be used in another application to show what kinds of trees dominate the Putnam Trail area.  The last attribute is simply the sample area number, which keeps straight which area each point belongs to.  A sketch of one normalized record follows.
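The field names and values below are hypothetical, just one reasonable way to standardize the notebook entries:

```python
# Sketch: one normalized record matching the attribute list above
# (values are made up; field names are one reasonable choice).
record = {
    "X": -91.4986,          # longitude, decimal degrees
    "Y": 44.7977,           # latitude, decimal degrees
    "distance_m": 8.6,      # distance from the origin point
    "azimuth_deg": 142.0,   # degrees clockwise from north
    "dbh_cm": 31.0,         # diameter at breast height
    "species": "red oak",
    "sample_area": 3,
}
```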

Methods

Many steps went into this survey:

Step One: Locate the study area.  This was chosen by finding an area that has a large tree population and could easily be identified on Google Maps, to help check the accuracy of the project (Figure 1).

Step Two: Get a GPS point of the starting location.  All of the surveyed trees will be based off of this point, so it is imperative to get an accurate GPS reading.
Step Three: Pick a tree to start the survey.

Step Four: Use a compass to get an azimuth by aiming the compass at the tree; the reading where the arrow points is the azimuth (Figure 3).

Step Five: Use a distance finder or the measuring wheel to get a distance from the original point to the tree being surveyed (Figure 3).

Step Six: Use information about trees to figure out what the species is.

Step Seven: Use the measuring tape that converts circumference to diameter on the fly to measure the tree at breast height (Figure 2).

Step Eight: Record all the information down in a notebook to bring back to the office.

Step Nine: Use the Bearing Distance to Line tool, found in the Data Management toolbox in ArcToolbox. Before it can be used, the data needs to be imported and made into a point feature class.  This is done by creating a feature class, then right-clicking it to import the X,Y coordinates.  Once the points are imported, superimpose them over a basemap to assess their accuracy.  This is important for the final product so viewers can see how accurate the points are.  (A short arcpy sketch of these steps follows Step Twelve.)

Step Ten: Use the Bearing Distance to Line tool to create the lines that run in the direction of each tree.


Step Eleven: Next, take that line feature class and run the Feature Vertices to Points tool. Make sure to use the End parameter, because the starting point is not a tree and is not needed.

Step Twelve: Create a quality-looking map that shows all of the data collected in the field.
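Here is a minimal arcpy sketch of Steps Nine through Eleven; the paths and field names are hypothetical, and it assumes the survey table carries the origin X,Y plus the distance and azimuth for each tree:

```python
import arcpy

arcpy.env.workspace = r"C:\survey\trees.gdb"  # hypothetical geodatabase

# Steps Nine/Ten: build lines that start at the origin X,Y and run
# along each azimuth for the measured distance.
arcpy.BearingDistanceToLine_management(
    in_table=r"C:\survey\survey_points.csv",
    out_featureclass="azimuth_lines",
    x_field="X", y_field="Y",
    distance_field="distance_m", distance_units="METERS",
    bearing_field="azimuth_deg", bearing_units="DEGREES")

# Step Eleven: keep only each line's END vertex -- that is the tree.
arcpy.FeatureVerticesToPoints_management("azimuth_lines", "tree_points", "END")
```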










Figure 4: Final map showing the locations of trees from all three survey areas.




Discussion/Results

There were a few problems that stood out right away when the data was imported into ArcGIS.  First, the X and Y coordinates were backwards; luckily that is an easy fix.  Next, the group with the red study area had a problem with their GPS, possibly caused by the ridge right next to where the initial point was taken.  The solution was to move the point closer to the study area, even though it may not be exact, and this will be taken into account when judging accuracy.  For a professional survey the point would need to be redone, and it should have been confirmed against another source before continuing.

This technique can be very useful when technology fails.  It can be used for any data collection as long as there is one known point, a compass, and a measuring wheel.  It is easy to use as long as the table is set up correctly and the azimuth is corrected for the local magnetic declination.  It can be used to plot trees, bomb craters, or rock outcrops.  The technology that has replaced it includes laser distance finders and GPS units that collect points directly; the points can then be transferred over Bluetooth to a tablet, where attributes are entered without having to write everything down in a field notebook.

The results started off poorly because the points were located a few miles away; the only real fix was to move them to the correct location using an educated guess.  After that, the trees and their directions looked quite good on the map.  For group three, the pattern looks exactly like the trees the group recorded.  The results are pleasing, though they may not be the most accurate because of the GPS offset.  Another problem is how hard it is to figure out exactly where the study locations were: the tree cover makes it difficult to distinguish individual trees.  It would be better to try this survey in an area with fewer trees to see whether the lines actually point to the correct tree.  In my opinion, the newer technology made it very quick to go from point to point; the older equipment would exponentially increase the time spent on each tree.




Conclusion



This was a great lab for learning survey techniques that can be used when technology fails.  The only essential requirements are a correct GPS point and the tools to take distance and azimuth measurements. It is important to know this method because there is no way to predict when technology will fail.  Luckily for us, technology made the survey go quicker than having to use the measuring wheel for each tree. The accuracy was about 70% and could have been better if the points had not been thrown off by the ridge.  It can be tedious to do this with the older equipment, but at the end of the day the job needs to get done even when the technology fails.



Photo credits go to Google and Heather Wood.  

Sunday, October 16, 2016

Visualizing Survey Data

Introduction

In the previous lab we created a sandbox that was 114 cm x 114 cm, filled with brown sand.  We then laid out a grid of 6 cm x 6 cm squares over it.  This created a sample area where we could measure height, relative to the top of the sandbox, to use as data for an Excel table to be brought into ArcMap.  Data normalization means organizing data fields to a standard.  It was important in this lab because creating the 3D model required recording three numbers for each point: an X, a Y, and a Z.  Normalization also improves data integrity: when data follows a standard it is easier to find and fix mistakes.  It relates to this lab more broadly because field-based surveys need some kind of normalization to produce accurate, smart, usable data.  The points collected here show the difference in relief between locations.  The best way to build a landscape from this data is interpolation, a set of tools found in ArcMap.  Interpolation estimates raster values from the data points around each cell.  The types used include IDW, natural neighbor, kriging, spline, and a TIN.

Methods
To start this lab off, the first thing was to create a folder specifically for this project, named 'sandbox' for obvious reasons.  Next, create a geodatabase within the sandbox folder.  Then upload the numeric Excel file containing the x, y, and z data into ArcMap, and use Add XY Data to create a new feature class.  The final step was using the different interpolation tools to create a continuous surface map.  Inverse distance weighted (IDW) interpolation (Figure 1) determines each raster cell value using a linearly weighted combination of sample points. The next tool, natural neighbor (Figure 3), finds the closest subset of input samples to a query point and weights them based on proportional areas; it is also known as "area-stealing" interpolation.  Kriging (Figure 2) derives a prediction for an unmeasured location from the surrounding measured values. Spline (Figure 4) can be thought of as a sheet of rubber that passes through the input points while minimizing the curvature of the surface.  The last tool is a TIN, or triangulated irregular network, which connects the points together in a triangle-based mesh (Figure 5). To use the image from ArcScene in ArcMap, the only way is to save it as a layer file, export the scene as a picture, and then place the picture over the top of the layer file, which supplies the legend information for the map.  The orientation used is the one most resembling the original survey, with X on the bottom, Y on the side, and Z as height.  Scale is reflected by ____.  It is important because without scale there is no way to tell how much relief is present.
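For anyone without ArcMap, here is a minimal sketch of the same idea using SciPy's griddata. This is an open-source stand-in rather than the tools actually used in the lab, and the sample points are made up:

```python
# Sketch: gridding x, y, z survey points into a continuous surface.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical survey points (cm): x, y on the grid, z relative to the box top
pts = np.array([[0, 0], [6, 0], [0, 6], [6, 6], [3, 3]])
z = np.array([-2.0, -1.0, -3.0, -1.5, 4.0])

# Evaluate on a fine grid; 'linear' behaves like a TIN, 'cubic' like a spline
gx, gy = np.mgrid[0:6:50j, 0:6:50j]
surface = griddata(pts, z, (gx, gy), method="cubic")
print(surface.shape)  # (50, 50) continuous surface
```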

Data/Discussion #1

In Figure 1, the IDW method shows a decent example of what the sandbox looked like.  However, it gives the ridge in the bottom right corner more of a multi-peak mountain look, and gives the hill in the center a strange middle line extending into the picture.  It also did not display the depressions in the top right of the map well; each one turned out smaller than it should have.  The long valley on the left side of the map has nowhere near as much depth as in the other interpolations.

The next method was kriging (Figure 2), which was actually the worst at showing the heights of the ridge and hill and the lows of the depressions and valley.  It makes the surface look almost 2D.  The colors show where the different structures are, but there is not really any relief shown.  It barely shows the ridge in the bottom right corner, shows none of the depressions in the top right, and shows the hill as the highest point when the ridge and hill actually share the same high values.  This makes it one of the more inaccurate interpolation methods for this survey.

In Figure 3 the method is natural neighbor, which did a fair job at showing the relief of the sandbox.  It shows the ridge in the bottom right corner and the hill in the center.  The depressions in the top right are clearly visible and the correct size, and the valley on the left side is rendered in good detail, accurately matching what it looked like in our sand.  Overall, this is the second-best method for this survey.

Figure 4 is the spline interpolation, the best overall at displaying the sandbox survey.  It shows all of the features listed above very well, gives a good sense of the relief throughout the model, and overall looks just like what was created.  This is the most accurate method for this landscape.

The last method was the TIN.  This is not truly an interpolation method, but it can be used like one for displaying elevation.  It does a fair job at showing the relief from the hill to the plains to the valley, although not as well as the spline.




Figure 1: IDW Interpolation

Figure 2: Kriging Interpolation

Figure 3: Natural Neighbor Interpolation

Figure 4: Spline Interpolation

Figure 5: TIN 

Revisit Survey

On the remake of the survey, Group 2 went out to conduct a more detailed survey of the upper right corner, where the depressions are located.  To do this we went from a 6 cm x 6 cm grid to 3 cm x 6 cm in that area, which gave a better recreation of the relief of the depressions in the top right corner, along with a somewhat better-developed ridge in the bottom right corner. Having more data points makes this data more valuable and more precise for the model.  Figure 6 shows the ridge in the bottom right and even shows where the sand for the ridge came from when the group built it.  This was harder to see in Figures 1-5, but this version is a more detailed and accurate rendering of the survey.
Figure 6: Spline Revised

Conclusion

This survey relates to other field-based surveys because it teaches how important it is to normalize the data; without normalized data the process becomes far more time consuming than it needs to be.  It also relates because the z value does not need to be height.  It could be population or a count of something else, which makes this lab extremely valuable for more than one type of survey; the result would simply be a different kind of model rather than an elevation model.   It is not always realistic to perform such a detailed grid, especially when the area grows to acres in size.  It can be extremely difficult to take data points when they are so far apart, so it is important to understand the different kinds of sampling, which were discussed in the last blog post.  Interpolation can be used for many different types of data.  For example, it can take temperatures from data points and use the interpolation tools to create a continuous surface of temperatures; almost any continuous surface map can be made with interpolation.

Monday, October 10, 2016

Sand Box Digital Elevation Surface

Introduction

In this lab we created a terrain in a sandbox that included a ridge, a valley, a depression, a hill, and a plain, built with some creativity.  There are a few different ways to sample it.  The first is random sampling, which means basically picking random points in the landscape, although this seems the least effective for building a model that defines the landscape. Another way is systematic sampling, which means taking points in order from a designated origin.  The way we used was stratified sampling, which divides the landscape into known sections.  Sampling is a very useful tool because there is not nearly enough time in the world to measure every single spot, so we make interpretations and use our knowledge of the landscape to get the values needed for our model.  This lab's objective is to create the landscape, use a grid system to create points, build a spreadsheet, and feed it to a computer program that makes a digital representation of the terrain we created.



Figure 1:  A picture showing the topography and the grid system used to take our sample points.  





Methods

The sampling technique used for this lab was systematic (sequential) sampling, because it made the most sense with the terrain created and seemed like the fastest way to build our model.  A similar option is random sampling; it is similar in that it takes fewer points than full systematic coverage, but with stratified sampling there is more accuracy because similar points are combined together. The materials used in this lab include a wooden sandbox with a large amount of brown sand, a meter stick, and string for dividing the sample locations into an easy-to-identify grid.  The sampling scheme was set at 6 cm x 6 cm, small enough to collect useful data but large enough to avoid an overwhelming number of measurements.  It was set up with an origin in one corner, so each point correlates to two coordinates like a general x-y chart, plus a Z value giving elevation relative to where we decided to put sea level: the top of the sandbox was defined as zero elevation.  The data was entered as 1-1-Z, 2-1-Z, 3-1-Z, etc.  We chose this entry method because it is easy to enter into a spreadsheet, and it is easy to read and recreate if needed.  A sketch of this entry scheme follows.
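Here is a small sketch of that entry scheme, writing the grid out in the same 1-1-Z, 2-1-Z order to a CSV (the file name, grid count, and placeholder z values are hypothetical):

```python
# Sketch: the x-y-z entry scheme described above, built as a long-format
# table ready for a spreadsheet (values here are placeholders).
import csv

ncols, nrows = 19, 19  # a 114 cm box at 6 cm spacing
with open("sandbox_points.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "z"])
    for y in range(1, nrows + 1):
        for x in range(1, ncols + 1):
            z = 0.0  # the measured height relative to the box top goes here
            writer.writerow([x, y, z])
```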

Results/Discussion

The resulting number of sample points was 321.  The minimum value was -14, the maximum was +4, the mean was -2, and the standard deviation was 3.  The sampling related well to the method we chose, and the group stuck to the original plan of using each grid square as one point, so there is no inaccuracy from changing methods partway through. Some problems encountered were that the string grid was not completely tight, leaving more slack in some places than others, and that averaging the terrain over a square was difficult.  We took the average of the four corner points, used that number, and kept the string as tight as we could.


Figure 2: This is a grid showing the data we collected for each 6 cm x 6 cm square.  It is actually remarkable how well plain numbers can show the valleys, hills, and ridges. 

Conclusion 

This relates to the sequential method because we took every grid mark as a point, in an order that can easily be used to create a digital model. It also relates to the other methods because it takes a large area and samples small points of it to make a model that involves interpretation.  It is important to use sampling in spatial situations because there is no possible way to measure every single point in the sampling area.  It is nearly impossible, but a small sample spread throughout the area makes the data easier to understand and allows smart interpretation.  This activity relates to sampling larger areas because it teaches strategies for handling them: setting up a grid, taking an average value of what is being measured in each cell, and creating a model of the area.  The numbers gathered provided a decent amount of detail and did show the highs and lows within the map, but some areas need more detail.  To refine the survey it would be beneficial to give the areas with higher relief a smaller grid size than 6 x 6 cm; 3 x 3 cm would possibly be better, just to get more data in those regions.

Tuesday, October 4, 2016

Part 3: HadleyVille Cemetery


Introduction
The problem at hand is that the Hadleyville Cemetery lost most of its records and is unable to quickly look at a map and tell what is going on in the area.  To read more about the problem and the methods used, please visit my last blog post at link

Figure 1: Image from Google Maps showing the Hadleyville Cemetery just outside of Eau Claire, WI: go south on Highway 93, turn right on HH, and follow it until the cemetery appears on the left.

Methods

There were a few different tools used to conduct the survey.  The class used a UAS to take a high-resolution picture with 95% coverage of the mapping area.  Also available was a survey-grade GPS, which turned out not to be needed since the imagery was such high resolution that the grave sites were easy enough to digitize ourselves in ArcMap.  The class also used cameras to photograph the graves and took handwritten notes, with different rows assigned to each group.  Since we had a high-resolution photo, we did not have to spend a large amount of time visiting each of the 150+ grave sites to take a data point, which saved about a whole class period (2.5 hrs).
The data was recorded by writing down all of the crucial information the class agreed on during the one day we had at the site. This data included names, date of birth, date of death, quality of stone, type of stone, and whether it was readable.  A purely digital approach is not always best because something could get lost or go missing easily, so it is always handy to keep a hard copy.  The media types used for data collection were handwritten notes, high-resolution aerial photos, and photographs.  The format was to take one row at a time and collect the information going from the road to the back of the cemetery. 
The hand-recorded data was transferred into a Google Sheets page, which was the most difficult part because of the many different note-taking styles.  This is where the problems started to pop up for the class: normalizing the data was a feat all in itself.  The Google Sheets page was one of the better ideas, rather than creating an Excel file to send around while waiting for everyone to fill it in. The class saved a lot of time because everyone could enter data at the same time under normalized headings and row numbers.  Once this was completed and agreed upon, an image was created to assign rows and numbers to the grave sites, making an easy reference to digitize from.  This made the digitizing faster; there were a few errors to deal with, but it was an overall great idea. 
Figure 2: These are the rows created to normalize the digitizing process, created by Marcus Sessler. 

The map of the cemetery was finally digitized and created.  It included a feature class called ‘grave_sites’ with the locations of all the graves.  This feature class contained an identifying attribute with values like Point ID A1, A2, etc.  With this normalized against the Google Sheets page, which was downloaded as a .csv, joining the table to the digitized points became very easy and the result looked very good.  Being able to digitize straight from the image also made the process much quicker because the pixels gave great detail.  A sketch of the join follows.
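Here is a minimal arcpy sketch of that join; the paths, layer name, and field names are hypothetical:

```python
import arcpy

# Make a layer from the digitized points (joins require a layer, not a
# feature class path).
arcpy.MakeFeatureLayer_management(
    r"C:\cemetery\hadleyville.gdb\grave_sites", "graves_lyr")

# Join the Google Sheets .csv export to the points on the shared Point ID
arcpy.AddJoin_management("graves_lyr", "Point_ID",
                         r"C:\cemetery\grave_records.csv", "Point_ID")
```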

Results

Attribute table:
Figure 3: This shows the attribute table of the collected data joined to my point data in ArcMap.
How the interaction works:
Figure 4: This image shows how the map is interactive: each grave stone is connected to a picture along with all of its data. 


Map:
Figure 5: This is the finished map.  Each grave site can be clicked to bring up a picture and information about that grave. 

There was far more time spent on data collection than on the actual GIS work, because of all the different data collection methods used.  Normalizing the data was the most difficult part, since it took time to decide the correct way to represent the information; to remedy this, the class came together to create a normalized spreadsheet.  Having spent that much time on data collection, though, made the GIS part a breeze, actually the least time-consuming part of the entire project.  The survey GPS was not used in this project because of how long it would have taken to collect each point.  Some sources of error might include the placement of digitized graves, spelling in the spreadsheet, human errors in the notes, and the wrong picture being attached to a grave. 
To save more time and collect data better, the class could have divided up the cemetery before going to the location, which would have avoided standing around figuring out what to do.  One group could have used the GPS the entire time to get an actual GPS point for each grave stone, and one person could have taken pictures in an order decided on before the class went into the field.  The class could also have chosen Google Sheets in advance so everyone could enter their information before the next class period.  Having all of this decided up front would have meant more detailed information on the cemetery. 

Conclusion


The methods transferred well to solving the goal of the project.  After everything was agreed upon, the whole process was very smooth; with all of the graves written down and photographed, the project came down to a simple table join.  The mixed formats of data collection may have lowered the accuracy, but once the data was normalized it became easy to spot problems with around 20 people looking at it.  The remaining sources of error are negligible because the final product looks great and shows an accurate, interactive map of the cemetery.  The overall success of the survey is very high: it gives all the data the cemetery needs, showing exactly where all of the plots are along with the data for each plot.  This GIS will be used to enter new plots as they are created and to show people looking for plots which locations are open for purchase.  It could also be used to find family lineages and the locations of deceased family members.