adding sample notebook for extracting slums from satellite imagery and detecting brick kilns sample. #506
Conversation
Check out this pull request on ReviewNB. You'll be able to see the Jupyter notebook diff and discuss changes. Powered by ReviewNB.
@cyber-aman could you do the first round of reviews?
View / edit / reply to this conversation on ReviewNB (backstory for this conversation format). cyber-aman commented on 2019-09-19T09:33:32Z {review note} Replace introduction with ????
With growing urbanization, the demand for bricks is increasing almost universally. Brick production is a very large and traditional industry in many parts of Asia. The brick sector in India, although unorganized, is tremendous in size: India is the second largest brick producer in the world (China dominates with a 54% share) [1].
Brick kilns cause pollution by emitting particulate matter rich in carbon particles along with high concentrations of carbon monoxide and oxides of sulphur (SOx). These emissions are extremely hazardous to health and can impact the physical and mental growth of children.
There are several guidelines and directives issued by the Govt. of India to control unplanned expansion and pollution. The Central Pollution Control Board (CPCB) issued a directive in June 2017 asking brick kilns across India to convert to a zigzag setting with a rectangular kiln shape. The directive clearly stated that brick kilns operating without permission and consent from their respective State Pollution Control Boards (SPCBs) would be shut down [2]. Despite the directive, many brick kilns operate without following the design norms prescribed by the CPCB.
In this study, we will use deep learning techniques integrated with ArcGIS tools to detect brick kilns around the Delhi NCR area in India that are not built according to the CPCB's directions. Deep learning is a tried and tested method for object detection in images. We will use these techniques to detect brick kilns based on the design specified by the CPCB. The high-level steps we will follow are:
[1] Sheikh, Afeefa (2014). Brick Kilns: Cause of Atmospheric Pollution.
[2] https://www.cseindia.org/content/downloadreports/9387
cyber-aman commented on 2019-09-19T09:33:33Z A section is needed to explain what data we are using and how a brick kiln looks in the data we are using for our model. sandeepgadhwal commented on 2019-09-20T04:50:21Z Above the image I have written: We will be using the ESRI World Imagery basemap layer to train the model, and for a comparative analysis we will also be using an ESRI World Imagery basemap layer from the year 2014, which we found using the Wayback imagery tool.
And below the image: In the above image we can see zigzag-shaped brick kilns (blue marker) and oval-shaped brick kilns (red marker).
cyber-aman commented on 2019-09-19T09:33:34Z We will use
Mention the values to be passed and ask users to follow the tool reference page. Input Raster: world_imagery_2019, Output Folder: ..... .....
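For reference, a scripted equivalent of this GP tool step might look like the sketch below. This is an illustrative assumption, not the notebook's actual code: the output folder, label layer name, tile size, and metadata format are all hypothetical, and users should consult the Export Training Data For Deep Learning tool reference page for the full parameter list.

```python
import arcpy

# Hedged sketch of exporting training chips; only in_raster comes
# from the review note, everything else is a hypothetical placeholder.
arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster="world_imagery_2019",      # input raster from the review note
    out_folder=r"C:\data\brick_kilns",   # hypothetical output folder
    in_class_data="kiln_labels",         # hypothetical labelled kiln features
    image_chip_format="TIFF",
    tile_size_x=448, tile_size_y=448,
    metadata_format="PASCAL_VOC_rectangles",
)
```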
cyber-aman commented on 2019-09-19T09:33:34Z Can we mention that "after filling all details and running the
What are the necessary files? Can these be explained? sandeepgadhwal commented on 2019-09-20T05:02:37Z Should we list out what we export? It would be too much information for users.
cyber-aman commented on 2019-09-19T09:33:35Z We will train our model using
cyber-aman commented on 2019-09-19T09:33:36Z Change required > Necessary Imports
cyber-aman commented on 2019-09-19T09:33:36Z **Remove the Prepare Data heading from the list of contents**
We will now use the ... ... ...
<explain what is output of this function and how this will be used in subsequent steps>
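A sketch of this data-preparation step using arcgis.learn's `prepare_data` could look like the snippet below; the folder path and batch size are illustrative assumptions, not values from the notebook.

```python
from arcgis.learn import prepare_data

# Hypothetical path to the folder produced by the Export Training Data
# For Deep Learning tool; batch_size is an illustrative choice.
data = prepare_data(r"C:\data\brick_kilns", batch_size=64)
```

The returned object wraps the training chips and labels and is passed to the model constructor and visualization calls in the later steps.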
cyber-aman commented on 2019-09-19T09:33:37Z Visualize Training Data
To make sense of the training data we will use the
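The visualization call could be sketched as below; `data` is assumed to be the object returned by `prepare_data`, and the `rows` value is an illustrative choice.

```python
# Display random training chips with their brick-kiln labels overlaid,
# to sanity-check the exported training data.
data.show_batch(rows=2)
```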
cyber-aman commented on 2019-09-19T09:33:38Z Explain the above visualization a bit...
cyber-aman commented on 2019-09-19T09:33:39Z Load Model Architecture
Introduce the SingleShotDetector model architecture as in this notebook. Add references as appropriate.
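Instantiating the model might look like the sketch below; the grid, zoom, and ratio values are illustrative assumptions rather than the notebook's actual configuration, and `data` is the object returned by `prepare_data`.

```python
from arcgis.learn import SingleShotDetector

# A single 4x4 detection grid with square anchor boxes; these values
# would be tuned to the size of brick kilns in the imagery.
ssd = SingleShotDetector(data, grids=[4], zooms=[1.0], ratios=[[1.0, 1.0]])
```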
cyber-aman commented on 2019-09-19T09:33:40Z Train the Model **Keep 'Find an Optimal Learning Rate' and 'Fit the Model' as subheadings under the main heading, i.e., Train the Model** Find an Optimal Learning Rate
Deep learning models are optimized by tuning hyperparameters. A hyperparameter is a parameter whose value is set before the learning process begins [3]. The learning rate is a key hyperparameter that determines how we adjust the weights of our network with respect to the loss gradient [4]. Too high a learning rate will cause our model to converge to a sub-optimal solution, and too low a learning rate can slow down convergence. We can use the
[3] https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)
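The effect of the learning rate described above can be illustrated with a minimal, self-contained sketch (plain Python, separate from the notebook's arcgis.learn code): gradient descent on f(x) = x² converges with a small learning rate and diverges when the rate is too high.

```python
def gradient_descent(lr, steps=50, x=5.0):
    """Minimize f(x) = x**2; the gradient of x**2 is 2*x."""
    for _ in range(steps):
        x = x - lr * 2 * x
    return x

small = gradient_descent(lr=0.1)   # each step multiplies x by 0.8 -> converges
large = gradient_descent(lr=1.1)   # each step multiplies x by -1.2 -> diverges
print(abs(small) < 1e-3, abs(large) > 1e3)  # True True
```

Tools like the learning rate finder automate this search by trying a sweep of rates and observing how the loss responds.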
cyber-aman commented on 2019-09-19T09:33:40Z Based on the learning rate plot above, we can see that the learning rate suggested by
cyber-aman commented on 2019-09-19T09:33:41Z **Indent the heading Fit the Model under Train the Model as in the previous section**
An epoch defines how many times the model is exposed to the entire training set. To start, we will train our model for 10 epochs.
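As a minimal, self-contained illustration of epochs (plain Python with made-up data, not the notebook's training code): each epoch is one full pass over the training set, and repeated passes progressively refine the model's parameters.

```python
# Learn y = 2*x with stochastic gradient descent over 10 epochs.
data = [(x, 2 * x) for x in range(1, 6)]    # tiny made-up training set
w, lr = 0.0, 0.01
for epoch in range(10):                     # 10 epochs = 10 full passes
    for x, y in data:
        grad = 2 * (w * x - y) * x          # gradient of the squared error
        w -= lr * grad
print(round(w, 2))  # ~2.0
```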
cyber-aman commented on 2019-09-19T09:33:42Z The training data is split into a training set and a validation set at the Prepare Data step. By default, the training vs. validation proportion is 80% to 20%. The next step is to save the model for further training or inference later. By default, the model will be saved into the data path specified at the beginning of this notebook.
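The 80/20 split mentioned above can be illustrated with a small, self-contained sketch (plain Python; the chip file names are hypothetical, and the real split happens inside the Prepare Data step):

```python
import random

chips = [f"chip_{i}.tif" for i in range(100)]  # hypothetical image chips
random.seed(0)                                 # reproducible shuffle
random.shuffle(chips)
cut = int(len(chips) * 0.8)                    # 80% train, 20% validation
train, valid = chips[:cut], chips[cut:]
print(len(train), len(valid))  # 80 20
```

Shuffling before the split matters: it prevents the validation set from being a geographically contiguous block of chips.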
cyber-aman commented on 2019-09-19T09:33:43Z Remove helping words like 'just'. Why mention UnetClassifier here?
cyber-aman commented on 2019-09-19T09:33:43Z Give a proper name to the saved model, e.g. ssd_resnet_brick-kiln_01
cyber-aman commented on 2019-09-19T09:33:44Z To retrain a saved model (e.g. on new training data), we can load it again using the code below and follow the steps mentioned in the Fit the Model step
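A sketch of reloading a saved model with arcgis.learn could look like the snippet below; the model name follows the reviewer's suggestion, and `data` is assumed to be the object returned by `prepare_data`.

```python
from arcgis.learn import SingleShotDetector

# Recreate the architecture, then load the saved weights by name.
ssd = SingleShotDetector(data)
ssd.load('ssd_resnet_brick-kiln_01')
```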
cyber-aman commented on 2019-09-19T09:33:45Z Visualize results in validation set
It is good practice to see the results of the model vis-à-vis the ground truth. The code below picks random samples and shows us the ground truth and the model predictions side by side. This enables us to preview the results of the model within the notebook.
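The side-by-side preview could be sketched as below; the `rows` and `thresh` values are illustrative assumptions, and `ssd` is the trained model from the previous steps.

```python
# Ground truth (left) vs. model predictions (right) for random
# validation samples; thresh filters out low-confidence detections.
ssd.show_results(rows=4, thresh=0.3)
```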
cyber-aman commented on 2019-09-19T09:33:46Z <explain ground truth vs predictions visualization>
cyber-aman commented on 2019-09-19T09:33:46Z <FOLLOWING PARAGRAPH NEEDS RECTIFICATION... what is the comparison about and which 2 datasets are used to show the comparison results> We will use the saved model to extract a classified raster using
For the GP tool parameters, just mention the values to be passed and ask users to follow the tool reference page.
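If this inference step uses the Detect Objects Using Deep Learning GP tool, a scripted call might look like the sketch below. This is an assumption, not the notebook's actual code: the output layer name and model definition path are hypothetical, and users should follow the tool reference page for the full parameter list.

```python
import arcpy

# Hedged sketch of running inference; only in_raster comes from the
# review note, the other values are hypothetical placeholders.
arcpy.ia.DetectObjectsUsingDeepLearning(
    in_raster="world_imagery_2019",         # input raster from the review note
    out_detected_objects="detected_kilns",  # hypothetical output feature class
    in_model_definition=r"C:\models\ssd_resnet_brick-kiln_01.emd",  # hypothetical .emd
)
```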
cyber-aman commented on 2019-09-19T09:33:47Z No need for the heading Final Output
need to refine output...
cyber-aman commented on 2019-09-19T09:33:48Z Need to expand the conclusion and tie it to the introduction where we define the problem sandeepgadhwal commented on 2019-09-20T06:17:47Z In the cell below it we have added an example of a dashboard that can be made using the output.
cyber-aman commented on 2019-09-19T09:33:48Z Need this image???? sandeepgadhwal commented on 2019-09-20T06:15:38Z Here we are showing our end product from the sample.
@sandeepgadhwal the dashboard is not loading fully (and it was private):
I cloned the dashboard but some layers and functionalities failed; I will try to publish it here again.
@sandeepgadhwal could you post a comment that it's ready for review after you have reviewed and incorporated @cyber-aman's comments and the dashboard?
@sandeepgadhwal is this ready for review?
@AtmaMani this can be reviewed now.
AtmaMani commented on 2019-12-02T18:39:43Z The validation loss seems quite high. I wonder if we should improve this further. Or we need to justify why the loss is high and why it is acceptable.
@sandeepgadhwal @cyber-aman Thanks for the sample and the detailed reviews. It looks good except that the brick kiln sample has a high validation loss at 10 epochs. Can we improve it?
@AtmaMani the validation loss is going down continuously, which means the model is getting trained. We cannot compare the training loss and the validation loss directly.
@sandeepgadhwal agreed. I mistook the validation loss for an F1 score.
Looks good, thanks for the reviews and continuous improvement.
Fixes #2457: arcgis.learn - Slum Change Detection notebook sample.
Added sample notebook for extracting slums from satellite imagery.