# Action-Recognition

## Action Recognition on KTH Dataset

1. I used the STIP binaries found [here](https://www.di.ens.fr/~laptev/download.html#stip) to extract the STIPs with the HOG/HOF descriptors.
2. The extracted descriptors are then clustered (using k-means with N clusters) to form a visual codebook with N words.
3. A bag of words is then constructed for each example (video sequence) based on the occurrences of the codewords in that example.
4. The examples (an (N+1)-vector: BoW histogram + label) are then classified using a multi-class non-linear SVM.
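Steps 2 and 3 above can be sketched as follows. This is a minimal illustration, not the repository's actual code: it assumes scikit-learn for k-means, uses random arrays in place of real STIP output, and the 162-dimensional descriptor size reflects Laptev's HOG/HOF descriptors (72 HOG + 90 HOF bins); the function names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_descriptors, n_clusters):
    """Cluster descriptors pooled from all training videos into a visual codebook."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(all_descriptors)
    return km

def bow_histogram(codebook, video_descriptors):
    """Quantize one video's descriptors against the codebook and return
    a normalized N-bin bag-of-words histogram."""
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Toy stand-in for real STIP HOG/HOF descriptors (162-D each).
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(300, 162))   # descriptors pooled over training videos
codebook = build_codebook(train_desc, n_clusters=10)
h = bow_histogram(codebook, rng.normal(size=(40, 162)))  # one video's BoW vector
```

In practice N is in the thousands (see the results table below), so the k-means step dominates preprocessing time.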

<img src="/images/pipeline.png" width="400" height="200">

### Results

I used an RBF kernel for the SVM, with parameters gamma = 0.0002 and C = 2.
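A multi-class non-linear SVM with these exact parameters could be set up as below. This is a hedged sketch, not the repository's code: it assumes scikit-learn (whose `SVC` handles multi-class via one-vs-one automatically) and substitutes random histograms for the real KTH BoW vectors.

```python
import numpy as np
from sklearn.svm import SVC

# Toy BoW vectors standing in for the real KTH histograms.
rng = np.random.default_rng(1)
X = rng.random((60, 1000))           # 60 videos, 1000-word histograms
y = rng.integers(0, 6, size=60)      # 6 KTH action classes

# RBF-kernel SVM with the parameters stated above.
clf = SVC(kernel="rbf", gamma=0.0002, C=2)
clf.fit(X, y)
pred = clf.predict(X)
```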

| Settings | HoF with 1000 clusters | HoG/HoF with 3000 clusters | HoG/HoF with 4000 clusters |
|----------|:----------------------:|:--------------------------:|:--------------------------:|
| Accuracy | 88.98%                 | 90.07%                     | 83.89%                     |

Note: [HoF with 1000 clusters] was by far the fastest; it achieved a 500% gain in speed compared with [HoG/HoF with 3000 clusters].

#### Confusion Matrix of HoG/HoF with 3000 clusters
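A confusion matrix like the one reported here can be computed from the SVM's predictions; the sketch below assumes scikit-learn and uses tiny made-up label arrays (the class names are the six actual KTH actions, but the `y_true`/`y_pred` values are illustrative only).

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# The six KTH action classes.
classes = ["boxing", "handclapping", "handwaving",
           "jogging", "running", "walking"]

# Illustrative labels: jogging and running are confused once each,
# a typical failure mode on KTH.
y_true = np.array([0, 1, 2, 3, 4, 5, 3, 4])
y_pred = np.array([0, 1, 2, 4, 3, 5, 3, 4])

# cm[i, j] counts videos of true class i predicted as class j.
cm = confusion_matrix(y_true, y_pred, labels=range(len(classes)))
```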

## References

- On Space-Time Interest Points. [Ivan Laptev, 2004] [PDF]
- Evaluation of local descriptors for action recognition in videos. [Piotr Bilinski and Francois Bremond, 2009] [PDF]
- My Bachelor's thesis, "Smart Airport Surveillance System" [PDF]
