Welcome to the first crowdsourcing evaluation


We aim to compare the behavior of 5 different agents on a set of 4 videos from very different sources, using a crowdsourcing-based evaluation criterion. For each video, we provided a few supervisions on a limited number of classes, which are detailed in the pages describing the agents. We invite users to examine the videos and rate some or all of them with a score ranging from 0 to 5 stars.



Agent code name   Brief description                                                      Current score
CS001             Base architecture
CS002             Base architecture without motion constraints
CS003             Base architecture with more memory for higher layers
CS004             Base architecture which received more supervisions
CS005             Base architecture which processed a longer sequence during learning
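The page does not state how the "Current score" column is computed from the collected ratings. As a minimal sketch, assuming each agent's score is simply the mean of the star ratings users have submitted for it so far, the aggregation could look like the following (the ratings data, the function name current_scores, and the averaging rule are all illustrative assumptions, not the site's actual implementation):

from collections import defaultdict
from statistics import mean

# Hypothetical ratings collected from users: (agent_code, video_id, stars),
# where stars is an integer in the range 0..5 as described above.
ratings = [
    ("CS001", "video1", 4),
    ("CS001", "video2", 3),
    ("CS002", "video1", 2),
    ("CS003", "video3", 5),
]

def current_scores(ratings):
    """Average the star ratings collected for each agent across all videos."""
    by_agent = defaultdict(list)
    for agent, _video, stars in ratings:
        if not 0 <= stars <= 5:
            raise ValueError(f"star rating out of range: {stars}")
        by_agent[agent].append(stars)
    return {agent: mean(stars_list) for agent, stars_list in by_agent.items()}

for agent, score in sorted(current_scores(ratings).items()):
    print(f"{agent}: {score:.2f}")

A plain per-agent mean is only one plausible choice; a live leaderboard might instead weight recent ratings more heavily or require a minimum number of votes before showing a score.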