Description of the first crowdsourcing evaluation 


We aim to compare the behavior of 5 different agents on a set of 4 videos coming from very different sources, using a crowdsourcing-based evaluation criterion.

  • We gave very few supervisions, on a limited number of classes! It is therefore very hard to identify all the instances of each object.
  • The supervisions fed to the agents were only positive! This means that the agents do not know that a COAT is not a HAT.
  • The agents have to process each frame within a limited amount of time! If the scene is too complicated, the agents will do their best (see the time-budget sketch after this list).
  • All our agents run on ordinary servers (Intel(R) Core(TM) i7-3970X CPU @ 3.50GHz)
  • We evaluate the agents on video sequences that were not used when providing supervisions!
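
To make the per-frame time budget concrete, here is a minimal sketch (in Python) of a time-budgeted processing loop. The budget value and the agent methods (propose_regions, score_region) are hypothetical illustrations, not the actual agent code.

    import time

    FRAME_BUDGET_S = 0.04  # hypothetical per-frame budget (e.g. 25 fps)

    def process_frame(agent, frame, budget_s=FRAME_BUDGET_S):
        """Run the agent on one frame, stopping when the time budget is spent."""
        deadline = time.monotonic() + budget_s
        detections = []
        for candidate in agent.propose_regions(frame):   # hypothetical agent API
            if time.monotonic() >= deadline:
                break                                    # budget exhausted: return what we have
            detections.append(agent.score_region(frame, candidate))
        return detections
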

We invite users to examine the videos and rate some or all of them with a score ranging from 0 to 5 stars.
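
As an illustration of how such star ratings could be aggregated, here is a minimal sketch that averages the 0-5 star scores per agent. The (agent, video, stars) data layout is an assumption for the example; the actual aggregation used in the evaluation may differ.

    from collections import defaultdict

    def mean_stars(ratings):
        """ratings: iterable of (agent, video, stars) tuples, stars in 0..5."""
        totals = defaultdict(lambda: [0.0, 0])  # agent -> [sum of stars, count]
        for agent, _video, stars in ratings:
            totals[agent][0] += stars
            totals[agent][1] += 1
        return {agent: s / n for agent, (s, n) in totals.items()}

    # Example: two users rated agent "A" on video "v1" with 4 and 5 stars.
    print(mean_stars([("A", "v1", 4), ("A", "v1", 5), ("B", "v2", 3)]))
    # -> {'A': 4.5, 'B': 3.0}
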

Start rating now!