Sunday, October 17, 2010

Reading #14. Using Entropy to Distinguish Shape Versus Text in Hand-Drawn Diagrams (Bhat)

Comment
Jonathan Hall
Summary
The paper aims to differentiate shapes from text in hand-drawn diagrams using entropy. The algorithm is context-free: it relies only on features of the strokes themselves rather than on the surrounding context.

The only feature used for classification is the angle at each stroke point. Each angle is mapped to one of seven symbols according to its value, the information entropy of the resulting symbol string is computed, and the result is normalized by the diagonal of the bounding box. Thresholds then determine which strokes are shapes, which are text, and which remain unclassified. A confidence measure is also provided so the algorithm can be easily integrated into other systems.
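To make the pipeline concrete, here is a minimal sketch of the idea in Python. The seven angle bins, the threshold values, and the helper names are illustrative assumptions, not the exact values or code from Bhat's paper.

```python
import math
from collections import Counter

def turning_angles(points):
    """Angle change (degrees) at each interior point of a stroke given as (x, y) tuples."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        angles.append(abs(math.degrees(a2 - a1)) % 360)
    return angles

def encode(angle, bins=(30, 60, 90, 120, 150, 180)):
    """Map an angle to one of seven symbols (0-6); bin edges are placeholders."""
    for i, edge in enumerate(bins):
        if angle < edge:
            return i
    return len(bins)

def normalized_entropy(points):
    """Shannon entropy of the symbol string, normalized by the bounding-box diagonal."""
    symbols = [encode(a) for a in turning_angles(points)]
    counts = Counter(symbols)
    total = len(symbols)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    xs, ys = zip(*points)
    diagonal = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return entropy / diagonal if diagonal else 0.0

def classify(points, text_threshold=0.05, shape_threshold=0.02):
    """Text strokes are high-entropy, shapes low-entropy; in-between stays unclassified."""
    h = normalized_entropy(points)
    if h >= text_threshold:
        return "text"
    if h <= shape_threshold:
        return "shape"
    return "unclassified"
```

In practice the thresholds would be tuned on labeled data, and the gap between them is what produces the unclassified strokes discussed below.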

The algorithm is evaluated on both seen and unseen datasets. Accuracy on the seen datasets is much higher than Patel's method, and accuracy on the unseen datasets is also higher. The authors also try GZIP compression as an entropy estimate in place of zero-order entropy, and find it useful when dealing with repeated patterns.
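As a rough sketch of the compression-based alternative: the compressed size of the symbol string can serve as an entropy estimate, since repeated patterns compress well and therefore yield a lower value than a zero-order frequency count would. The byte encoding below is an assumption for illustration.

```python
import zlib

def gzip_entropy(symbols):
    """Approximate entropy in bits per symbol from the DEFLATE-compressed size."""
    data = bytes(symbols)              # symbols assumed to be small ints (0-6)
    compressed = zlib.compress(data, 9)
    return 8 * len(compressed) / max(len(data), 1)
```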

Discussion
This is a great paper, published at IJCAI 09. The idea is creative and achieves high accuracy. The entropy feature is more intuitive than gesture-based features such as Rubine's. The approach works well for diagram recognition because shapes in diagrams tend to be more regular than in other domains; it would likely not work as well for music notation, where the symbols are often not as regular as diagram shapes. Still, it is sufficient for us to implement in our Project 2.

Also, the authors leave some strokes unclassified, which reduces the error rate and defers those strokes to higher-level recognition. This provides an interface for integrating other algorithms. As the paper notes, the more strokes are left unclassified, the higher the accuracy on the rest, but we cannot leave too many strokes unclassified.

1 comment:

  1. I'm just wondering, did you actually use this method for Project 2? When it comes to complex shapes, the entropy tends to become similar even to that of a simple character set, and I'm not sure how successful this method would be on complex COA diagrams!
