Figure 10 | BMC Neuroscience

From: A neural computational model for bottom-up attention with invariant and overcomplete representation

Two examples in which our model makes distinctions that the baseline networks miss. (a1) Source image. (b1) Pooling result of the fully connected network, which does not discriminate between orientations but does show intensity differences; after the surround inhibition shown in (c1), only the region with the strongest intensity contrast remains in the saliency map (d1). (e1) Pooling result of one of the random groups in the randomly connected network, similar to the fully connected case; (f1) surround inhibition applied to (e1); (g1) the resulting saliency map. (h1), (i1) Two pooling results from our model, each specific to a particular orientation; (j1), (k1) the corresponding surround inhibition; (l1) the final saliency map; (m1) human eye tracking. (a2) Source image. (b2) Pooling result of the fully connected network, in which the saliency of the pedestrian is too weak to be distinguished from the cluttered background; after the surround inhibition shown in (c2), only the region with the strongest intensity contrast remains in the final saliency map (d2). (e2) Pooling result of one of the random groups in the randomly connected network, again similar to the fully connected case; (f2) surround inhibition applied to (e2); (g2) the resulting saliency map. (h2), (i2) Two pooling results from our model, each specific to a particular orientation; here the saliency of the pedestrian is strong enough to pop out; (j2), (k2) the corresponding surround inhibition; (l2) the final combined saliency map; (m2) human eye tracking.
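The caption describes a common pipeline: a pooled feature map is passed through surround inhibition so that only regions with strong local contrast survive, and the per-orientation inhibited maps are then combined into a final saliency map. The sketch below is not the paper's exact model; it is a minimal illustration that assumes surround inhibition can be approximated by a difference-of-Gaussians (the `sigma_center`, `sigma_surround`, and toy input are illustrative choices, not values from the article).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def surround_inhibition(feature_map, sigma_center=1.0, sigma_surround=4.0):
    """Center-surround inhibition sketched as a difference of Gaussians:
    each location's response is suppressed by its smoothed neighborhood,
    so only regions with strong local contrast survive."""
    center = gaussian_filter(feature_map, sigma_center)
    surround = gaussian_filter(feature_map, sigma_surround)
    return np.maximum(center - surround, 0.0)  # half-wave rectify

def combine_saliency(inhibited_maps):
    """Sum the per-orientation inhibited maps and normalize to [0, 1]."""
    s = np.sum(inhibited_maps, axis=0)
    return s / s.max() if s.max() > 0 else s

# Toy pooled response: weak cluttered background plus one strong patch
# (standing in for the "pop-out" target such as the pedestrian).
rng = np.random.default_rng(0)
pooled = 0.1 * rng.random((64, 64))
pooled[30:34, 30:34] += 1.0  # salient patch

sal = combine_saliency([surround_inhibition(pooled)])
```

After inhibition, the background noise is largely cancelled (center and surround averages nearly coincide there), while the high-contrast patch dominates the combined map, mirroring the pop-out behavior the caption describes for panels (h2)-(l2).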