i-Perception (May 2012)

What is Binocular Fusion?

  • Stuart Wallis,
  • Mark Georgeson

DOI: https://doi.org/10.1068/id220
Journal volume & issue: Vol. 3

Abstract

When images in the two eyes are sufficiently similar, they are ‘fused’. Fusion has motor (vergence) and sensory components. When vergence is prevented, sensory ‘fusion’ of disparate images still occurs, but the nature of this fusion has received curiously little attention. Summation of signals from the two eyes is fairly well understood and seems the obvious basis for fusion. But summation of disparate edges should cause the fused edge to appear more blurred. We tested this by studying the perceived blur of single edges with vertical disparities that spanned fusion and diplopia. Single, horizontal, Gaussian-blurred edges (blur, B=1.6 to 40 minarc) were presented to each eye at various disparities (0 to 4B), or were added together in the same eye (monoptic control). Perceived blur was measured by blur-matching, using a two-interval forced-choice method. In monoptic conditions, matched blur increased with disparity in the fusional range (0 to 2B) as expected. But, surprisingly, when the two edges were in different eyes (dichoptic), matched blur remained almost constant and did not increase with disparity. This shows that fusion preserves the sharpness or blur of each eye's image, and that fusion cannot easily be explained by summation or arithmetic averaging of spatial signals across the eyes. We show that fusion of this kind occurs if (a) each monocular signal is the spatial derivative (gradient profile) of the input edge, and (b) binocular combination is the contrast-weighted geometric mean of these signals. This achieves positional averaging (‘allelotropia’) without blurring or smearing.
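The closing claim can be checked numerically. In a minimal sketch (not the authors' code), assume each monocular signal is a Gaussian gradient profile (the derivative of a Gaussian-blurred edge) and that the two eyes carry equal contrast, so the contrast-weighted geometric mean reduces to a plain geometric mean. The function names and the parameter values (sigma = 2, disparity of 3 units) are illustrative choices, not from the paper:

```python
import numpy as np

def gradient_profile(x, mu, sigma):
    """Gaussian gradient profile of a blurred edge centred at mu."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

x = np.linspace(-20.0, 20.0, 8001)
sigma = 2.0                               # edge blur (arbitrary units)
left = gradient_profile(x, -1.5, sigma)   # left-eye signal
right = gradient_profile(x, +1.5, sigma)  # right-eye signal, disparate by 3

fused_gm = np.sqrt(left * right)          # geometric-mean combination
fused_am = 0.5 * (left + right)           # arithmetic averaging, for contrast

def centre_and_width(profile):
    """Estimate edge position (mean) and blur (SD) from profile moments."""
    w = profile / profile.sum()
    mu = (x * w).sum()
    sd = np.sqrt(((x - mu) ** 2 * w).sum())
    return mu, sd

mu_gm, sd_gm = centre_and_width(fused_gm)
mu_am, sd_am = centre_and_width(fused_am)
# Geometric mean: centred midway between the eyes' edges, SD unchanged,
# so perceived blur is preserved (positional averaging without smearing).
# Arithmetic mean: also centred midway, but its SD grows with disparity,
# which would predict the extra blur that observers do not report.
```

Analytically, the product of two equal-width Gaussians at positions x1 and x2 is itself a Gaussian of the same width centred at (x1 + x2)/2, so the geometric mean gives allelotropia with no added blur, while the arithmetic mean's variance picks up an extra (disparity/2)^2 term.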