Research Associateship Programs
Fellowships Office
Policy and Global Affairs



Opportunity at Air Force Research Laboratory (AFRL)

Bio-Inspired Image Fusion

Location

711 Human Performance Wing, RHX/Human Centered ISR Division

RO#: 13.15.13.B5699
Location: Wright-Patterson AFB, OH 45433

Advisers

Name: Warren, Richard
E-mail: richard.warren.5@us.af.mil
Phone: 937.255.9943

Description

Modern information systems produce a variety of imagery that must be integrated and assimilated for proper use. For some time there have been serious attempts to produce algorithms for optimal sensor fusion, but the results have been uninspiring. While such algorithms are essential to autonomous multisensor robotic systems, man-in-the-loop systems might do well simply to optimize the presentation for the human observer. Humans effortlessly and pre-attentively fuse information from many sensory subsystems (e.g., form, color, motion) as long as the presentation modes are compatible. How the human brain accomplishes this “binding” feat is controversial, but there is little doubt that the visual cortex exploits spatiotemporal correlations between the sensory subsystems to be bound together. The goal of this research is to investigate several presentation modes for multisensor imagery that would harness and exploit these powerful human abilities.
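
One reading of “optimize the presentation for the human observer” is to map coregistered sensor bands onto compatible visual channels, such as color, and let the observer’s visual system perform the fusion. The Python sketch below only illustrates that idea under assumed inputs (two spatially aligned single-band images); the function name, channel assignments, and normalization are illustrative assumptions, not details taken from this posting.

# Illustrative sketch (assumptions, not the laboratory's method): present two
# coregistered single-band sensor images to a human observer as one
# false-color image, mapping each band to its own color channel.
import numpy as np

def fuse_for_display(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Return an RGB image with band_a in the red channel and band_b in green.

    Both inputs are assumed to be spatially aligned 2-D arrays of equal shape.
    """
    def normalize(img: np.ndarray) -> np.ndarray:
        img = img.astype(np.float64)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

    a, b = normalize(band_a), normalize(band_b)
    rgb = np.zeros(a.shape + (3,))
    rgb[..., 0] = a   # e.g., a thermal band shown in red
    rgb[..., 1] = b   # e.g., a visible band shown in green
    return rgb

In such a display, structure common to both bands appears as correlated variation across color channels, the kind of correlation the visual system is described above as binding pre-attentively.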

For example, humans have an astonishing ability to derive a coherent view of the world from limited correlation information. There are many examples. (1) Random dot images containing hidden disparity correlations evoke a vivid sense of depth (random dot stereopsis). (2) Orientation correlations between random dots, created by dilating or rotating a copy of a dot pattern relative to the original, give a strong impression of radial or circular symmetry (Glass patterns). (3) Point-light walkers, created by encoding just a few points on the joints of otherwise invisible actors, yield an immediate sense of the identity and actions of those actors; from correlated motion alone, observers can discriminate gender, distinguish sham weightlifting from the real exercise, and readily identify individuals locked together in dance. We believe that this inherent visual information-processing capability could be exploited for bio-inspired sensor data fusion.
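
As a concrete illustration of example (2), the following Python sketch builds a Glass pattern by pairing a random dot field with a slightly rotated copy of itself; the dot count, rotation angle, and use of numpy/matplotlib are choices made for the demonstration, not details from this posting.

# Minimal sketch: a Glass pattern formed by superimposing a random dot field
# and a copy of that field rotated a few degrees about the center. The only
# structure is the pairwise dot correlation, yet the pattern is perceived as
# concentric (circular) flow.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_dots = 1500
theta = np.deg2rad(4)                 # rotation between original and copy

# Original dots, uniformly distributed over a unit disc centered at the origin
r = np.sqrt(rng.uniform(0.0, 1.0, n_dots))
phi = rng.uniform(0.0, 2.0 * np.pi, n_dots)
x, y = r * np.cos(phi), r * np.sin(phi)

# Copy: the same dots rotated by theta about the center
xc = x * np.cos(theta) - y * np.sin(theta)
yc = x * np.sin(theta) + y * np.cos(theta)

plt.figure(figsize=(5, 5))
plt.scatter(np.r_[x, xc], np.r_[y, yc], s=2, c="k")
plt.axis("equal")
plt.axis("off")
plt.show()

Although each dot pair carries only a tiny orientation cue, the rendered field is immediately seen as circular structure, the kind of correlation-driven percept this research proposes to harness for multisensor displays.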

 

Keywords:
Vision; Sensor fusion; Data fusion

Eligibility

Citizenship:  Open to U.S. citizens
Level:  Open to Postdoctoral and Senior applicants