Visual perception

Perception 1 is focused on:
perception, motion and action
1 of 259
light enters the eyes from
the illuminated environment
2 of 259
perception starts when
light from the illuminated environment meets our eyes
3 of 259
perception models main question
top-down or bottom-up?
4 of 259
bottom-up processing is assumed to be a
serial process
5 of 259
what is a serial process?
next step cannot start until current is finished
6 of 259
perception model; bottom-up? stimulus -> attention ->
perception -> thought process -> decision -> action/response
7 of 259
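To make the 'serial' idea concrete, here is a minimal sketch (my own illustration, not a published model) of the bottom-up pipeline above: each stage only receives the finished output of the previous one, so the next step cannot start until the current one is complete.

```python
def bottom_up(stimulus):
    stages = [
        ("attention",       lambda s: f"attended({s})"),
        ("perception",      lambda s: f"percept({s})"),
        ("thought process", lambda s: f"interpretation({s})"),
        ("decision",        lambda s: f"decision({s})"),
        ("action/response", lambda s: f"response({s})"),
    ]
    out = stimulus
    for name, stage in stages:
        out = stage(out)   # strictly serial: no later stage starts early
    return out

print(bottom_up("stimulus"))
```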
What is said about top-down processing
sometimes our expectations and knowledge influence cognition
8 of 259
rather than?
the stimulus itself
9 of 259
top down processing example - the letter 'B'
if we do not get the full picture: if there are letters around it we read it as a 'B', but if there are numbers around it we read it as '13'
10 of 259
top-down or bottom up? questions?
is perception indirect? is perception not driven entirely by the stimulus properties?
11 of 259
additionally,
does perception depend on internal processes? do we need to 'reconstruct' the external environment?
12 of 259
constructivist approach is
top-down
13 of 259
it is the notion that perception is
the end result of a process which begins with sensory stimulation and involves interpreting the information
14 of 259
thus perception is
indirect and relies on internal processes
15 of 259
Richard Gregory's theory is good at explaining
illusions - Eye and Brain: The Psychology of Seeing (1966)
16 of 259
Constructivist approach; sensation ->
interpretation/ inference -> perception
17 of 259
direct perception can also be called?
ecological psychology
18 of 259
it is the idea that awareness of the world (objects, patterns etc.) is
essentially determined by the information presented to the senses
19 of 259
thus, perception is a direct process based on
sensory information
20 of 259
this is outlined in
Gibson's 'The Ecological Approach to Visual Perception'
21 of 259
James J Gibson argued for a more
comprehensive view of the visual system
22 of 259
why?
because it has been evolved to allow us to interact with the environment
23 of 259
what is the mainstream psychology view in perception?
the function of our visual system is object recognition
24 of 259
For Gibson, there is no perception without
action and no action without perception
25 of 259
Gibson's approach was/ still is
very radical
26 of 259
It's all about surfaces and
textures
27 of 259
Gibson's work started in WWII when he was asked to train
pilots quickly
28 of 259
he was asked to distinguish potential
from non-potential pilots prior to training
29 of 259
what's the most difficult part of flying?
take off and landing
30 of 259
why? to land, you need to know where you are relative to
the airstrip, your angle of approach, and how to modify that angle so that you can aim correctly
31 of 259
therefore, what must be important?
depth perception
32 of 259
but tests based on pictorial cues of depth didn't
work
33 of 259
Surfaces VS
plane (Gibson, 1979)
34 of 259
plane (e.g. horizontal plane) is the
abstract notion of a flat surface
35 of 259
it lacks some of the qualities of
a textured surface
36 of 259
such as: a surface is substantial and
is never perfectly transparent
37 of 259
a surface can be seen, while
a plane can only be visualised
38 of 259
there is structure in
the surfaces that exist in our environment
39 of 259
which structures the
light that reaches the observer
40 of 259
it is not the mere stimulation by light that
leads to perception; it is the structure of light
41 of 259
Textures - Gibson started to suggest that we shouldn't focus on
depth or space perception
42 of 259
but instead?
on the perception of surfaces in the environment
43 of 259
textured surfaces are all around us (pebbles, sand, grass etc.) and
provide useful information
44 of 259
about?
distance & depth, shape & slant, layout of objects
45 of 259
how do shape & slant appear in everyday life?
we can see stairs; this info comes from their shape
46 of 259
textures should give
layouts of objects
47 of 259
optic array is the pattern of light
reaching the eye
48 of 259
why?
it is structured and contains information about the environment
49 of 259
what is a vital property of the optic array?
it will transform as the observer moves
50 of 259
this provides information about both
the layout and shapes of objects
51 of 259
and about the
observer's movement relative to the world
52 of 259
when the person moves, the whole
optic array changes with them
53 of 259
by sitting up we can change the way
the light reaches our eyes
54 of 259
Example of transformation - the fence
each segment represents us standing at a different angle, but it is still the same fence
55 of 259
in transformation - the relative motions in the optic array correlate with
the layout of the objects
56 of 259
optic flow is the apparent
motion of stationary objects and surfaces relative to a mobile observer
57 of 259
optic flow and the focus of expansion - as proposed by
Gibson
58 of 259
Gibson proposed how we can use
optic flow:
59 of 259
he said that when we move in a straight path,
all objects & surfaces appear to expand in a radial pattern
60 of 259
the current direction of the observer is the
origin of this expansion: a single point
61 of 259
the single point is called
the focus of expansion
62 of 259
during forward movements,
the focus of expansion indicates the direction of travel
63 of 259
in this way, the animal can
see where it is going
64 of 259
to change direction, the animal can
reposition the focus of expansion in that direction
65 of 259
how?
because the focus of expansion always coincides with the direction of instantaneous heading
66 of 259
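The radial expansion Gibson describes follows from simple projective geometry. The sketch below is my own illustration (it uses the standard flow equations for a translating observer, not anything from Gibson's text, and assumes numpy is available): it computes the image motion of random stationary points for an eye moving forward and slightly rightward, and checks that every flow vector points away from a single focus of expansion lying in the heading direction.

```python
import numpy as np

def image_flow(points, velocity, f=1.0):
    """Image positions and instantaneous motion of static 3-D points for a translating eye."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    Tx, Ty, Tz = velocity
    x, y = f * X / Z, f * Y / Z                 # perspective projection
    dx = (x * Tz - f * Tx) / Z                  # translational flow components
    dy = (y * Tz - f * Ty) / Z
    return np.column_stack([x, y]), np.column_stack([dx, dy])

rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-5, 5, 200),
                       rng.uniform(-5, 5, 200),
                       rng.uniform(2, 20, 200)])   # points in front of the eye
heading = np.array([0.2, 0.0, 1.0])                # moving forward, slightly rightward
xy, flow = image_flow(pts, heading)

foe = heading[:2] / heading[2]                     # focus of expansion = projected heading
radial = xy - foe                                  # direction away from the FoE
cos = np.sum(radial * flow, axis=1) / (
    np.linalg.norm(radial, axis=1) * np.linalg.norm(flow, axis=1))
print(foe, cos.min())    # cos ~ 1 for every point: all flow radiates from the FoE
```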
flow helps us to know
where we are going
67 of 259
optic flow helps us to know
whether we are going left or right
68 of 259
What is gaze?
eye + head position
69 of 259
with mobile gaze the focus of expansion doesn't provide
information about the direction of heading since it is displaced due to eye-movements
70 of 259
optic or retinal flow? Regan and
Beverley (1982) introduced retinal flow
71 of 259
this describes the
pattern that is actually available at the retina
72 of 259
can we decompose the retinal flow pattern to
access info about our instantaneous heading through the focus of expansion
73 of 259
two ways have been proposed, 1?
use decomposition algorithms to recover an estimate of linear heading
74 of 259
2?
use the information from efferent signals already known to the system to subtract the rotational component introduced by eye or neck movements
75 of 259
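Strategy 2 can be made concrete with the standard two-component description of flow (a depth-dependent translational part plus a depth-independent rotational part). The sketch below is illustrative only, not a published algorithm; the rotation value stands in for what an efference copy of the eye-movement command would supply.

```python
import numpy as np

def translational_flow(x, y, Z, T, f=1.0):
    Tx, Ty, Tz = T
    return np.array([(x * Tz - f * Tx) / Z,
                     (y * Tz - f * Ty) / Z])

def rotational_flow(x, y, R, f=1.0):
    wx, wy, wz = R        # eye/head rotation rates (rad/s)
    return np.array([(x * y / f) * wx - (f + x**2 / f) * wy + y * wz,
                     (f + y**2 / f) * wx - (x * y / f) * wy - x * wz])

x, y, Z = 0.3, -0.2, 10.0
T = (0.0, 0.0, 1.0)       # forward locomotion
R = (0.0, 0.05, 0.0)      # smooth pursuit eye movement, known via the efferent signal

retinal = translational_flow(x, y, Z, T) + rotational_flow(x, y, R)

# Subtract the rotation predicted from the efferent signal to recover the
# heading-related (translational) flow at this image point.
recovered = retinal - rotational_flow(x, y, R)
print(np.allclose(recovered, translational_flow(x, y, Z, T)))   # True
```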
The case for mobile gaze - how many points?
4
76 of 259
1?
having mobile gaze = not necessarily detrimental to locomotion
77 of 259
2? when rotational component is
added to the optic flow field by executing eye-movements, direction of heading is still judged accurately
78 of 259
3? Wilkie and Wann showed
when driving in a simulated environment, P's performed better when allowed to use natural eye-movements as opposed to visually tracking the middle of the road
79 of 259
4? Wann, et al., went further and
suggested we don't need to retrieve heading at all, we can use retinal flow to get the direction of our future path
80 of 259
flow equalization theory suggests that optic flow
is not just used for heading
81 of 259
the case with
honeybees
82 of 259
as they tend to
fly in the middle of narrow gaps
83 of 259
Srinivasan et al trained honeybees to
fly in the middle of a patterned tunnel that led to food
84 of 259
the walls of the tunnel could move - this provided
different flow speeds at each side of the tunnel
85 of 259
when motion was introduced to one of the tunnel walls
honeybees altered their flying pattern moving towards the side that appeared slow
86 of 259
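A toy model of this flow-equalisation (centring) behaviour, with invented numbers and control gain: the agent compares the image speed of the two walls and steers away from the side with faster flow, which drives it to the middle of the tunnel.

```python
def lateral_flow_speed(forward_speed, distance_to_wall):
    # Angular image speed of wall texture grows as the wall gets closer.
    return forward_speed / distance_to_wall

def steering_step(pos, tunnel_width=1.0, forward_speed=1.0, gain=0.05):
    left = lateral_flow_speed(forward_speed, pos)                  # distance to left wall = pos
    right = lateral_flow_speed(forward_speed, tunnel_width - pos)  # distance to right wall
    # If the left wall streams past faster, move right (increase pos), and vice versa.
    return gain * (left - right)

pos = 0.2   # start close to the left wall
for _ in range(200):
    pos += steering_step(pos)
print(round(pos, 3))   # converges to ~0.5: the middle of the tunnel
```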
Tammero & Dickinson, 2002 - flow equalization works on straight paths, but
curvilinear trajectories are inherently asymmetrical
87 of 259
flow averaging - in a study where optic flow asymmetries on a
curved trajectory were introduced
88 of 259
instead of P's steering towards the slower-moving side, they
averaged flow speed from the two sides to derive a global flow speed estimate (Kountouriotis et al., 2016)
89 of 259
when slow-moving vectors are occluded we perceive the overall flow to be
moving faster
90 of 259
when the faster-moving points are occluded, we perceive the overall flow to be
moving slower
91 of 259
why?
as we only rely on the central, slower-moving parts
92 of 259
another use of the optic flow field was proposed in 1976 by?
Lee
93 of 259
what is this used to calculate?
the time-to-contact with an object or surface - including the strategy used by gannets
94 of 259
we could divide our estimate of an object's distance by our
estimate of the object's speed
95 of 259
however this info is not
readily available to us
96 of 259
Tau overcomes this since it uses the size of
the retinal image of the object
97 of 259
divided by
its rate of expansion
98 of 259
this means that the faster it expands -
the less time there is to contact
99 of 259
time-to-contact uses the size of the retinal image of the object
divided by its rate of expansion
100 of 259
the faster it expands =
the less time there is to contact
101 of 259
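An illustrative worked example of tau (the numbers are invented): the eye never needs the object's distance or speed, only the current image size and how fast that image is growing.

```python
def tau(image_size_deg, expansion_rate_deg_per_s):
    """Estimated time-to-contact (seconds) = retinal image size / rate of expansion."""
    return image_size_deg / expansion_rate_deg_per_s

# An approaching object whose image subtends 2 degrees and grows at 0.5 deg/s:
print(tau(2.0, 0.5))   # 4.0 seconds to contact

# Dividing distance by speed (e.g. 8 m away, closing at 2 m/s) gives the same
# answer, but that information is not directly available in the optic array.
print(8.0 / 2.0)
```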
Affordance - the end product of perception is not
an internal representation of the visual world
102 of 259
but the detection of
affordance
103 of 259
i.e. what does this surface or object
offer to the animal?
104 of 259
all of the potential uses (affordances) of an object are
directly perceivable
105 of 259
objects often can have
several affordances
106 of 259
current psychological state determines
behaviour
107 of 259
different species will perceive different
affordances from the same objects
108 of 259
Affordances example - Warren 1984
showed P's pictures of stairs with differently proportioned steps
109 of 259
then asked P's whether
steps were climbable or unclimbable
110 of 259
Warren 1984 - 24 P's divided into groups;
a short group (mean height 5ft 4in) and a tall group (mean height 6ft 2in)
111 of 259
both groups judged stairways as
unclimbable at a riser height in constant proportion to their leg lengths
112 of 259
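A sketch of Warren's body-scaled result: the critical riser-height/leg-length ratio was roughly the same for both groups, so the same step can afford climbing for a tall observer but not a short one. The ratio is often cited as about 0.88, but treat that value and the leg lengths below as illustrative rather than exact.

```python
CRITICAL_RATIO = 0.88   # illustrative critical riser-height / leg-length ratio

def climbable(riser_height_m, leg_length_m, critical_ratio=CRITICAL_RATIO):
    return riser_height_m / leg_length_m < critical_ratio

short_leg, tall_leg = 0.76, 0.94    # made-up leg lengths for the two groups
for riser in (0.50, 0.70, 0.90):
    print(riser, climbable(riser, short_leg), climbable(riser, tall_leg))
# 0.70 m: unclimbable for the short group but climbable for the tall group.
```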
Another affordance example comes from
Will et al 2013
113 of 259
what did they do?
showed P's pics of objects with similar shape but differing in graspability
114 of 259
P's were asked to
lift their arm to perform a reach-like movement
115 of 259
the onset of the muscular activity was
faster for graspable objects than non-graspable ones
116 of 259
the affordance of graspability is known to
the motor system
117 of 259
summary of VP1... info we pick up from textures/surfaces e.g.
WWII pilots, depth perception, distance & depth, shape & slant and layout of objects = important
118 of 259
summary of VP1... how optic flow can be used to judge our heading & influence trajectories -
OF; during forward movement the focus of expansion indicates direction of travel; to change direction, reposition the focus of expansion in that direction
119 of 259
summary of VP1... term affordance from a direct perception perspective -
all of potential uses (affordance) of objects = directly perceivable, diff species perceive diff affordances from the same objects
120 of 259
Perception 2:
face recognition
121 of 259
face recognition is the most common way of
identifying people
122 of 259
face recognition differs from
other forms of object recognition
123 of 259
prosopagnosic P's unable to
recognise familiar faces
124 of 259
this can even extend to
their own face in a mirror
125 of 259
however, they have some ability to
recognise familiar objects
126 of 259
the inability to recognise faces doesn't occur because
they have forgotten the people concerned
127 of 259
as they can still recognise
voices and names
128 of 259
how many reasons have been suggested for prosopagnosia?
2
129 of 259
1?
precise discrimination
130 of 259
PD: it has been suggested these P's have problems in recognising faces simply because
more precise discriminations are required to recognise differences between faces than differences between objects, e.g. a chair and a table
131 of 259
2?
specific processing mechanisms involved in face recognition
132 of 259
SPM: De Renzi (1986) - prosopagnosic patient who was very good at making fine discriminations, e.g. between Italian coins, but
unable to recognise family and friends by sight
133 of 259
Ellis and Young 1988 suggested
there are face-specific processes
134 of 259
Sergent, Ohta and MacDonald (1992) - P's categorised objects as
living/natural VS non-living/man-made, or categorised well-known faces as belonging to actors or non-actors
135 of 259
found; brain areas specifically active in face identification tended to be
located forward of (anterior to) those active in object recognition
136 of 259
also discovered; several areas in the right hemisphere - more active in
face identification than object
137 of 259
configurational information - when we recognise a face in a photograph there are
2 major kinds of information we might use:
138 of 259
1...
information about individual features e.g. eye colour
139 of 259
2...
information about configuration or overall arrangement of the features
140 of 259
many approaches to face recognition are based on
a feature approach
141 of 259
Young, Hellawell and Hay (1987) constructed faces from photographs by
combining the top halves and bottom halves of different famous faces
142 of 259
when the two halves were closely aligned, P's
experienced great difficulty in naming the top halves
143 of 259
however, their performance was much better when
2 halves weren't closely aligned
144 of 259
presumably - close alignment produced
new configuration which interfered with face recognition
145 of 259
Searcy and Bartlett 1996 - reported face processing is not
solely configurational
146 of 259
and that facial distortions in photos were produced in how many different ways?
2
147 of 259
1?
configural distortions - e.g. moving the eyes up and mouth down
148 of 259
2?
component distortions - e.g. blurring the pupils of the eyes to produce cataracts, blackening some teeth and discolouring the remaining teeth
149 of 259
the photos were then presented upright or
inverted
150 of 259
and the P's gave them
grotesqueness ratings on a 7-point scale
151 of 259
the findings suggest that
component distortions are readily detected in both upright and inverted faces
152 of 259
whereas, configural distortions are often
not detected in inverted faces
153 of 259
thus meaning?
configurational and component processing can both be used with upright faces
154 of 259
but the processing of inverted faces is
largely limited to component processing
155 of 259
most research on face recognition has
used photos or other 2D stimuli
156 of 259
there are at least how many potential limitations of such research?
2
157 of 259
1?
viewing an actual 3-D face provides more info for the observer than does a 2-D image
158 of 259
2?
people's faces are normally mobile, registering emotional states, agreement or disagreement with what is being said, and so on
159 of 259
none of these dynamic changes over time is
available in photos
160 of 259
Bruce and Valentine (1988) - small illuminated
lights were spread over a face, then filmed in the dark so only lights could be seen
161 of 259
P's showed some ability to determine the sex and
identity of each face on the basis of movements of the lights
162 of 259
they were also very good at
identifying expressive movements (such as smiling or frowning)
163 of 259
Models of facial recognition - 2 major theorists
Bruce & Young 1986, Burton & Bruce 1993
164 of 259
B&Y 1986 - there are
major differences in the processing of familiar and unfamiliar faces
165 of 259
familiar faces?
primarily depends on structural encoding, face recognition units, person identity nodes, and name generation
166 of 259
unfamiliar faces?
involves structural encoding, expression analysis, facial speech analysis, and directed visual processing
167 of 259
they argued that
there are several different types of info that can be obtained from faces, corresponding to the 8 components of their model:
168 of 259
it consists of;
structural encoding, expression analysis, facial speech analysis, directed visual processing, face recognition units, person identity nodes, name generation, cognitive system
169 of 259
structural encoding?
produces various representations or descriptions corresponding roughly to those identified within Marr's (1982) model
170 of 259
expression analysis?
an individual's emotional state can be inferred from analysis of their facial features
171 of 259
facial speech analysis?
speech perception can be facilitated by detailed observation of the speaker's lip movements
172 of 259
directed visual processing?
for certain purposes (e.g. deciding whether psychologists have beards) specific facial information may be processed selectively
173 of 259
face recognition units?
each face recognition unit contains structural information about one of the faces known to the viewer
174 of 259
person identity nodes
provide info about the person concerned e.g. interests, friends, contexts in which encountered
175 of 259
name generation:
person's name stored separately from other info
176 of 259
cognitive system
contains additional info e.g. that actors/ actresses usually have attractive faces
177 of 259
the cognitive system also plays an important part in
determining which component/s of the system receive attention
178 of 259
Bruce 1988 - evidence
lab studies on normal individuals, cognitive neuropsychology investigations of brain-damaged P's, and diary studies
179 of 259
Malone, Morris, Kay and Levin (1982) - if it were possible to find
P's who show good recognition of familiar faces
180 of 259
but poor recognition of
unfamiliar faces
181 of 259
and another patient who showed
the opposite pattern
182 of 259
this would provide strong evidence that
the processes involved in the recognition of familiar and unfamiliar faces = different
183 of 259
so what did they do?
obtained evidence in line with these predictions
184 of 259
they tested 1 P who showed
reasonable ability to recognise photographs of famous statesmen
185 of 259
how many correct?
14/17
186 of 259
but he was severely impaired in a task matching
unfamiliar faces
187 of 259
2nd P = quite different -
performed at normal level on matching unfamiliar faces
188 of 259
but had great difficulty in recognising
faces of famous people (5/22)
189 of 259
an important assumption of the model is that
the name generation component can be accessed only via the appropriate person identity node
190 of 259
the model predicts that one should never be able to
put a name to a face without at the same time having other available info about the person
191 of 259
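A toy sketch of why this prediction follows from the model's architecture (my own illustration, not Bruce & Young's formalism; the stored data are invented): the name store sits behind the person identity node, so the only route to a name runs through the person's semantic information.

```python
FACE_RECOGNITION_UNITS = {"face_042": "margaret_thatcher"}        # structural codes -> person
PERSON_IDENTITY_NODES = {"margaret_thatcher": {"occupation": "politician",
                                               "nationality": "British"}}
NAMES = {"margaret_thatcher": "Margaret Thatcher"}

def recognise(structural_code):
    person = FACE_RECOGNITION_UNITS.get(structural_code)          # FRU stage
    if person is None:
        return "unfamiliar face", None, None
    semantics = PERSON_IDENTITY_NODES.get(person)                 # PIN stage
    if semantics is None:
        return "familiar, but no info comes to mind", None, None  # FRU fired, PIN did not
    name = NAMES.get(person)                                      # name generation stage
    return "recognised", semantics, name                          # a name always arrives with semantics

print(recognise("face_042"))
print(recognise("face_999"))   # unfamiliar: processing stops at the FRU stage
```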
what does this explain?
why people frequently forget names
192 of 259
Young, Hay and Ellis (1985) - asked P's to keep a
diary record of specific face recognition problems experienced day-to-day
193 of 259
how many incidents?
1008 altogether
194 of 259
not once did a subject report
putting a name to a face while knowing nothing else about the person
195 of 259
there were 190 occasions on which the subject could
remember a fair amount of info about the person but was unable to remember their name
196 of 259
most brain damaged P's who cannot put
names to faces have great difficulty in naming ordinary objects
197 of 259
in such cases as previously mentioned, it is not simply
the name generation component of the face-recognition system which is impaired
198 of 259
McKenna and Warrington (1980) - patient
GBL
199 of 259
naming problems seemed to be specific to
faces
200 of 259
was able to accurately supply info about
90% of famous people whose photos she saw, but could only name 15%
201 of 259
named 80% of European cities and
100% of English towns
202 of 259
according to the model, another kind of problem should be
fairly common
203 of 259
if appropriate face recognition unit is activated but the person identity node isn't, then
there should be a feeling of familiarity coupled with an inability to think of any relevant info about the person
204 of 259
in the set of incidents collected by Young et al. (1985), on how many occasions was this reported?
233
205 of 259
further predictions ... 1;
when we look at a familiar face, familiarity info from the face recognition unit should be accessed first
206 of 259
followed by
info about that person from person identity node
207 of 259
followed by
the person's name from the name generation component
208 of 259
basically... familiarity decisions about a face should be made faster than
decisions based on identity nodes
209 of 259
continued... Young et al., 1986b - discovered P's decided whether or not
a face = familiar faster than whether or not it was a politician's
210 of 259
decisions based on person identity nodes should be made faster than
those based on the name generation component
211 of 259
1986a - found P's = faster to decide
whether a face belonged to a politician than to produce the person's name
212 of 259
Cog neuropsychological evidence - practically no brain-damaged p's can
put names to faces without knowing anything else about the person - but several p's show the opposite pattern
213 of 259
Flude et al. (1989) - Patient EST - able to retrieve occupations for
85% of very familiar people when presented with faces, but recall only 15% of names
214 of 259
overview: convincing evidence that - the model of B&Y 1986 provides
coherent account of various kinds of info about faces and ways in which these kinds of info are related to each other
215 of 259
overview: convincing evidence that - several different components =
involved in face processing
216 of 259
overview: convincing evidence that - differences in the processing of familiar and
unfamiliar faces are clearly identified
217 of 259
overview: convincing evidence that - familiar and unfamiliar faces are
typically processed quite differently
218 of 259
overview: convincing evidence that - the model proposed by B&Y 1986 is on the right lines:
a) information about familiar faces accessed sequentially and b) the order in which different kinds of info are accessed also corresponds to theoretical assumptions
219 of 259
The main inadequacies of the model relate to: 1.
insufficient specification of some of the components and processes involved in face recognition:
220 of 259
a) the cognitive system - in B&Y (1986) it serves to
catch all those aspects of processing not reflected in other components of our model
221 of 259
b) the account of the processing of unfamiliar faces is
much less detailed than the one offered for familiar faces
222 of 259
it has been found with both familiar and unfamiliar faces that
the speed and accuracy of recognition is affected by the context in which a face is presented
223 of 259
with familiar faces, contextual information about an individual's occupation/where they have been previously encountered activates the
person identity node - which in turn activates the appropriate face recognition unit, facilitating recognition of the face as familiar
224 of 259
in the case of unfamiliar faces,
context effects have also been found
225 of 259
example - unfamiliar faces are recognised better if they are shown for a second time against
same background context as the first showing
226 of 259
however the previous finding is
hard to incorporate within the model
227 of 259
main inadequacies of the model - 2, evidence is inconsistent with the assumption that
names can be accessed only via relevant autobiographical info stored at the person identity node
228 of 259
amnestic P - ME could match
faces and names of 88% of famous people for whom she was unable to recall any autobiographical info
229 of 259
main inadequacies of the model - 3, it is important for the theory that some P's show better recognition for
familiar faces than unfamiliar faces, whereas others show the opposite pattern
230 of 259
this double dissociation was obtained by
Malone et al 1982 but has proved difficult to replicate
231 of 259
example - Young et al. (1993) - tried unsuccessfully to replicate it, studying
34 brain-damaged men
232 of 259
5 of the p's had
selective impairment of expression analysis - but there was much weaker evidence of selective impairment of familiar or unfamiliar face recognition
233 of 259
Young et al. (1993) - argued that
previous research may have produced misleading conclusions because of methodological limitations
234 of 259
interactive activation and competition model - Burton and Bruce (1993) developed
the Bruce and Young (1986) model
235 of 259
one of the features of the original model is that there is a separate store for names which can only be accessed
via relevant autobiographical info stored at the person identity node
236 of 259
De Haan et al. (1991) contradict this - they investigated
amnestic P - ME
237 of 259
she was able to match faces and names of
88% of famous people for whom she was unable to recall autobiographical information
238 of 259
the fact her PINs were damaged should have prevented
matching names and faces
239 of 259
Burton, Bruce and Johnston (1990) / Burton and Bruce (1993)
revised and developed the Bruce and Young model
240 of 259
assumed there were how many pools of information?
3
241 of 259
1 - FRU
face recognition units - contains stored info about specific faces
242 of 259
2 - PINs
person identity nodes - gateways into semantic info and can be activated by verbal input about people's names as well as by facial input
243 of 259
PINs provide info about the
familiarity of individuals based on either verbal or facial info
244 of 259
3 - SIUs and NRUs
semantic information units and name recognition units - contain semantic info about individuals and their names
245 of 259
there are bi-directional excitatory links between
pools; within each pool, each unit is linked to the others by means of inhibitory connections
246 of 259
a face is recognised as familiar when
level of activity in appropriate PIN reaches threshold level of activation; same mechanism is involved in recognition on basis of name, voice or other information
247 of 259
experimental evidence - the model has been applied to
associative priming effects that have been found with faces
248 of 259
for example; time taken to decide whether face is familiar is reduced when
face of related person is shown immediately beforehand
249 of 259
experimental evidence - according to model the first face activates
SIUs - which feed activation back to the PIN of that face and of related faces
250 of 259
this then speeds up
the familiarity decision for the second face
251 of 259
PINs can be activated by both
names and faces - it follows that associative priming for familiarity decisions on faces should be found when the name of a person (e.g. Prince Philip) is followed by the face of a related person (e.g. Queen Elizabeth)
252 of 259
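A rough toy version of the IAC mechanism behind these priming predictions (my own illustration, not Burton & Bruce's published parameters; all names and numbers are invented): pools of units with bi-directional excitatory links between pools, inhibitory links within a pool, and a familiarity decision when a PIN crosses threshold. Activating Prince Philip's FRU drives his PIN and, via the shared SIU, feeds some activation back to the Queen's PIN, so a later familiarity decision about her face starts from a higher baseline.

```python
UNITS = ["FRU_philip", "FRU_elizabeth",
         "PIN_philip", "PIN_elizabeth",
         "SIU_royal_family"]
EXCITE = {("FRU_philip", "PIN_philip"), ("FRU_elizabeth", "PIN_elizabeth"),
          ("PIN_philip", "SIU_royal_family"), ("PIN_elizabeth", "SIU_royal_family")}

def step(act, inputs, excite=0.1, inhibit=0.05, decay=0.1):
    new = {}
    for u in UNITS:
        net = inputs.get(u, 0.0)
        pool = u.split("_")[0]
        for v in UNITS:
            if v == u:
                continue
            if (u, v) in EXCITE or (v, u) in EXCITE:
                net += excite * act[v]        # bi-directional excitation between pools
            elif v.startswith(pool):
                net -= inhibit * act[v]       # inhibitory competition within a pool
        new[u] = min(1.0, max(0.0, act[u] * (1 - decay) + net))
    return new

def settle(inputs, steps=40):
    act = {u: 0.0 for u in UNITS}
    for _ in range(steps):
        act = step(act, inputs)
    return act

THRESHOLD = 0.8   # a face is judged familiar when its PIN exceeds this level

primed = settle({"FRU_philip": 0.5})          # see Prince Philip's face
print(primed["PIN_philip"] > THRESHOLD)       # True: his face is recognised as familiar
print(round(primed["PIN_elizabeth"], 2))      # > 0: residual activation that primes the Queen's face
```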
differences between IAC model and Bruce & Young's 1986 Model -
3 points
253 of 259
1 - in Burton & Bruce (1993) there is no separate store for names, as names and autobiographical info are both stored in the
SIUs; in Bruce & Young (1986), name info can only be accessed after autobiographical info
254 of 259
2- familiarity decisions made at the
PIN level rather than FRU
255 of 259
3 - model is more
precise
256 of 259
it can account for findings of DeHaan et al 1991 - the fact that
amnesic patient ME could match names to faces in spite of being unable to access autobiographical info is more consistent with Burton and Bruce (1993)
257 of 259
Cohen 1990 found faces produced better
recall of names than of occupations when the names were meaningful and the occupations were meaningless
258 of 259
this couldn't happen according to the Bruce and Young 1986 model, but
poses no problems for the Burton and Bruce (1993) model
259 of 259
