This illustrative example is based on a wire-frame model of a 3D room made using a 3D extension of the simple line-drawing 'definitive notation' donald. (The model itself was built by Andy MacDonald in his third-year project in 1997-8; the projection function he exploited was implemented by Richard Cartwright.) The idea is to provide an environment in which you can experiment to understand the geometric constructions involved in displaying a 3D scene. Good sources for the underlying mathematics are the lecture notes \"05 3D Transforms and Viewing\" and Hearn and Baker, chapter 7 (especially sections 7-2 to 7-4, and 7-8).

"; htmlpage_slide2 is "The 3D room model has been placed in the left-hand window, with some of the original features removed. What remains is a viewing interface through which (by clicking the left mouse in the PLAN and ELEVATION windows) you can select a position on the x-y floor plan of the room and an elevation above that point. This determines the **viewing position** [H&B, p351]. You can observe the effects of changing the viewing position visually, but can also inspect the redefinitions that these interactions effectively carry out. They concern three variables (hereafter called \"observables\") `_x_pos`, `_y_pos` and `_z_pos`, which are the coordinates of the viewing position in the world frame.

Inspection of values is normally carried out in 'eden' input mode - you can select this by prefacing a segment of input by `%eden`, or by selecting the appropriate radio-button in the EDEN interface. For instance, to inspect the observable `_x_pos`, you can either type"//scriptformat("scriptvar1")//"or"//scriptformat("scriptvar2")//"and consult the EDEN output window for the values.
";
scriptvar3 = "/* monitor key viewing observables re position and direction of observing */
proc info: _x_pos, _y_pos, _z_pos, _x_dir, _y_dir, _z_dir {
writeln(_x_pos, \" \",_y_pos, \" \", _z_pos, \" \", _x_dir, \" \",_y_dir, \" \", _z_dir);
}
";
scriptvar4 = "_x_pos = 0; _y_pos = 0; _z_pos = 100;
_x_dir = 100; _y_dir = 100; _z_dir = 0;";
htmlpage_slide3 is "
By clicking the right mouse in the interface windows, you can change the **viewing direction**, both left to right, and up and down. The observables that determine the viewing direction are `_x_dir`, `_y_dir` and `_z_dir`: their values can also be inspected in a similar fashion. To monitor the current values of all six of these observables, so that their values are written out whenever they change, you can introduce a triggered action such as:"//scriptformat("scriptvar3")//"

You can also redefine observables directly through the EDEN input window (this is primarily what you'll have to do in order to explore the viewing process more deeply). For instance, the initial values for the six observables introduced so far can be restored by making the set of redefinitions:"//scriptformat("scriptvar4")//"The use of redefinitions to restore state in this way is a general form of undo that can be used at any time. (You may also find it useful to be able to retrieve previous input to the EDEN window using the key combination `control-alt-uparrow`, and to examine or store to a file the entire history of your interaction using the `View/View history...` option.)

Having understood how to manipulate definitions in this way, you should focus on interpreting the definitions that you make according to how they affect the view. For instance, after a little experimentation you should appreciate that, initially, the viewpoint is halfway up the edge of the room that is facing you, that you are looking out at an angle of 45 degrees horizontally across the room, and that the room is not square.

"; scriptvar5 = "for (i=0; i<=200; i++) { _y_pos = i; eager(); }"; scriptvar6 = "_y_dir is 200-_y_pos;"; htmlpage_slide4 is "
If you want to automate a sequence of updates, you can use a simple form of iteration (first making sure that the loop variable is not itself a key observable in the model!). For instance - from the initial situation - to move the viewing position across to the midpoint of the front wall on the left, you can update the observable `_y_pos` using a simple for-loop:"//scriptformat("scriptvar5")//"[The `eager()` procedure here serves a technical function, ensuring that the changes made in a for-loop are treated as individual redefinitions of state, not as part of a single conceptually atomic procedural update.]

An important feature of the EDEN interpreter is that you can specify dependencies between observables that will then be automatically maintained (as in a spreadsheet). For instance, if you want to change the viewing direction at the same time as you move the viewing position along the left front wall, you can introduce a definition such as:"//scriptformat("scriptvar6")//"and then invoke the same changes to `_y_pos` as before. The behaviour of the arrow in the PLAN window then gives you a visualisation of the viewing strategy to complement the actual display.
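Outside EDEN, the same maintained relationship can be sketched in plain Python (a hedged illustration only - EDEN re-evaluates the dependency automatically, whereas here it is recomputed by hand):

```python
# Hedged Python sketch (not EDEN itself): the dependency
# _y_dir is 200 - _y_pos is re-evaluated after every change to _y_pos.
# EDEN maintains this automatically; here we recompute it by hand to
# exhibit the invariant it preserves.

y_pos = 0
y_dir = lambda: 200 - y_pos   # the dependency, as a thunk

trace = []
for i in range(0, 201, 50):
    y_pos = i
    trace.append((y_pos, y_dir()))

print(trace)   # every pair satisfies y_pos + y_dir == 200
```

The point of the thunk is that `y_dir` is never stored: like an EDEN definition, it is a formula whose value is always current.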

There is one further slightly more complex activity that is useful in exploring the viewing mechanism. It is possible for you to introduce your own geometric elements into the display. For instance, we might like to introduce a point on the floor of the room and paint a line from it into the far left corner of the room."//scriptformat("scriptvar7")//"

"; scriptvar8 = "%eden
/* Specification of view */
view_cen is [_x_pos, _y_pos, _z_pos];
view_norm is [-_x_dir, -_y_dir, -_z_dir];
view_theta is atan2(_y_dir, _x_dir);
view_up is [-_z_dir*cos(view_theta), -_z_dir*sin(view_theta), sqrt(_x_dir*_x_dir + _y_dir*_y_dir)];
eye_dist = 200;
view_width = 200;
view_height = 200;
/* Calculate view plane x and z axes */
view_x is unit(cross(view_up, view_norm));
view_z is unit(view_norm);"; htmlpage_slide6 is "There are two interrelated aspects of the model to be understood: the geometric constructions that can be used to project from 3D into 2D, and the way in which this can be exploited in a fashion that resembles the configuration and use of a camera. We shall consider the geometric construction first.

The projection is obtained by determining two geometric elements: the viewplane and a centre of projection C. You can think of the centre of projection as the point at which the eye is sited, and the viewplane as a screen onto which any point p in 3-space gets mapped by drawing a line through p and C and finding where it meets the viewplane. The line that joins C to the viewing position is normal to the viewplane, and its length is the eye distance from the viewplane.
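This construction can be sketched numerically. The following Python fragment is a hedged illustration, not the model's own code (the model's `project()` is an EDEN function, shown later); it maps a point p onto the viewplane through the centre of projection:

```python
# Hedged Python sketch of the construction just described: map a point p
# through the centre of projection C onto the viewplane.  The viewplane
# passes through view_cen with unit normal n pointing back towards the
# eye, so C = view_cen + eye_dist * n.

def project_point(p, view_cen, u, v, n, eye_dist):
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    x = [p[i] - view_cen[i] for i in range(3)]
    # pers is 1 on the viewplane, 0 at the eye, negative behind it
    pers = 1 - dot(n, x) / eye_dist
    if pers <= 0:
        return None   # point at or behind the eye
    return (dot(u, x) / pers, dot(v, x) / pers)

# a point lying in the viewplane projects to its own (u, v) coordinates:
print(project_point([10, 0, 0], [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], 200))
```

Here u, v and n are the in-plane axes and normal of the viewplane; an axis-aligned basis is chosen purely for illustration.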

The definitions that set up the view plane are already present in the model:

"//scriptformat("scriptvar8")//""; scriptvar9 = "/* Definition of view: [Centre, View axis (x,y,z), Eye distance from plane] */
view is [view_cen, view_x, cross(view_z,view_x), view_z, eye_dist];"; scriptvar10 = "view_y is cross(view_z, view_x);"; scriptvar11 = "view is [view_cen, view_x, view_y, view_z, eye_dist];"; htmlpage_slide7 is "

In the model as it stands, the parameters that determine the view are gathered together in a list called `view`:"//scriptformat("scriptvar9")//"If we introduce the definition:"//scriptformat("scriptvar10")//"then the `view` list can be revised to the form:"//scriptformat("scriptvar11")//"The five parameters here are respectively:

- `view_cen` - the viewing position, which lies in the view plane
- `view_x` [**u**] - the unit vector in the x-direction in the view plane
- `view_y` [**v**] - the unit vector in the y-direction in the view plane
- `view_z` [**n**] - the unit normal to the view plane
- `eye_dist` - the distance from the centre of projection or eyepoint to the view plane

`view_cen` (the viewing position) always appears exactly in the middle of the projected image, and the vectors `view_x` [**u**] and `view_y` [**v**] always point directly rightwards and upwards respectively from the midpoint. To get around this means finding different ways to do the projection - to be explored later!
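As a numerical aside (a hedged Python sketch using the model's initial basis values, not EDEN code), the definition `view_y is cross(view_z, view_x)` indeed completes **u** and **n** into a right-handed orthonormal frame:

```python
# Hedged sketch in plain Python, with values from the model's initial
# state: crossing n with u yields v = [0, 0, 1], pointing straight up.
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

s = 1 / math.sqrt(2)
view_x = [s, -s, 0.0]    # u, the initial in-plane x-axis
view_z = [-s, -s, 0.0]   # n, the initial unit normal
view_y = cross(view_z, view_x)   # v

print(view_y)   # approximately [0, 0, 1]
```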
";
scriptvar12 = "%eden
view_y is cross(view_z, view_x);
view is [view_cen, view_x, view_y, view_z, eye_dist];
/* now put \"physical\" points and lines into the world space
to represent the viewing position and the uvn basis vectors. */
%donald
point proj_view_cen
line xworld
point worldx
xworld = [proj_view_cen, worldx]
line yworld
point worldy
yworld = [proj_view_cen, worldy]
line zworld
point worldz
zworld = [proj_view_cen, worldz]
%eden
axislen = 10;
showaxis = 2;
dispaxislen is (showaxis) ? axislen : 0;
_proj_view_cen is project(view_cen, view);
_worldx is project([_x_pos+view_x[1]*dispaxislen, _y_pos+view_x[2]*dispaxislen, _z_pos+view_x[3]*dispaxislen], view);
A_xworld = \"linewidth=3,arrow=last,color=blue\";
_worldy is project([_x_pos+view_y[1]*dispaxislen, _y_pos+view_y[2]*dispaxislen, _z_pos+view_y[3]*dispaxislen], view);
A_yworld = \"linewidth=3,arrow=last,color=green\";
_worldz is project([_x_pos+view_z[1]*dispaxislen, _y_pos+view_z[2]*dispaxislen, _z_pos+view_z[3]*dispaxislen], view);
A_zworld = \"linewidth=3,arrow=last,color=red\";";
htmlpage_slide8 is ""//scriptformat("scriptvar12")//"The length of the axes being displayed is determined by `axislen`. (Later on it becomes important to make them bigger, as in some contexts they may be problematic to display if their image is too small.) Whether they are displayed or not is determined by whether `showaxis` is non-zero.

We can also add some visualisation to show where the origin of the world coordinates lies: this involves placing a circle of radius `showaxis` at the point [0,0,0] in 3-space, which happens to be at the corner `p1` of the room."//scriptformat("scriptvar13")//"

**The camera analogy**

Before analysing the projection process more closely, it is helpful to explore the camera metaphor. The mental model for the projection that you need is obtained by thinking of what you see on the EDEN display screen as a selected rectangular part of an infinite plane that is physically within the world space. In this case, the rectangular selection is a `view_width` by `view_height` (= 200 by 200) region that is in fact square. The infinite view plane includes the facing edge of the room, and the line from F to the origin of the world space is at right angles to it. You can imagine the (virtual) eye position by supposing that there is a needle of length `eye_dist` pointing out of the screen along the normal from the viewing position. The end of this needle is the position of the eye, and the visible points in world space are those that lie within the rectangular cone based on the screen display with its apex at the eye position, which is also the centre of projection.
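The needle picture can be put into numbers. The following hedged Python sketch (not part of the model) computes the eye position from the model's initial parameter values:

```python
# Hedged sketch putting the needle picture into numbers: with the initial
# parameter values, n = unit(-direction) points back out of the screen,
# and the eye (centre of projection) is C = view_cen + eye_dist * n.
import math

view_cen = [0, 0, 100]       # initial viewing position
direction = [100, 100, 0]    # initial viewing direction
d = math.sqrt(sum(c * c for c in direction))
n = [-c / d for c in direction]   # unit normal towards the viewer
eye_dist = 200

C = [view_cen[i] + eye_dist * n[i] for i in range(3)]
print(C)   # roughly [-141.4, -141.4, 100]
```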

Hearn and Baker discuss many respects in which constructing a projection resembles working with a camera:

- The camera location determines the viewing position
- The camera orientation determines the viewing direction
- The camera aperture determines the width and height of the view.

By manipulating parameters, it is possible to simulate things you can do with a camera:

- Panning across a scene
- Moving the camera whilst directing it at a fixed position
- Scanning in every direction from a fixed camera location

Note that in this panning movement, the edges that define the walls can disappear ... an issue taken up below ...

"; scriptvar15 = "/* cf. Figure 7-20 from H&B p354: viewing an object (a floor-level corner of the room - p3) from different directions using a fixed reference point */
_x_dir is 500-view_cen[1]; ## p3=[500,400,0]
_y_dir is 400-view_cen[2];
_z_dir is 0-view_cen[3];
_x_pos = _y_pos = 0;
for (i=1; i<=100; i++) {
  _x_pos = _x_pos + 3.7;
  _y_pos = _y_pos + 1.5;
  eager();
}"; htmlpage_slide12 is "It is instructive to carry out this scanning process from different locations in the room, and with different values for `_z_dir` in place. For instance, setting `_z_dir = -100;` gives a rotating view that looks down on the room. Note that, as commented, different settings for `eye_dist` may need to be selected if the walls are to be displayed at all times, since an edge attached to a corner behind the eye will not be displayed. For some settings, the effect is also as if the eye were looking through transparent walls meeting at a corner in the foreground. The effect of changing the eye distance from the viewplane is more fully explained below.
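The fixed-reference-point scan can be mimicked in plain Python (a hedged sketch - in EDEN the three direction dependencies are re-derived automatically, so here they are recomputed at each step by hand):

```python
# Hedged Python sketch of the fixed-reference-point scan: as the viewing
# position moves, the direction observables are re-derived so that the
# camera keeps pointing at p3 = [500, 400, 0].

p3 = [500, 400, 0]
x_pos, y_pos, z_pos = 0.0, 0.0, 100.0

positions = []
for i in range(100):
    x_pos += 3.7
    y_pos += 1.5
    # the dependencies _x_dir is 500-view_cen[1], etc., recomputed by hand:
    x_dir = p3[0] - x_pos
    y_dir = p3[1] - y_pos
    z_dir = p3[2] - z_pos
    positions.append((x_pos, y_pos, x_dir, y_dir, z_dir))

# at every step the derived direction vector points from the eye at p3
print(positions[-1])
```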

Another set of interactions with the model is analogous to adjusting the settings on the camera. These include changing the aperture and changing the field of view. The effect of changing the aperture is similar to that of changing the viewing position.

Consider first what happens when the camera is moved further away from the scene:"//scriptformat("scriptvar17")//"and compare this with what happens if the aperture is widened:"//scriptformat("scriptvar18")//"

"; scriptvar19 = "## first reset the view, but maintain a larger aperture:
_x_pos = 0; _y_pos = 0; _z_pos = 100;
view_width = 300;
## set the eye distance to something large:
eye_dist = 20000;"; scriptvar20 = "eye_dist = 20;"; htmlpage_slide15 is "Another operation that affects the screen image is changing the distance between the eye and the viewplane. Making this distance very large narrows the field of view and brings the image closer and closer to a parallel projection.

"//scriptformat("scriptvar19")//"Making the distance between the eye and the view plane small, in contrast, means that the field of view widens, so that the features of the room beyond the view plane occupy less and less of the visual field."//scriptformat("scriptvar20")//"Note that this operation leaves the dimensions of features in the view plane unaltered. "; scriptvar21 = "func project {
  para p, v;
  auto pers, x, y, z;
  x = p[1]-v[1][1];
  y = p[2]-v[1][2];
  z = p[3]-v[1][3];
  pers = 1 - (v[4][1]*x + v[4][2]*y + v[4][3]*z)/v[5];
  if (pers <= 0)
    return [CART, @, @]; /* At or behind eye */
  else
    return [CART,
            (v[2][1]*x + v[2][2]*y + v[2][3]*z)/pers,
            (v[3][1]*x + v[3][2]*y + v[3][3]*z)/pers];
}"; scriptvar22 = "%donald
point floorpoint
%eden
_floorpoint is project([100,100,1], view);"; htmlpage_slide16 is "
To understand the nature of the 3D-to-2D projection, and to appreciate the mathematics behind it, it is helpful to study the projection function itself. The key function used in the 3D-to-2D projection is `project()`. This is encoded as an `eden` function:"//scriptformat("scriptvar21")//"which is called with a triple of coordinates for a point in 3D, and the second parameter set to `view`, as in:"//scriptformat("scriptvar22")//"

To appreciate how `project()` works, it is useful to inspect the values of the vectors **u**, **v** and **n** in the initial situation, and check that this accords with what is seen in the scene. Recall that:"//scriptformat("scriptvar23")//"where the first element in the list is the viewing position, the next three elements are the coordinates of **u**, **v** and **n** respectively, and the fifth element is the eye distance from the viewplane. Reset the model via"//scriptformat("scriptvar24")//"You can then confirm by using `writeln(view);` that the **u**, **v** and **n** basis vectors are [1/sqrt(2), -1/sqrt(2), 0], [0, 0, 1] and [-1/sqrt(2), -1/sqrt(2), 0] respectively.
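These values can be re-derived outside EDEN. The following hedged Python sketch simply transcribes the model's definitions of `view_norm`, `view_up`, `view_x`, `view_z` and `view_y`, applied to the initial parameter values:

```python
# Hedged re-derivation in Python of the u, v, n basis from the model's
# initial parameters, transcribing the EDEN definitions given earlier.
import math

x_dir, y_dir, z_dir = 100, 100, 0   # initial viewing direction

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def unit(a):
    d = math.sqrt(sum(c * c for c in a))
    return [c / d for c in a]

view_norm = [-x_dir, -y_dir, -z_dir]
theta = math.atan2(y_dir, x_dir)
view_up = [-z_dir * math.cos(theta), -z_dir * math.sin(theta),
           math.sqrt(x_dir * x_dir + y_dir * y_dir)]

u = unit(cross(view_up, view_norm))   # view_x
n = unit(view_norm)                   # view_z
v = cross(n, u)                       # view_y

print(u, v, n)   # approximately the three vectors quoted above
```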

Bearing in mind this interpretation and the specific values for the parameter `view`, the function `project()` can now be commented for fuller comprehension:"//scriptformat("scriptvar25")//"In this projection, a key role is played by C, the centre of projection and eye position. Note that, in the computation of `pers`, points p beyond the viewplane have a scalar product of (p - `view_cen`) with **n** that is negative (since **n** points back towards the eye), points on the viewplane have scalar product zero, and points between the eye position and the viewplane have scalar product in the range (0, `eye_dist`). The \"`if (pers<=0)`\" condition ensures that only points in front of the eye position contribute to the view. The resulting mapping is called **perspective projection**.
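The behaviour of `pers` can be tabulated directly (a hedged Python sketch with the viewplane through the origin, **n** = [0, 0, 1] pointing at the eye, and `eye_dist` = 200, so that the scalar product with **n** is a single number):

```python
# Hedged numerical check of the pers ranges just described.

def pers(n_dot_x, eye_dist=200):
    return 1 - n_dot_x / eye_dist   # as computed inside project()

print(pers(-100))   # beyond the viewplane: pers > 1, image shrunk
print(pers(0))      # on the viewplane: pers = 1, true size
print(pers(100))    # between viewplane and eye: 0 < pers < 1, magnified
print(pers(200))    # at the eye: pers = 0, rejected by the pers<=0 test
```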

To highlight the geometric properties of perspective projection, it is useful to inspect configurations in which there are different numbers of **vanishing points**: points to which parallel lines in 3D space converge in the projected image (H&B, p372). To see a configuration with 2 vanishing points, to which the top and bottom of the two opposite pairs of walls in the room converge when suitably extended, set:"//scriptformat("scriptvar26")//"For a configuration with a single vanishing point, set:"//scriptformat("scriptvar27")//"so as to look down the room from the middle of the wall that lies in the plane x=0. To confirm that top and bottom of the walls on the left and right hand-side of the room do converge to a single point, you can \"paint\" a cross on the backwall, thus:"//scriptformat("scriptvar28")//"

Other projections of the room can be derived by modifying the projection function in small ways. For illustrative purposes, take as an initial configuration the situation:"//scriptformat("scriptvar29")//"It is easy to confirm that small positive values for the eye distance from the viewplane give meaningful images, but negative values do not return any image. This is because of the \"`if (pers<=0)`\" condition in the `project()` function.

If we remove this exclusion clause, a more general perspective mapping results."//scriptformat("scriptvar30")//"
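The effect of removing the clause can be seen numerically. The following hedged Python sketch reproduces the projection arithmetic without the exclusion clause, using a hypothetical axis-aligned basis chosen purely for illustration:

```python
# Hedged sketch of the more general mapping: with the pers <= 0 clause
# removed, a point behind the eye still projects, but pers is negative,
# so both screen coordinates change sign and the image is inverted.

def project_general(p, view_cen, u, v, n, eye_dist):
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    x = [p[i] - view_cen[i] for i in range(3)]
    pers = 1 - dot(n, x) / eye_dist   # no exclusion clause any more
    return (dot(u, x) / pers, dot(v, x) / pers)

basis = ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])
print(project_general([50, 20, -200], *basis, eye_dist=200))  # in front: (25.0, 10.0)
print(project_general([50, 20, 400], *basis, eye_dist=200))   # behind: (-50.0, -20.0)
```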

The `project()` function can now be defined to be this more general projection function. A bigger range of values can then be assigned to the eye distance from the view plane. In particular, assigning the values 200, 11, -11, -80 to `eye_dist` will all yield a room display, though the image is inverted when the value is negative (cf. Figure 7-42, H&B, p370). [Note that a small value of `axislen` - set to 10 initially - may cause display problems with small values of `eye_dist` at this point.]

Another natural modification of the projection function can be made simply by treating `v[5]=eye_dist` directly as if it were infinite in the computation of `pers` in `project()`. This gives a direct **orthographic projection** along the line of sight, as if the eye position were infinitely distant from the view plane:"//scriptformat("scriptvar32")//"
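The limiting behaviour can be checked numerically (a hedged Python sketch of the projection arithmetic, not the model's code): as `eye_dist` grows, `pers` tends to 1 and the perspective image tends to the orthographic one:

```python
# Hedged sketch of the orthographic limit: treating eye_dist as infinite
# makes pers identically 1, so the depth term drops out; a finite but
# very large eye_dist approaches the same image.

def project_pers(u_dot, v_dot, n_dot, eye_dist):
    pers = 1 - n_dot / eye_dist
    return (u_dot / pers, v_dot / pers)

def project_ortho(u_dot, v_dot, n_dot):
    return (u_dot, v_dot)   # pers treated as exactly 1

print(project_pers(30, 40, -500, 200))        # perspective: divided by pers = 3.5
print(project_pers(30, 40, -500, 2 * 10**9))  # huge eye_dist: almost orthographic
print(project_ortho(30, 40, -500))            # orthographic: (30, 40)
```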

To see the result of this projection most effectively, it is best to reset the parameters `view_width` and `axislen`:"//scriptformat("scriptvar33")//"

By permuting the u, v and n basis vectors, it is also possible to carry out projections in each of the three orthogonal directions. This is most conveniently achieved by modifying the definition of the list `view` so that its components can be reordered by defining a permutation of {1,2,3}:"//scriptformat("scriptvar34")//"

This simple model illustrates how easily the mind is seduced into projecting rich semantics on to rough-and-ready line drawings. Interaction plays an important part in this, as do dependencies between observables. It is important to appreciate how much is lacking from such a model. There is no substance to the idea that there is a solid wall, or even a panel, where walls appear to be. The fact that lines can be displayed only if both their ends are visible is quite unsatisfactory, and this difficulty is not effectively addressed by increasing the eye distance from the viewplane, since that has other unintended consequences for the visualisation. Likewise, there is nothing by way of occlusion or depth cueing to give the observer clues about what is in front of what, or what might in fact be visible: since walls are not represented even as 2D objects, they can only be transparent, etc. Many of these issues are addressed in more sophisticated modelling environments, but the underlying quality of \"optical illusion\" rather than \"substantial reality\" often still persists. Boundary representation (BRep) models do not capture solid content, for instance, and their surfaces are conjured out of discrete skeletal frames. This deficiency is part of the motivation for alternative modelling methods, such as constructive solid geometry (CSG), and more general frameworks, such as HyperFun.

"; newscriptvar_no=34; html_currentpagename="slide22"; pres_totalslides=22; pres_slideorder=["slide1","slide2","slide3","slide4","slide5","slide6","slide7","slide8","slide9","slide10","slide11","slide12","slide13","slide14","slide15","slide16","slide17","slide18","slide19","slide20","slide21","slide22"]; pres_currentslideno=1;